
Tech

These are the best looks at the DJI Osmo Pocket 4P yet


DJI’s first Pro-tier Osmo Pocket camera has surfaced in dozens of leaked images ahead of its official launch announcement, suggesting the company is already briefing influencers and press as part of a pre-release campaign for the Osmo Pocket 4P.

The images, reported by leaker Igor Bogdanov on X, show the Osmo Pocket 4P being tested alongside both the recently launched Osmo Pocket 4 and the older Osmo Pocket 3, offering a direct size and design comparison between all three generations of the compact vlogging camera line.

That side-by-side framing points to a marketing strategy built around showcasing the 4P as a step above the standard Osmo Pocket 4, which DJI released across most major markets last week as a direct replacement for the Osmo Pocket 3 at a retail price of $499.

The 4P is understood to separate itself from that standard model through a 1-inch primary sensor and a telephoto lens with 3x optical zoom, a hardware combination that would place it closer to DJI’s action camera range in terms of imaging capability than a typical compact vlogging device.


DJI has so far only officially acknowledged the Osmo Pocket 4P in China, though reports suggest the camera could reach the US market under a different brand name, a route the company has previously used to navigate regulatory restrictions on its products in that territory.

The Osmo Pocket 4P launch follows an unusually active stretch for DJI, with the Mic Mini 2, Osmo Mobile 8P, and two new Lito drones all arriving within the same period as the standard Osmo Pocket 4, reflecting a product pipeline the company appears to be clearing ahead of a broader summer cycle.


DJI has not confirmed pricing or a release date for the Osmo Pocket 4P in any market, though the maturity of the influencer testing phase, as seen in the leaked images, suggests an official announcement is unlikely to be far off.



Broken VECT 2.0 ransomware acts as a data wiper for large files


Researchers are warning that the VECT 2.0 ransomware has a flaw in the way it handles encryption nonces that permanently destroys larger files rather than encrypting them.

VECT has been advertised on one of the latest BreachForums iterations, inviting registered users to become affiliates and distributing access keys via private messages to those who showed interest.

At some point, VECT operators announced a partnership with TeamPCP, the threat group responsible for the recent supply-chain attacks impacting Trivy, LiteLLM, and Telnyx, as well as an attack against the European Commission.


In the announcement, VECT operators stated that their goal was to exploit victims of those supply-chain compromises, deploying ransomware payloads in their environments, as well as to conduct larger supply-chain attacks against other organizations.

VECT operators’ post on BreachForums
Source: Check Point

Faulty ransomware

VECT 2.0 splits files above a size threshold into four chunks and encrypts each one separately with its own nonce. While this is meant to increase encryption speed for larger files, all chunk encryptions use the same memory buffer for the nonce output, so each new nonce overwrites the previous one.

Once all chunks are processed, only the last nonce generated remains in memory, and only that one is written to disk.


As a result, the only portion of the file that is recoverable is the last 25%, with the previous three parts being impossible to decrypt, as the nonces have been lost.

Those lost nonces aren’t transmitted to the attacker either, so even if VECT operators wanted to decrypt the files for victims paying the ransom, they wouldn’t be able to.
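That failure mode can be sketched in a few lines of Python. This is a simplified illustration rather than VECT's actual code: the toy hash-based stream cipher, the four-chunk split, and the 12-byte nonce size are all assumptions made for the demo.

```python
import hashlib
import os

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Toy stream cipher (a stand-in for the ransomware's real cipher)."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def buggy_encrypt(plaintext: bytes, key: bytes):
    """Encrypt a 'large' file in four chunks, but write every chunk's
    nonce into the SAME buffer, mimicking the VECT 2.0 flaw."""
    size = len(plaintext) // 4
    nonce_buf = bytearray(12)          # one shared buffer for all nonces
    ciphertext = b""
    for i in range(4):
        nonce_buf[:] = os.urandom(12)  # overwrites the previous nonce
        chunk = plaintext[i * size:(i + 1) * size]
        ciphertext += xor(chunk, keystream(key, bytes(nonce_buf), size))
    # Only the LAST nonce is still in memory when it is written to disk.
    return ciphertext, bytes(nonce_buf)

key = b"k" * 32
ct, saved_nonce = buggy_encrypt(b"AAAABBBBCCCCDDDD", key)

# The last 25% of the file decrypts correctly with the saved nonce...
last = xor(ct[12:], keystream(key, saved_nonce, 4))
# ...but the earlier chunks do not: their nonces were overwritten and lost.
first = xor(ct[:4], keystream(key, saved_nonce, 4))
print(last)   # b'DDDD'
print(first)  # garbage, not b'AAAA'
```

Decrypting with the only surviving nonce recovers just the final chunk; the first three chunks come back as noise, which is exactly the wiper behavior the researchers describe.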

Flawed nonce handling logic
Source: Check Point

The VECT 2.0 ransom note
The VECT 2.0 ransom note
Source: Check Point

Check Point notes that, since most valuable enterprise files, including VM disks, database files, and backups, are above 128 KB, VECT’s impact as a data wiper can be catastrophic in most environments.

“At a threshold of only 128 KB, smaller than a typical email attachment or office document, what the code classifies as a large file encompasses not just VM disks, databases, and backups, but routine documents, spreadsheets, and mailboxes. In practice, almost nothing a victim would care to recover falls below this boundary,” Check Point says.

The researchers found that the same nonce-handling flaw is present across all variants of the VECT 2.0 ransomware, including Windows, Linux, and ESXi, so the same data-wiping behavior applies across all cases.





Sniffies’ Users Worry About a ‘Straightification’ of the Gay Hookup App


Of all the gay hookup apps Brennan Zubrick uses, Sniffies, a cruising app for men interested in discreet sex-positive casual encounters with other men, is by far his favorite. Some of the most popular kinks among members on the platform include edging, cum play, and BDSM. “I overwhelmingly prefer the experience I get and the community I can access,” he tells WIRED. But Zubrick, who is 40 and based in Washington, DC, has a bad feeling that could soon change.

Tinder and Hinge parent company Match Group announced on Monday an investment of $100 million into Sniffies. The deal gives Match Group a large minority share and the option to become the sole owner later on. The announcement has set off a firestorm of reactions from users who are second-guessing the direction of the company and the long-term sustainability of the app.

“Sniffies has long held its market position as the little guy, catering to a specific section of the gay community, and is somewhere people who might not be comfortable with Grindr—where no face-pic, no-chat culture runs rampant—go to connect with other like-minded people in a more direct and discreet way,” Zubrick tells WIRED.

“This partnership is about supporting that, not redefining it,” Sniffies founder and CEO Blake Gallagher said in a statement, noting that the investment will help the platform focus on three key areas users want: “stronger trust and safety, expansive network growth, and continued product improvements.” According to the agreement, Match Group will offer guidance on the right roles, procedures, and tech to help Sniffies build on its trust and safety efforts.


But users aren’t buying what Gallagher is selling. The Instagram post announcing the news was inundated with negative reactions, as users expressed worry over the strategic partnership. “Please don’t let this be the straightification of sniffies,” expressed one. “You sold out. Plain and simple. Where we moving to next boys?” added Marc Sundstrom, a user in Philadelphia. “Partnering with Match feels very gentrified and straight. Highly concerned about the app being allowed to be what it is in order to court investors,” wrote another. By Tuesday afternoon, comments on the post had been shut off.

Though it remains to be seen how Gallagher will position Sniffies in the months ahead, already users are saying this marks the beginning of the end for the app. “Straight people shouldn’t even know what Sniffies is for fuck sake,” one wrote in the r/askgaybros subreddit. And despite promises, some say a major corporation like Match is not ethically aligned with the indie spirit of Sniffies. On LinkedIn, the top comment under Gallagher’s post questioned the real intent behind Match Group’s investment. “Interested to see how ties to Palantir affect Sniffies’ growth. Hopefully this doesn’t become a surveillance application.”

Spencer Rascoff, who became CEO of Match Group in 2025, previously served on the board of Palantir, the defense tech and data mining company that has become a “technological backbone” of the Trump administration.

Sniffies maintains that it will continue to own and control how its user data is stored, handled, and protected. According to the company, there are no changes planned to its data practices as part of the investment.


But the outrage underscores the significance of platforms like Sniffies and what it would mean to a community of people who already feel like they have so few quality options for seeking desire online.

“It’s a mess and obviously to be expected. It’s definitely an indicator of its fast rise, so no shade, but we saw what happened with Grindr,” says Brad Allen, a 34-year-old event producer and the creator behind Club Quarantine, who joined Sniffies in 2023. “I really am pulling for them to somehow navigate this differently since it’s essential to the cruising community now. Hopefully the pop-up Candy Crush ads don’t light up too much in the bushes.”



GMKtec mini PCs are heavily discounted on Amazon right now


Amazon has cut the price on a number of our favorite GMKtec mini PCs, offering big savings on top-rated compact systems that pack serious desktop power into tiny enclosures. These machines cover everything from demanding creative workloads to everyday office setups, making this one of the better mini PC sales I’ve seen recently.

Leading the pack is the GMKtec Nucbox K8 Plus with the Ryzen 7 8845HS CPU, down to $690 (was $950) at Amazon. With 32GB DDR5 RAM, a 512GB PCIe 4.0 SSD, and Radeon 780M graphics, it handles video editing, multitasking, and creative software with ease, while dual 2.5GbE ports and an Oculink interface open the door to external GPU upgrades or ultra-fast storage expansion.


Another model, powered by the Intel Core i3-10110U, is reduced from $460 to $290, making it a great everyday workstation.

It comes with 16GB DDR4 RAM and 512GB storage, supports dual 4K displays, and includes WiFi 6, Bluetooth 5.2, and 2.5GbE networking, which suits office setups, remote workstations, and compact business deployments.


The M5 Ultra with Ryzen 7 7730U and 32GB RAM has dropped from $650 to $500, offering plenty of headroom for demanding multitasking and heavier productivity work.

A 1TB NVMe SSD and triple 4K display support make it well suited for creative professionals who juggle large files, multiple apps, and multi-screen layouts.

There’s another M5 Ultra configuration with the same CPU, which is down from $460 to $400, offering a balanced mix of speed and value.

This model includes 16GB RAM, 512GB storage, dual 2.5GbE networking, and WiFi 6E, delivering the flexibility needed for multitasking workloads, server duties, or professional desktop use in tight spaces.


In our review of the M5 Ultra mini PC, we called it a “quiet, highly flexible option for a general-use PC.”

Finally, if you’re looking for something capable but more affordable, the Ryzen 5 3500U powered Nucbox G10 model is now priced at $300 instead of $400.

With 16GB RAM, a 512GB SSD, and triple 4K display capability, it fits nicely into home offices, media setups, or small business environments that need reliable performance without taking up desk space.

In our review we called it “a terrific little system that can easily be upgraded to handle larger tasks with more memory and storage.”


For more choices, take a look at our roundup of the best mini PCs you can buy.



Ireland has ‘one of the worst disability employment records’, finds report


The ODI’s report urges CEOs and Government leaders to better implement systemic changes to integrate people with disabilities into the Irish workforce.

The Open Doors Initiative (ODI), an NGO that creates opportunities for marginalised people to enter the workforce, has launched the From Awareness to Action: Ireland’s Business and Policy Roadmap to Closing the Disability Employment Gap report. 

Developed in partnership with EY and informed by roundtables with business leaders, policymakers and individuals with lived experience, the report explores how, despite near full employment, Ireland maintains one of the highest disability employment gaps in the European Union.

According to the report, 22pc of people in Ireland live with a disability, yet less than half (49.3pc) of those who are of working age are employed, compared to 70.8pc of people living without a disability. This 21.5pc employment gap is among the largest in the EU, according to the ODI. 


The ODI argues that for businesses this represents a significant missed opportunity: according to previous research from Accenture, companies leading in disability inclusion were found to have 28pc higher revenue and twice the net income.

Commenting on the findings of the report, Jeanne McDonagh, the CEO of ODI said: “Ireland is facing a stark reality. Inaction in tackling this paradox further increases the risk of poverty and social exclusion for members of the disabled community. We can no longer view disability inclusion as a ‘social issue’ managed by the state through welfare. 

“It is a systemic failure within the labour market and a missed economic opportunity for Irish businesses. As a CEO with a disability myself, I stand here to advocate for the hiring of my peers. When barriers are removed and an equitable playing field created, people can work to their full potential.”

Changing the tone

The ODI said there needs to be a “fundamental shift from corporate social responsibility (CSR)”, which is often viewed as “charity”, to corporate social justice, which demands that businesses “actively dismantle systemic barriers within their core operations to ensure equity, dignity and justice”. 


“This involves designing workplaces for human diversity, building trust through transparent data, equipping managers with practical tools and crucially, placing disability representation at the leadership table,” said the ODI.

“The business case for inclusion is clear: diverse and equitable organisations are more adaptive, innovative and resilient,” said McDonagh. “They bring creativity, problem-solving and a different lens, all of which benefits the bottom line and strengthens stakeholder capitalism. This is not simply a matter of compliance, it is a strategic imperative.”

Onwards and upwards

The report offered a roadmap for how change can be implemented across the board, starting with three phases: foundation, embedment and transformation, as well as five priority recommendations for businesses and the Government. 

Among the recommended steps, the ODI’s report calls for the redesigning of recruitment and workplace systems for inclusion by default, wherein companies move beyond what can be seen as initial advocacy and truly embed the core concepts of accessibility and flexibility from the get go. 


The ODI also said there should be efforts to build employer and business trust via clear information and communication. Managers should be equipped with practical and proactive tools that move beyond limited awareness training and instead offer vital resources and clear guidelines. 

The report also suggested that Government action is needed to reduce the financial risks that come with employment for disabled people. This would involve decoupling essential supports from employment status and implementing a permanent, non-means tested cost of disability payment.

Lastly, the ODI suggested an increase in the visibility of disability leadership, wherein there are significant efforts put in to ensure that people with the lived experience of managing a disability are present in leadership and decision-making roles.

“Businesses play a pivotal role in driving this change, but Government initiatives are equally crucial,” said McDonagh. 


“By investing in education, addressing the cost of disability and simplifying support systems, policymakers can empower individuals with disabilities and enrich our society and economy. I urge you to join Open Doors as a partner and help us build on this work, ensuring Ireland becomes a leader in disability inclusion.”




4 Cool Perks You Didn’t Realize Came With Owning A Subaru


Purchasing a new set of wheels can be fun, but many people also find it stressful. There are numerous factors to consider when you buy a new car (or even one that’s new to you). Once you determine your budget, you have to think about the size of the vehicle and cargo capacity, tech and safety features, and even the color. About half of us also tend to stay loyal to the brand we already own — if we find something we like, we tend to stick with it.

According to JD Power, SUV owners tend to be even more loyal to Subaru, with about 60% returning to the brand. The automaker provides perks to entice owners to stay with them, including trade-in and trade up programs that seek to help owners upgrade. But there’s more to enticing owners to stick with a brand than simply assisting them with their next vehicle purchase. To help incentivize repeat purchases and create a sense of community, Subaru offers owners additional exclusive benefits. Whether you own the popular Outback, the electric Solterra, or any other model from the automaker, here are four perks that you might not realize are available to all Subaru owners.


Discounted pet insurance

Your furry friend is an essential part of your family. Whether you share your life with a dog enjoying long walks or games of fetch or a cat that loves to snuggle on your lap and rumble with purrs, there’s no doubt that many of us would do just about anything for our pets. However, the cost of annual check-ups, vaccinations, and unexpected emergencies means those vet bills can pile up fast.

You may have considered pet insurance before and dismissed it due to the cost. However, if you own a Subaru, you can get discounted insurance through Liberty Mutual. The insurance company offers three types of coverage policies: accident, accident and illness, and accident, illness, and wellness. The first includes protection against accidental injury, including ingestion of foreign objects. Illness coverage helps if your pet gets sick and includes alternative medicine, behavioral therapy, and treatment for hereditary and congenital conditions, as well as everything covered under the previous tier. The most comprehensive policy also adds wellness coverage and includes dental cleanings, prescriptions, vaccinations, and more. You can visit Liberty Mutual’s Subaru Pet Insurance site to start your personalized quote and see how much you can save. Be sure to compare to competitors to ensure you’re getting the best deal.


Badge of ownership

A Subaru badge of ownership may not be as exciting as discounts or freebies, but it’s a fun way to show your enthusiasm and interests, whether you’re a first-time Subaru owner or a long-time fan of the Japanese automaker. To build your own custom badge, you can visit the Subaru gear website and enter your VIN, model, and model year. The custom badge itself is free.

The badge first displays how many Subaru vehicles you’ve owned. Then, you can select from dozens of additional icons that best represent your lifestyle, interests, or passions, including breast cancer awareness, camping, teaching, recycling, military service, a love of pets, cycling, and more. Additionally, at the time of writing, there are select Premium badges available for a small fee of $5. These include icons representing pickleball, a musical treble clef, a rainbow peace sign, and more.


The badges aim to connect a community of Subaru owners and also allow each driver to express their individuality, all while celebrating their commitment to the brand.


Trade-in and Trade Up programs

Once you’re ready to move on, there are several things you can do to prepare your vehicle to trade it in for a new model, but Subaru wants to help — provided that you’re already a Subaru owner, that is. The company’s Guaranteed Trade-In Program is designed to give owners a leg up by maximizing trade-in value for their vehicle. Owners simply enter their VIN and their vehicle’s current mileage to get a trade-in quote, though you should note that as of April 2026, the value of your trade-in is based on maximum allowable vehicle mileage at the time of sale. For example, if you own a 2018 model, you cannot have more than 100,000 miles on the car. If your vehicle exceeds that, the trade-in value is reduced by $0.20 for each mile over the limit.

Active leased and commercial vehicles do not qualify for the trade-in program. Additionally, if you live in Hawaii you are not eligible, though all of the other 49 states qualify. If your vehicle was repaired following a collision that required panel or parts replacement, it is also disqualified from the program. Additional criteria and exclusions apply.

The automaker also offers a Trade Up Program, which is intended to help you upgrade to a vehicle with more advanced features and technology. Eligible owners receive a personalized offer tailored for their situation that they can then customize to ensure their new vehicle has everything they want. Subaru also touts the new warranty and lower maintenance costs on upgraded vehicles. To learn more, you can contact a participating retailer.


Financial and other assistance programs

Buying a new car can be stressful, but Subaru offers discounts and other programs to help you afford that new ride. If you’re a teacher, you can take advantage of Subaru’s VIP Educator Program, which rewards active classroom teachers who work in pre-K through grade 12, giving them $500 off the purchase or lease of a new vehicle. A valid ID is required for this discount. Active duty military can also receive a $500 discount off the purchase or lease of a new Subaru. This program is also open to reserve members of the military, all military retirees, and veterans that are within 24 months of separating from the military. The VIP Educator Program and the Military Program cannot be combined with other VIP program offers, but they can be combined with other incentives.

Subaru doesn’t stop there — if you’re about to graduate from college or you recently graduated, you can take advantage of the company’s College Graduate Program for both leases and loans through Subaru Motors Finance. This program is intended to give individuals with limited credit history competitive rates and matches a down payment up to $500. Finally, if you use a mobility device or are a person with a mobility disability, Subaru wants to help you modify your vehicle so that it meets your specialized needs. The automaker’s Mobility Assist Program provides reimbursements up to $1,000 on new vehicles to help pay for those modifications. All Subaru models can be modified for a left-hand gear shifter, hand and foot controls, pedal extensions, and more.



Video service Vimeo confirms Anodot breach exposed user data



Vimeo has disclosed that data belonging to some of its customers and users has been accessed without authorization following the recent breach at the Anodot data anomaly detection company.

The video platform says that the threat actor accessed email addresses for some of its customers, but most of the exposed information included technical data, video titles, and metadata.

“We have identified that, as a result of the Anodot breach, an unauthorized actor accessed certain Vimeo user and customer data. Our initial findings suggest that the databases accessed primarily contain technical data, video titles and metadata, and, in some cases, customer email addresses,” Vimeo states.


The Vimeo breach was claimed by the infamous extortion group ShinyHunters, who threatened to publish the stolen data by April 30 unless the company paid a ransom.

Vimeo is a video hosting and streaming platform, one of the largest alternatives to YouTube, enabling over 300 million registered users to upload, host, and share high-quality videos.


The company employs over 1,100 people, has an annual revenue of $417 million, and is publicly traded on the Nasdaq stock market.

Yesterday, ShinyHunters listed Vimeo on their extortion portal, claiming to have data from the company’s Snowflake and BigQuery instances.

Apart from threatening to leak the data, the actor also issued a warning to the company, stating that the platform should expect “several annoying digital problems.”

ShinyHunters’ post listing Vimeo on the extortion portal
Source: BleepingComputer

The Anodot incident involved attackers stealing authentication tokens and using them to access customer environments, primarily Snowflake, and exfiltrate data from multiple organizations.

The activity has been linked to the ShinyHunters extortion group, which is now attempting to monetize the breach through extortion and by threatening to leak the stolen data from various downstream victims.


One of those victims was game development studio Rockstar Games, with ShinyHunters claiming to have exfiltrated more than 78.6 million records.

In the case of Vimeo, however, the impact remains unclear as the actor did not state the amount of stolen data.

Vimeo has specified that the exposed data does not include video content users uploaded on the platform, account credentials, or payment card information. Also, the platform’s operations remained unaffected.

The company has now disabled all Anodot credentials and removed the service’s integration with its systems.


Vimeo is now investigating the incident with the help of third-party security experts and has also notified law enforcement authorities.

The firm promised to provide updates if the investigation uncovers important new information about the incident.





Sparse AI Hardware Slashes Energy and Latency


When it comes to AI models, size matters.

Even though some artificial-intelligence experts warn that scaling up large language models (LLMs) is hitting diminishing performance returns, companies are still coming out with ever larger AI tools. Meta’s latest Llama release had a staggering 2 trillion parameters that define the model.

As models grow in size, their capabilities increase. But so do the energy demands and the time it takes to run the models, which increases their carbon footprint. To mitigate these issues, people have turned to smaller, less capable models and to using lower-precision numbers whenever possible for the model parameters.

But there is another path that may retain a staggeringly large model’s high performance while reducing both its run time and its energy footprint. This approach involves befriending the zeros inside large AI models.


For many models, most of the parameters—the weights and activations—are actually zero, or so close to zero that they could be treated as such without losing accuracy. This quality is known as sparsity. Sparsity offers a significant opportunity for computational savings: Instead of wasting time and energy adding or multiplying zeros, these calculations could simply be skipped; rather than storing lots of zeros in memory, one need only store the nonzero parameters.

Unfortunately, today’s popular hardware, like multicore CPUs and GPUs, do not naturally take full advantage of sparsity. To fully leverage sparsity, researchers and engineers need to rethink and re-architect each piece of the design stack, including the hardware, low-level firmware, and application software.

In our research group at Stanford University, we have developed the first (to our knowledge) piece of hardware that’s capable of calculating all kinds of sparse and traditional workloads efficiently. The energy savings varied widely over the workloads, but on average our chip consumed one-seventieth the energy of a CPU, and performed the computation on average eight times as fast. To do this, we had to engineer the hardware, low-level firmware, and software from the ground up to take advantage of sparsity. We hope this is just the beginning of hardware and model development that will allow for more energy-efficient AI.

What is sparsity?

Neural networks, and the data that feeds into them, are represented as arrays of numbers. These arrays can be one-dimensional (vectors), two-dimensional (matrices), or more (tensors). A sparse vector, matrix, or tensor has mostly zero elements. The level of sparsity varies, but when zeroes make up more than 50 percent of any type of array, it can stand to benefit from sparsity-specific computational methods. In contrast, an object that is not sparse—that is, it has few zeros compared with the total number of elements—is called dense.


Sparsity can be naturally present, or it can be induced. For example, a social-network graph will be naturally sparse. Imagine a graph where each node (point) represents a person, and each edge (a line segment connecting the points) represents a friendship. Since most people are not friends with one another, a matrix representing all possible edges will be mostly zeros. Other popular applications of AI, such as other forms of graph learning and recommendation models, contain naturally occurring sparsity as well.
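The friendship-graph example can be made concrete in a few lines of Python. The people, friendships, and numbers here are made up for illustration; the point is that even a modest graph yields a mostly zero adjacency matrix.

```python
# Hypothetical friendship graph: 6 people, 4 friendships.
n = 6
edges = [(0, 1), (0, 2), (1, 3), (4, 5)]

# Dense adjacency matrix: n x n entries, symmetric, mostly zero.
adj = [[0] * n for _ in range(n)]
for a, b in edges:
    adj[a][b] = adj[b][a] = 1   # a friendship fills two symmetric entries

zeros = sum(row.count(0) for row in adj)
sparsity = zeros / (n * n)
print(f"{sparsity:.0%} of the adjacency matrix is zero")  # 78%
```

With only 8 of 36 entries nonzero, this matrix easily clears the 50 percent threshold at which sparsity-specific methods start to pay off.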

Diagram mapping a sparse matrix to a fibertree and compressed storage format

Normally, a four-by-four matrix takes up 16 spaces in memory, regardless of how many zero values there are. If the matrix is sparse, meaning a large fraction of the values are zero, the matrix is more effectively represented as a fibertree: a “fiber” of i coordinates representing rows that contain nonzero elements, connected to fibers of j coordinates representing columns with nonzero elements, finally connecting to the nonzero values themselves. To store a fibertree in computer memory, the “segments,” or endpoints, of each fiber are saved alongside the coordinates and the values.

Beyond naturally occurring sparsity, sparsity can also be induced within an AI model in several ways. Two years ago, a team at Cerebras showed that one can set as much as 70 to 80 percent of the parameters in an LLM to zero without losing any accuracy. Cerebras demonstrated these results specifically on Meta’s open-source Llama 7B model, but the ideas extend to other LLMs like ChatGPT and Claude.

The case for sparsity

Sparse computation’s efficiency stems from two fundamental properties: the ability to compress away zeros and the convenient mathematical properties of zeros. Both the algorithms used in sparse computation and the hardware dedicated to them leverage these two basic ideas.

First, sparse data can be compressed, making it more memory efficient to store “sparsely”—that is, in something called a sparse data type. Compression also makes it more energy efficient to move data when dealing with large amounts of it. This is best understood by an example. Take a four-by-four matrix with three nonzero elements. Traditionally, this matrix would be stored in memory as is, taking up 16 spaces. This matrix can also be compressed into a sparse data type, getting rid of the zeros and saving only the nonzero elements. In our example, this results in 13 memory spaces as opposed to 16 for the dense, uncompressed version. These savings in memory increase with increased sparsity and matrix size.

Diagram comparing dense and sparse matrix–vector multiplication step by step.

Multiplying a vector by a matrix traditionally takes 16 multiplication steps and 16 addition steps. With a sparse number format, the computational cost depends on the number of overlapping nonzero values in the problem. Here, the whole computation is accomplished in three lookup steps and two multiplication steps.

In addition to the actual data values, compressed data also requires metadata. The row and column locations of the nonzero elements also must be stored. This is usually thought of as a “fibertree”: The row labels containing nonzero elements are listed and linked to the column labels of the nonzero elements, which are then linked to the values stored in those elements.

In memory, things get a bit more complicated still: The row and column labels for each nonzero value must be stored as well as the “segments” that indicate how many such labels to expect, so the metadata and data can be clearly delineated from one another.

In a dense, noncompressed matrix data type, values can be accessed either one at a time or in parallel, and their locations can be calculated directly with a simple equation. However, accessing values in sparse, compressed data requires looking up the coordinates of the row index and using that information to "indirectly" look up the coordinates of the column index before finally reaching the value. Depending on the actual locations of the sparse data values, these indirect lookups can be extremely random, making the computation data-dependent and requiring memory lookups to be issued on the fly.

Second, two mathematical properties of zero let software and hardware skip a lot of computation. Multiplying any number by zero will result in a zero, so there’s no need to actually do the multiplication. Adding zero to any number will always return that number, so there’s no need to do the addition either.

In matrix-vector multiplication, one of the most common operations in AI workloads, all computations except those involving two nonzero elements can simply be skipped. Take, for example, the four-by-four matrix from the previous example and a vector of four numbers. In dense computation, each element of the vector must be multiplied by the corresponding element in each row and then added together to compute the final vector. In this case, that would take 16 multiplication operations and 16 additions (or four accumulations).

In sparse computation, only the nonzero elements of the vector need be considered. For each nonzero vector element, indirect lookup can be used to find any corresponding nonzero matrix element, and only those need to be multiplied and added. In the example shown here, only two multiplication steps will be performed, instead of 16.
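This skipping logic can be sketched as follows, using a plain dictionary of nonzero entries as a stand-in for the compressed formats described above (an illustrative simplification, not production code):

```python
def sparse_mv(matrix_nz, vector, nrows=4):
    """Matrix-vector product over nonzeros only. `matrix_nz` maps
    row -> {column: value} and holds just the nonzero entries, so every
    multiply-by-zero and add-of-zero is skipped outright."""
    nz_vec = {j: x for j, x in enumerate(vector) if x != 0}  # compress the vector
    result = [0] * nrows
    for i, cols in matrix_nz.items():
        for j, v in cols.items():
            if j in nz_vec:                  # intersect nonzero coordinates
                result[i] += v * nz_vec[j]   # the only multiplications performed
    return result

m = {1: {0: 5}, 2: {2: 2, 3: 8}}   # a four-by-four matrix with three nonzeros
v = [0, 0, 4, 1]                   # a vector with two nonzeros
print(sparse_mv(m, v))             # [0, 0, 16, 0] -- two multiplies instead of 16
```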

The trouble with GPUs and CPUs

Unfortunately, modern hardware is not well suited to accelerating sparse computation. For example, say we want to perform a matrix-vector multiplication. In the simplest case, in a single CPU core, each element in the vector would be multiplied sequentially and then written to memory. This is slow, because we can do only one multiplication at a time. So instead people use CPUs with vector support or GPUs. With this hardware, all elements would be multiplied in parallel, greatly speeding up the application. Now, imagine that both the matrix and vector contain extremely sparse data. The vectorized CPU and GPU would spend most of their efforts multiplying by zero, performing completely ineffectual computations.

Newer generations of GPUs are capable of taking some advantage of sparsity in their hardware, but only a particular kind, called structured sparsity. Structured sparsity assumes that two out of every four adjacent parameters are zero. However, some models benefit more from unstructured sparsity—the ability for any parameter (weight or activation) to be zero and compressed away, regardless of where it is and what it is adjacent to. GPUs can run unstructured sparse computation in software, for example, through the use of the cuSparse GPU library. However, the support for sparse computations is often limited, and the GPU hardware gets underutilized, wasting energy-intensive computations on overhead.
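For reference, the structured pattern supported in recent GPU hardware (NVIDIA's 2:4 sparse tensor cores) requires at least two zeros in every group of four adjacent weights, a constraint that is simple to check:

```python
def is_2_4_structured(weights):
    """True if every group of four consecutive weights contains at least
    two zeros -- the 2:4 structured-sparsity pattern."""
    return all(
        weights[i:i + 4].count(0) >= 2
        for i in range(0, len(weights), 4)
    )

print(is_2_4_structured([0, 3, 0, 7,  1, 0, 0, 2]))  # True
print(is_2_4_structured([1, 3, 0, 7,  0, 0, 5, 0]))  # False: the first group has only one zero
```

Unstructured sparsity, by contrast, places no such constraint on where the zeros fall, which is exactly what makes it hard for GPUs to exploit.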

Neon pixel art of a glowing portal framed by geometric stairs and circuitry lines. Petra Péterffy

When doing sparse computations in software, modern CPUs may be a better alternative to GPU computation, because they are designed to be more flexible. Yet, sparse computations on the CPU are often bottlenecked by the indirect lookups used to find nonzero data. CPUs are designed to “prefetch” data based on what they expect they’ll need from memory, but for randomly sparse data, that process often fails to pull in the right stuff from memory. When that happens, the CPU must waste cycles calling for the right data.

Apple was the first to speed up these indirect lookups, adding support for an array-of-pointers access pattern to the prefetcher of its A14 and M1 chips. Although innovations in prefetching make Apple CPUs more competitive for sparse computation, CPU architectures still carry fundamental overheads that a dedicated sparse computing architecture would not, because they must handle general-purpose computation.

Other companies have been developing hardware that accelerates sparse machine learning as well. These include Cerebras's Wafer Scale Engine and Meta's Training and Inference Accelerator (MTIA). The Wafer Scale Engine and its corresponding sparse programming framework have demonstrated results at up to 70 percent sparsity on LLMs. However, the company's hardware and software support only weight sparsity, not activation sparsity, which is important for many applications. The second version of the MTIA claims a sevenfold sparse-compute performance boost over the MTIA v1. However, the only publicly available information regarding sparsity support in the MTIA v2 covers matrix multiplication, not vector or tensor operations.

Although matrix multiplications take up the majority of computation time in most modern ML models, it’s important to have sparsity support for other parts of the process. To avoid switching back and forth between sparse and dense data types, all of the operations should be sparse.

Onyx

Instead of these halfway solutions, our team at Stanford has developed a hardware accelerator, Onyx, that can take advantage of sparsity from the ground up, whether it’s structured or unstructured. Onyx is the first programmable accelerator to support both sparse and dense computation; it’s capable of accelerating key operations in both domains.

To understand Onyx, it is useful to know what a coarse-grained reconfigurable array (CGRA) is and how it compares with more familiar hardware, like CPUs and field-programmable gate arrays (FPGAs).

CPUs, CGRAs, and FPGAs represent a trade-off between efficiency and flexibility. Each individual logic unit of a CPU is designed for a specific function that it performs efficiently. On the other hand, since each individual bit of an FPGA is configurable, these arrays are extremely flexible, but very inefficient. The goal of CGRAs is to achieve the flexibility of FPGAs with the efficiency of CPUs.

CGRAs are composed of efficient and configurable units, typically memory and compute, that are specialized for a particular application domain. This is the key benefit of this type of array: Programmers can reconfigure the internals of a CGRA at a high level, making it more efficient than an FPGA but more flexible than a CPU.

Two circuit boards and a pen showing a chip shrinking from large to tiny size. The Onyx chip, built on a coarse-grained reconfigurable array (CGRA), is the first (to our knowledge) to support both sparse and dense computations. Olivia Hsu

Onyx is composed of flexible, programmable processing element (PE) tiles and memory (MEM) tiles. The memory tiles store compressed matrices and other data formats. The processing element tiles operate on compressed matrices, eliminating all unnecessary and ineffectual computation.

The Onyx compiler handles the conversion from software instructions to a CGRA configuration. First, the input expression—for instance, a sparse vector multiplication—is translated into a graph of abstract memory and compute nodes. In this example, there are memories for the input and output vectors, a compute node for finding the intersection between nonzero elements, and a compute node for the multiplication. The compiler figures out how to map the abstract memory and compute nodes onto the MEMs and PEs of the CGRA, and then how to route data between them. Finally, the compiler produces the instruction set needed to configure the CGRA for the desired purpose.
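As a toy illustration of that flow (our own sketch, not the actual Onyx toolchain), the abstract graph for a sparse matrix-vector product and a naive greedy placement onto tiles might look like this:

```python
# Abstract dataflow graph for a sparse matrix-vector multiplication:
# two memories feed an intersection unit, whose surviving value pairs
# are multiplied and written to an output memory.
graph = [
    ("mem",  "matrix_A"),    # compressed input matrix
    ("mem",  "vector_b"),    # compressed input vector
    ("comp", "intersect"),   # find overlapping nonzero coordinates
    ("comp", "multiply"),    # multiply the surviving value pairs
    ("mem",  "vector_x"),    # output memory
]

def place(nodes, n_pe=16, n_mem=16):
    """Greedy placement: memory nodes onto MEM tiles, compute nodes onto PE tiles."""
    pe, mem = iter(range(n_pe)), iter(range(n_mem))
    return {name: ("MEM", next(mem)) if kind == "mem" else ("PE", next(pe))
            for kind, name in nodes}

print(place(graph))
```

A real compiler must also route data between the chosen tiles and respect the chip's limited interconnect, which is where most of the difficulty lies.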

Since Onyx is programmable, engineers can map many different operations, such as vector-vector element multiplication, or the key tasks in AI, like matrix-vector or matrix-matrix multiplication, onto the accelerator.

We evaluated the efficiency gains of our hardware by looking at the product of the energy used and the time it took to compute, called the energy-delay product (EDP). This metric captures the trade-off between speed and energy. Minimizing energy alone would lead to very slow devices, while minimizing delay alone would lead to large, power-hungry ones.
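EDP itself is just the product of the two quantities, and reported improvement factors are ratios of EDPs. The numbers below are made up purely for illustration:

```python
def energy_delay_product(energy_joules, delay_seconds):
    """EDP rewards designs that are both fast and frugal: halving either
    the energy or the runtime halves the metric."""
    return energy_joules * delay_seconds

# Hypothetical numbers for illustration only:
baseline = energy_delay_product(10, 2)      # e.g., a CPU run: 10 J over 2 s
accelerated = energy_delay_product(2, 1)    # e.g., an accelerator: 2 J over 1 s
print(baseline / accelerated)               # 10.0 -- improvement factors are EDP ratios like this
```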

Onyx achieves up to a 565-fold improvement in energy-delay product over a CPU (we used a 12-core Intel Xeon) running dedicated sparse libraries. Onyx can also be configured to accelerate regular, dense applications, similar to the way a GPU or TPU would. If the computation is sparse, Onyx is configured to use sparse primitives; if the computation is dense, Onyx is reconfigured to take advantage of parallelism, similar to how GPUs function. This architecture is a step toward a single system that can accelerate both sparse and dense computations on the same silicon.

Just as important, Onyx enables new algorithmic thinking. Sparse acceleration hardware will not only make AI more performance- and energy efficient but also enable researchers and engineers to explore new algorithms that have the potential to dramatically improve AI.

The future with sparsity

Our team is already working on next-generation chips that build on Onyx. Beyond matrix multiplication operations, machine learning models perform other types of math, like nonlinear layers, normalization, the softmax function, and more. We are adding support for the full range of computations to our next-gen accelerator and its compiler. Since sparse machine learning models may have both sparse and dense layers, we are also working on integrating the dense and sparse accelerator architectures more efficiently on the chip, allowing for fast transformation between the different data types. We're also looking at ways to manage memory constraints by partitioning sparse data more effectively so we can run computations across several sparse accelerator chips.

We are also working on systems that can predict the performance of accelerators such as ours, which will help in designing better hardware for sparse AI. Longer term, we’re interested in seeing whether high degrees of sparsity throughout AI computation will catch on with more model types, and whether sparse accelerators become adopted at a larger scale.

Building hardware to support unstructured sparsity and optimally take advantage of zeros is just the beginning. With this hardware in hand, AI researchers and engineers will have the opportunity to explore new models and algorithms that leverage sparsity in novel and creative ways. We see this as a crucial research area for managing the ever-increasing runtime, costs, and environmental impact of AI.


Samsung’s Leaked Galaxy Jinju Smartglasses Promise Everyday AI Without the Clutter


Samsung Galaxy Glasses Smartglasses Jinju Leak
Photo credit: Android Headlines | OnLeaks
Leaked images purportedly show Samsung's first pair of Galaxy smartglasses, codenamed Jinju, in stunning detail, and they appear to give the Ray-Ban Meta some heavy competition. They're reportedly light on the face, too, weighing roughly 50 grams and looking so much like regular eyeglasses that only the inconspicuous camera bulge and tiny Samsung logo give them away.



Thin temples run along the sides, concealing the technology entirely; no big modules or bulky extras here. The lenses can automatically adjust to light levels thanks to photochromic technology, and the frames take design inspiration from the classic styles that Warby Parker and Gentle Monster have helped popularize. If you catch a glimpse of someone wearing them, you'll only see a pair of regular spectacles.


Look closer, though, and you'll find a Qualcomm Snapdragon AR1 processor paired with a 155-milliamp-hour battery, all discreetly hidden away in each temple, plus dual 12-megapixel Sony sensors front and center, ready to take photos or feed visual data into the system. Audio comes from bone-conduction speakers, which deliver crisp sound directly to your ears without spilling it into the environment. Wi-Fi and Bluetooth 5.3 connectivity round things out, convenient and uncomplicated.

For the most part, interactions are handled by Gemini, which responds readily to voice instructions; give it a quick prompt and it will bring up the weather, direct you to the next subway station, or translate any signage you see. The cameras also gather context, so the answers stay relevant to whatever you're looking at in the real world. The nicest part is that there are no floating overlays or screens to distract from the overall simplicity of the experience.

Samsung built these glasses around its new Android XR platform, which also powers the Galaxy XR headset. Pairing is simple with your Galaxy phone, watch, or other existing gear, and you can access all of your familiar apps without learning anything new. Early code for One UI 9 has already mentioned these frames under the model IDs SM-O200P and SM-O200J, pointing to deep integration with the larger Samsung ecosystem.

In terms of price, we're looking at between $379 and $499 depending on the final configuration, which places them somewhere between basic frames and high-end rivals. Then there's the Haean, a higher-end second model set to debut in 2027 with a micro-LED display and a price tag between $600 and $900. If you want visual information displayed directly in front of your eyes, that's the one to wait for. Samsung plans to reveal the Jinju glasses later this year, likely at its summer Unpacked event, alongside some new foldables.

‘The House of the Spirits’: When to Watch the New TV Adaptation on Prime Video


Blending history with the supernatural, best-selling novel The House of the Spirits launched the literary career of Isabel Allende back in the 1980s and went on to become a staple in school curriculums around the world. 

The story explores themes of classism, politics and surrealism as it centers on a family's multigenerational line of women: Clara, Blanca and Alba. It's now set to get the big-budget treatment with a lavish eight-episode adaptation hitting Prime Video this week. 

What’s the plot for The House of the Spirits?

Set at the turn of the 20th century in a fictionalized version of Chile, the sprawling tale follows three generations of the Trueba family as they traverse societal, economic and personal struggles, including secret romances and violent social upheaval.

Leading the cast is Rebel Moon star Alfonso Herrera as strict patriarch Esteban Trueba. Nicole Wallace and Dolores Fonzi share the role of Esteban’s clairvoyantly gifted wife, Clara del Valle, through various times in her life.

It’s not the first time that the beloved book has received the on-screen treatment, with the novel receiving a movie adaptation back in 1993 with an all-star cast that included Meryl Streep, Winona Ryder and Jeremy Irons. However, this new Spanish-language version looks set to be a more authentic interpretation of the source material. 

How to watch The House of the Spirits

This eight-episode series is exclusive to Prime Video and premieres with its first three episodes on Wednesday, April 29. A new episode will be released each Wednesday on the streaming service, with the final chapter set to arrive on June 3.


Prime Video’s standard service comes with ad breaks for US viewers. If you want to go ad-free, there’s an additional $5 monthly fee. This option is available to both Amazon Prime subscribers and those with a standalone Prime Video membership. For more information about the streamer, check out our review.


2026 Green Powered Challenge: Supercapacitor Enables High-Power IoT


With all the battery technologies and modern low-current sleep modes in most microcontrollers, running a sensor and microcontroller combo off-grid and far away from any infrastructure is usually not too difficult a task. Often these sorts of systems can go years without maintenance or interaction. But for something that still has to be off-grid yet needs to do real work every now and then, like actuating a solenoid or quickly turning a servo, these battery-based systems can quickly run out of juice. To solve that problem, [Nelectra] has come up with this high-power supercapacitor-based IoT system.

Although supercapacitors don't tend to have the energy density of batteries, they're perfectly capable of powering short tasks in off-grid situations like this. They also typically tolerate lower voltages, extreme temperatures, and shock better than most batteries. A small solar cell on the top of this device keeps it topped up, and in deep sleep mode the system can hold a charge for up to six days. In more real-world applications supporting sensors, relays, or other actuators, [Nelectra] has found that it holds a charge for around three days. When a quick burst of power is needed, it can deliver 1.5 A at 9 V or 500 mA at 24 V.
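Those burst figures correspond to 12 to 13.5 watts of delivered power, and the classic capacitor energy formula shows how that translates into runtime. A quick back-of-the-envelope sketch (the 100-farad capacitance is our assumption for illustration, not a figure from the build):

```python
def burst_power(volts, amps):
    """Instantaneous power delivered during a burst, in watts."""
    return volts * amps

def stored_energy(capacitance_farads, volts):
    """Energy held by a capacitor: E = 0.5 * C * V^2, in joules."""
    return 0.5 * capacitance_farads * volts ** 2

print(burst_power(9, 1.5))    # 13.5 W
print(burst_power(24, 0.5))   # 12.0 W

# Hypothetical 100 F supercapacitor charged to 9 V:
energy = stored_energy(100, 9)        # 4050.0 J
print(energy / burst_power(9, 1.5))   # 300.0 s -- about five minutes of continuous 13.5 W output
```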

[Nelectra]'s stated goal for this build is to bridge low-power energy harvesting and practical field actuation, enabling maintenance-free systems such as irrigation control and remote switching without batteries, going beyond simple sensor applications while not relying on always-on power from somewhere else. Something like this would work well in projects such as this automated farm, which has already produced some unique solutions for intermittent power and for microcontroller applications that need very high reliability.


Copyright © 2025