
Amazon slashes $200 off M5 Max 16-inch MacBook Pro with month-end deal


Pick up Apple’s new 16-inch MacBook Pro with M5 Max for $3,699 thanks to a triple-digit price cut at Amazon.

Save $200 on Apple’s M5 Max MacBook Pro 16-inch.

Amazon’s month-end 2026 MacBook Pro sale is in effect, with triple-digit savings on multiple models. This M5 Max 16-inch configuration is enticing at $200 off, bringing the price down to $3,699.



Sparse AI Hardware Slashes Energy and Latency


When it comes to AI models, size matters.

Even though some artificial-intelligence experts warn that scaling up large language models (LLMs) is hitting diminishing performance returns, companies are still coming out with ever larger AI tools. Meta’s latest Llama release had a staggering 2 trillion parameters that define the model.

As models grow in size, their capabilities increase. But so do the energy demands and the time it takes to run the models, which increases their carbon footprint. To mitigate these issues, people have turned to smaller, less capable models and to using lower-precision numbers for the model parameters whenever possible.

But there is another path that may retain a staggeringly large model’s high performance while reducing both the time it takes to run and its energy footprint. This approach involves befriending the zeros inside large AI models.


For many models, most of the parameters—the weights and activations—are actually zero, or so close to zero that they could be treated as such without losing accuracy. This quality is known as sparsity. Sparsity offers a significant opportunity for computational savings: Instead of wasting time and energy adding or multiplying zeros, these calculations could simply be skipped; rather than storing lots of zeros in memory, one need only store the nonzero parameters.

Unfortunately, today’s popular hardware, like multicore CPUs and GPUs, does not naturally take full advantage of sparsity. To fully leverage sparsity, researchers and engineers need to rethink and re-architect each piece of the design stack, including the hardware, low-level firmware, and application software.

In our research group at Stanford University, we have developed the first (to our knowledge) piece of hardware that’s capable of calculating all kinds of sparse and traditional workloads efficiently. The energy savings varied widely across workloads, but on average our chip consumed one-seventieth the energy of a CPU and performed the computation eight times as fast. To do this, we had to engineer the hardware, low-level firmware, and software from the ground up to take advantage of sparsity. We hope this is just the beginning of hardware and model development that will allow for more energy-efficient AI.

What is sparsity?

Neural networks, and the data that feeds into them, are represented as arrays of numbers. These arrays can be one-dimensional (vectors), two-dimensional (matrices), or more (tensors). A sparse vector, matrix, or tensor has mostly zero elements. The level of sparsity varies, but when zeros make up more than 50 percent of an array, it stands to benefit from sparsity-specific computational methods. In contrast, an object that is not sparse—that is, it has few zeros compared with the total number of elements—is called dense.
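To make that definition concrete, here is a minimal Python sketch (our own illustration with a made-up matrix, not code from the researchers) that measures the sparsity of an array using NumPy:

```python
import numpy as np

def sparsity(array):
    """Fraction of elements that are exactly zero."""
    return float(np.mean(array == 0))

m = np.array([[0, 0, 4, 0],
              [0, 0, 0, 0],
              [1, 0, 0, 0],
              [0, 2, 0, 0]])

print(sparsity(m))  # 0.8125 -> sparse, since well over 50 percent of entries are zero
```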


Sparsity can be naturally present, or it can be induced. For example, a social-network graph will be naturally sparse. Imagine a graph where each node (point) represents a person, and each edge (a line segment connecting the points) represents a friendship. Since most people are not friends with one another, a matrix representing all possible edges will be mostly zeros. Other popular applications of AI, such as other forms of graph learning and recommendation models, contain naturally occurring sparsity as well.

Diagram mapping a sparse matrix to a fibertree and compressed storage format

Normally, a four-by-four matrix takes up 16 spaces in memory, regardless of how many zero values there are. If the matrix is sparse, meaning a large fraction of the values are zero, the matrix is more effectively represented as a fibertree: a “fiber” of i coordinates representing rows that contain nonzero elements, connected to fibers of j coordinates representing columns with nonzero elements, finally connecting to the nonzero values themselves. To store a fibertree in computer memory, the “segments,” or endpoints, of each fiber are saved alongside the coordinates and the values.
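The fibertree layout described in the caption maps closely onto classic compressed-sparse-row storage. Below is a hedged sketch of that idea for one fiber level, where “segments” delimit each row’s slice of the coordinate and value arrays; it is a simplification for illustration, not the researchers’ exact format, and the stored-entry count depends on which fibertree variant is used.

```python
import numpy as np

def compress_rows(dense):
    """Compress a dense matrix into segments, column coordinates, and values.

    segments[r]:segments[r+1] is row r's slice of coords/values.
    """
    segments = [0]
    coords, values = [], []
    for row in dense:
        for c, v in enumerate(row):
            if v != 0:
                coords.append(c)
                values.append(int(v))
        segments.append(len(values))
    return segments, coords, values

A = np.array([[0, 0, 4, 0],
              [0, 0, 0, 0],
              [1, 0, 0, 0],
              [0, 2, 0, 0]])

segments, coords, values = compress_rows(A)
print(segments)  # [0, 1, 1, 2, 3]
print(coords)    # [2, 0, 1]
print(values)    # [4, 1, 2]
# 5 + 3 + 3 = 11 stored entries here versus 16 for the dense layout; a
# fibertree that also stores row coordinates lands at a slightly different count.
```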

Beyond naturally occurring sparsity, sparsity can also be induced within an AI model in several ways. Two years ago, a team at Cerebras showed that one can set 70 to 80 percent of the parameters in an LLM to zero without losing any accuracy. Cerebras demonstrated these results on Meta’s open-source Llama 7B model, but the ideas extend to other LLMs like ChatGPT and Claude.
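As a rough illustration of inducing sparsity, the sketch below performs hypothetical one-shot magnitude pruning, zeroing the smallest 70 percent of a weight matrix. Real pipelines, such as the Cerebras work described above, prune far more carefully (typically gradually, with retraining) to preserve accuracy.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.7):
    """Zero out the smallest-magnitude fraction of weights (unstructured)."""
    k = int(sparsity * weights.size)
    threshold = np.sort(np.abs(weights), axis=None)[k]
    return np.where(np.abs(weights) < threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.standard_normal((512, 512))
w_sparse = magnitude_prune(w, sparsity=0.7)

print(float(np.mean(w_sparse == 0)))  # ~0.7 -- 70 percent of weights are now zero
```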

The case for sparsity

Sparse computation’s efficiency stems from two fundamental properties: the ability to compress away zeros and the convenient mathematical properties of zeros. Both the algorithms used in sparse computation and the hardware dedicated to them leverage these two basic ideas.


First, sparse data can be compressed, making it more memory efficient to store “sparsely”—that is, in something called a sparse data type. Compression also makes it more energy efficient to move data when dealing with large amounts of it. This is best understood by an example. Take a four-by-four matrix with three nonzero elements. Traditionally, this matrix would be stored in memory as is, taking up 16 spaces. This matrix can also be compressed into a sparse data type, getting rid of the zeros and saving only the nonzero elements. In our example, this results in 13 memory spaces as opposed to 16 for the dense, uncompressed version. These savings in memory increase with increased sparsity and matrix size.

Diagram comparing dense and sparse matrix–vector multiplication step by step.

Multiplying a vector by a matrix traditionally takes 16 multiplication steps and 16 addition steps. With a sparse number format, the computational cost depends on the number of overlapping nonzero values in the problem. Here, the whole computation is accomplished in three lookup steps and two multiplication steps.

In addition to the actual data values, compressed data also requires metadata. The row and column locations of the nonzero elements also must be stored. This is usually thought of as a “fibertree”: The row labels containing nonzero elements are listed and linked to the column labels of the nonzero elements, which are then linked to the values stored in those elements.

In memory, things get a bit more complicated still: The row and column labels for each nonzero value must be stored as well as the “segments” that indicate how many such labels to expect, so the metadata and data can be clearly delineated from one another.


In a dense, noncompressed matrix data type, values can be accessed either one at a time or in parallel, and their locations can be calculated directly with a simple equation. However, accessing values in sparse, compressed data requires looking up the coordinates of the row index and using that information to “indirectly” look up the coordinates of the column index before finally reaching the value. Depending on the actual locations of the sparse data values, these indirect lookups can be extremely random, making the computation data-dependent and requiring memory lookups to be issued on the fly.

Second, two mathematical properties of zero let software and hardware skip a lot of computation. Multiplying any number by zero will result in a zero, so there’s no need to actually do the multiplication. Adding zero to any number will always return that number, so there’s no need to do the addition either.

In matrix-vector multiplication, one of the most common operations in AI workloads, all computations except those involving two nonzero elements can simply be skipped. Take, for example, the four-by-four matrix from the previous example and a vector of four numbers. In dense computation, each element of the vector must be multiplied by the corresponding element in each row and then added together to compute the final vector. In this case, that would take 16 multiplication operations and 16 additions (or four accumulations).

In sparse computation, only the nonzero elements of the vector need be considered. For each nonzero vector element, indirect lookup can be used to find any corresponding nonzero matrix element, and only those need to be multiplied and added. In the example shown here, only two multiplication steps will be performed, instead of 16.
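Here is that skip-the-zeros pattern as a plain-Python sketch. The dictionary layout is our illustrative choice, not how dedicated hardware stores data: the matrix’s nonzeros are grouped by column, so each nonzero vector element triggers an indirect lookup of only the matching matrix entries.

```python
from collections import defaultdict

def sparse_matvec(cols, x_nz):
    """y = A @ x, touching only nonzero operands.

    cols: {j: {i: A[i][j]}}  nonzero matrix entries, grouped by column
    x_nz: {j: x[j]}          nonzero vector entries
    """
    y = defaultdict(float)
    for j, xj in x_nz.items():                  # iterate nonzero x only
        for i, aij in cols.get(j, {}).items():  # indirect lookup of column j
            y[i] += aij * xj                    # multiply/add nonzero pairs only
    return dict(y)

# A four-by-four example: three nonzero matrix entries, two nonzero x entries.
cols = {2: {0: 4}, 0: {2: 1}, 1: {3: 2}}
x_nz = {0: 5, 2: 3}
print(sparse_matvec(cols, x_nz))  # {2: 5, 0: 12} -- two multiplies instead of 16
```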


The trouble with GPUs and CPUs

Unfortunately, modern hardware is not well suited to accelerating sparse computation. For example, say we want to perform a matrix-vector multiplication. In the simplest case, in a single CPU core, each element in the vector would be multiplied sequentially and then written to memory. This is slow, because we can do only one multiplication at a time. So instead people use CPUs with vector support or GPUs. With this hardware, all elements would be multiplied in parallel, greatly speeding up the application. Now, imagine that both the matrix and vector contain extremely sparse data. The vectorized CPU and GPU would spend most of their efforts multiplying by zero, performing completely ineffectual computations.

Newer generations of GPUs are capable of taking some advantage of sparsity in their hardware, but only a particular kind, called structured sparsity. Structured sparsity assumes that two out of every four adjacent parameters are zero. However, some models benefit more from unstructured sparsity—the ability for any parameter (weight or activation) to be zero and compressed away, regardless of where it is and what it is adjacent to. GPUs can run unstructured sparse computation in software, for example, through the use of the cuSPARSE GPU library. However, the support for sparse computations is often limited, and the GPU hardware gets underutilized, wasting energy on overhead.
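To make the structured case concrete, here is a hedged sketch of the 2:4 pattern: in every group of four adjacent weights, only the two largest-magnitude values survive. (Production flows rely on vendor tooling for this; the sketch assumes the weight count is divisible by four.)

```python
import numpy as np

def prune_2_of_4(weights):
    """Keep the 2 largest-magnitude values in each group of 4 adjacent weights."""
    groups = weights.reshape(-1, 4).copy()
    # Indices of the two smallest-magnitude entries in each group of four.
    drop = np.argsort(np.abs(groups), axis=1)[:, :2]
    np.put_along_axis(groups, drop, 0.0, axis=1)
    return groups.reshape(weights.shape)

w = np.array([0.9, -0.1, 0.4, 0.05, -0.7, 0.2, 0.1, 0.8])
print(prune_2_of_4(w))  # [ 0.9   0.    0.4   0.   -0.7   0.    0.    0.8 ]
```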

Illustration: Petra Péterffy

When doing sparse computations in software, modern CPUs may be a better alternative to GPU computation, because they are designed to be more flexible. Yet sparse computations on the CPU are often bottlenecked by the indirect lookups used to find nonzero data. CPUs are designed to “prefetch” data based on what they expect they’ll need from memory, but for randomly sparse data, that process often fails to pull in the right data. When that happens, the CPU must waste cycles fetching it.

Apple was the first to speed up these indirect lookups, supporting a method called an array-of-pointers access pattern in the prefetcher of its A14 and M1 chips. Although innovations in prefetching make Apple CPUs more competitive for sparse computation, CPU architectures still have fundamental overheads that a dedicated sparse computing architecture would not, because they need to handle general-purpose computation.


Other companies have been developing hardware that accelerates sparse machine learning as well. These include Cerebras’s Wafer Scale Engine and Meta’s Training and Inference Accelerator (MTIA). The Wafer Scale Engine and its corresponding sparse programming framework have demonstrated up to 70 percent sparsity on LLMs. However, the company’s hardware and software support only weight sparsity, not activation sparsity, which is important for many applications. The second version of the MTIA claims a sevenfold sparse-compute performance boost over the MTIA v1. However, the only publicly available information regarding sparsity support in the MTIA v2 concerns matrix multiplication, not vector or tensor operations.

Although matrix multiplications take up the majority of computation time in most modern ML models, it’s important to have sparsity support for other parts of the process. To avoid switching back and forth between sparse and dense data types, all of the operations should be sparse.

Onyx

Instead of these halfway solutions, our team at Stanford has developed a hardware accelerator, Onyx, that can take advantage of sparsity from the ground up, whether it’s structured or unstructured. Onyx is the first programmable accelerator to support both sparse and dense computation; it’s capable of accelerating key operations in both domains.

To understand Onyx, it is useful to know what a coarse-grained reconfigurable array (CGRA) is and how it compares with more familiar hardware, like CPUs and field-programmable gate arrays (FPGAs).


CPUs, CGRAs, and FPGAs represent a trade-off between efficiency and flexibility. Each individual logic unit of a CPU is designed for a specific function that it performs efficiently. On the other hand, since each individual bit of an FPGA is configurable, these arrays are extremely flexible, but very inefficient. The goal of CGRAs is to achieve the flexibility of FPGAs with the efficiency of CPUs.

CGRAs are composed of efficient and configurable units, typically memory and compute, that are specialized for a particular application domain. This is the key benefit of this type of array: Programmers can reconfigure the internals of a CGRA at a high level, making it more efficient than an FPGA but more flexible than a CPU.

The Onyx chip, built on a coarse-grained reconfigurable array (CGRA), is the first (to our knowledge) to support both sparse and dense computations. Olivia Hsu

Onyx is composed of flexible, programmable processing element (PE) tiles and memory (MEM) tiles. The memory tiles store compressed matrices and other data formats. The processing element tiles operate on compressed matrices, eliminating all unnecessary and ineffectual computation.

The Onyx compiler handles the conversion from software instructions to a CGRA configuration. First, the input expression—for instance, a sparse vector multiplication—is translated into a graph of abstract memory and compute nodes. In this example, there are memories for the input and output vectors, a compute node for finding the intersection between nonzero elements, and a compute node for the multiplication. The compiler figures out how to map the abstract memory and compute nodes onto MEMs and PEs on the CGRA, and then how to route them together so data can flow between them. Finally, the compiler produces the instruction set needed to configure the CGRA for the desired purpose.
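To picture that flow, here is a toy sketch of the kind of graph the compiler might build for a sparse vector-vector multiply. The node kinds and names are hypothetical, for illustration only; they are not Onyx’s actual intermediate representation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                  # "mem", "intersect", or "multiply"
    name: str
    inputs: list = field(default_factory=list)

# Step 1: translate the expression into abstract memory and compute nodes.
x = Node("mem", "x_nonzeros")
y = Node("mem", "y_nonzeros")
meet = Node("intersect", "coord_intersect", [x, y])  # match nonzero coordinates
mul = Node("multiply", "value_multiply", [meet])
out = Node("mem", "z_out", [mul])

# Step 2: a placer would map "mem" nodes onto MEM tiles and compute nodes
# onto PE tiles, and a router would wire them together across the CGRA.
for node in (x, y, meet, mul, out):
    tile = "MEM" if node.kind == "mem" else "PE"
    print(f"{node.name}: {tile} tile, inputs={[n.name for n in node.inputs]}")
```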


Since Onyx is programmable, engineers can map many different operations, such as vector-vector element multiplication, or the key tasks in AI, like matrix-vector or matrix-matrix multiplication, onto the accelerator.

We evaluated the efficiency gains of our hardware by looking at the product of the energy used and the time the computation took, called the energy-delay product (EDP). This metric captures the trade-off between speed and energy: Minimizing energy alone would lead to very slow devices, while minimizing runtime alone would lead to large, power-hungry devices.
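As a worked example of the metric, with made-up numbers rather than our measured results: suppose a task costs a CPU 1.0 joule and 10 milliseconds, while an accelerator finishes it with 0.02 J in 2 ms.

```python
def energy_delay_product(energy_joules, delay_seconds):
    """EDP rewards devices that are simultaneously frugal and fast."""
    return energy_joules * delay_seconds

cpu = energy_delay_product(1.0, 10e-3)   # 1.0e-2 joule-seconds
acc = energy_delay_product(0.02, 2e-3)   # 4.0e-5 joule-seconds

print(cpu / acc)  # 250.0 -> the accelerator's EDP is 250x better
```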

Onyx achieves an energy-delay product up to 565 times better than that of a CPU (we used a 12-core Intel Xeon) running dedicated sparse libraries. Onyx can also be configured to accelerate regular, dense applications, similar to the way a GPU or TPU would. If the computation is sparse, Onyx is configured to use sparse primitives; if the computation is dense, Onyx is reconfigured to take advantage of parallelism, similar to how GPUs function. This architecture is a step toward a single system that can accelerate both sparse and dense computations on the same silicon.

Just as important, Onyx enables new algorithmic thinking. Sparse acceleration hardware will not only make AI more performance- and energy efficient but also enable researchers and engineers to explore new algorithms that have the potential to dramatically improve AI.


The future with sparsity

Our team is already working on next-generation chips built on Onyx. Beyond matrix multiplication operations, machine learning models perform other types of math, like nonlinear layers, normalization, the softmax function, and more. We are adding support for the full range of computations on our next-gen accelerator and within the compiler. Since sparse machine learning models may have both sparse and dense layers, we are also working on integrating the dense and sparse accelerator architecture more efficiently on the chip, allowing for fast transformation between the different data types. We’re also looking at ways to manage memory constraints by breaking up the sparse data more effectively so we can run computations on several sparse accelerator chips.

We are also working on systems that can predict the performance of accelerators such as ours, which will help in designing better hardware for sparse AI. Longer term, we’re interested in seeing whether high degrees of sparsity throughout AI computation will catch on with more model types, and whether sparse accelerators become adopted at a larger scale.

Building hardware that supports unstructured sparsity and optimally takes advantage of zeros is just the beginning. With this hardware in hand, AI researchers and engineers will have the opportunity to explore new models and algorithms that leverage sparsity in novel and creative ways. We see this as a crucial research area for managing the ever-increasing runtime, costs, and environmental impact of AI.


Samsung’s Leaked Galaxy Jinju Smartglasses Promise Everyday AI Without the Clutter


Photo credit: Android Headlines | OnLeaks
Leaked images purportedly show Samsung’s first pair of Galaxy smartglasses, codenamed Jinju, in stunning detail, and they appear to give the Ray-Ban Meta some heavy competition. They’re reportedly light on the face, weighing roughly 50 grams and looking so much like regular eyeglasses that only the inconspicuous camera bulge and tiny Samsung logo give them away.



Thin temples run along the sides, concealing the technology entirely; no big pieces or bulky extras here. The lenses can automatically adjust to light levels thanks to sophisticated photochromic technology, and the frames take design inspiration from the classic styles that Warby Parker and Gentle Monster have helped develop over time. If you catch a glance of someone wearing them, you’ll see only a pair of regular spectacles.



Look closer and you’ll find a Qualcomm Snapdragon AR1 processor paired with a 155-milliamp-hour battery discreetly hidden away in each temple, plus dual 12-megapixel Sony sensors front and center, ready to take photos or feed visual data into the system. Audio comes from bone-conduction speakers, which deliver crisp music directly to your ears without spilling it out into the environment. Wi-Fi and Bluetooth 5.3 connectivity round things out, convenient and uncomplicated.

For the most part, interactions are handled by Gemini, which reacts to voice instructions with ease; give it a quick voice command and it will bring up the weather, direct you to the next subway station, or translate any signage you see. The cameras also gather context, so the answers remain relevant to whatever you’re looking at in the real world. The nicest feature is that there are no floating overlays or screens to distract from the overall simplicity of the experience.

Samsung built these glasses around its new Android XR platform, which also powers the Galaxy XR headset. Pairing is simple with your Galaxy phone, watch, or other existing gear, and you can access all of your familiar apps without learning anything new. Early code for One UI 9 has already mentioned these frames under the model IDs SM-O200P and SM-O200J, indicating deep integration with the larger Samsung ecosystem.

In terms of price, we’re looking at between $379 and $499 depending on the final configuration, which places them somewhere between basic frames and high-end products from rival brands. Then there’s the Haean, a higher-end second model that will debut in 2027 with a micro-LED display and a price tag of between $600 and $900. If you want visual information displayed directly in front of your eyes, that’s the one to wait for. Samsung plans to reveal the Jinju glasses later this year, likely at its summer Unpacked event, alongside some new foldables.

‘The House of the Spirits’: When to Watch the New TV Adaptation on Prime Video


Blending history with the supernatural, best-selling novel The House of the Spirits launched the literary career of Isabel Allende back in the 1980s and went on to become a staple in school curriculums around the world. 

The story explores themes of classism, politics and surrealism as it centers on a family’s multigenerational line of women: Clara, Blanca and Alba. It’s now set to get the big-budget TV treatment with a lavish eight-episode TV adaptation hitting Prime Video this week. 


What’s the plot for The House of the Spirits?

Set at the turn of the 20th century in a fictionalized version of Chile, the sprawling tale follows three generations of the Trueba family as they traverse societal, economic and personal struggles, including secret romances and violent social upheaval.

Leading the cast is Rebel Moon star Alfonso Herrera as strict patriarch Esteban Trueba. Nicole Wallace and Dolores Fonzi share the role of Esteban’s clairvoyantly gifted wife, Clara del Valle, through various times in her life.

It’s not the first time the beloved book has received the on-screen treatment: a 1993 movie adaptation featured an all-star cast that included Meryl Streep, Winona Ryder and Jeremy Irons. However, this new Spanish-language version looks set to be a more authentic interpretation of the source material.

How to watch The House of the Spirits

This eight-episode series is exclusive to Prime Video and premieres with its first three episodes on Wednesday, April 29. A new episode will be released each Wednesday on the streaming service, with the final chapter set for June 3.


Prime Video’s standard service comes with ad breaks for US viewers. If you want to go ad-free, there’s an additional $5 monthly fee. This option is available to both Amazon Prime subscribers and those with a standalone Prime Video membership. For more information about the streamer, check out our review.


2026 Green Powered Challenge: Supercapacitor Enables High-Power IoT


With all the battery technologies and the modern low-current sleep modes in most microcontrollers, running a sensor and microcontroller combo off-grid and far away from any infrastructure is usually not too difficult a task. Often these sorts of systems can go years without maintenance or interaction. But for something that must stay off-grid yet needs to do some amount of work every now and then, like actuating a solenoid or quickly turning a servo, these battery-based systems can quickly run out of juice. To solve that problem, [Nelectra] has come up with this high-power, capacitor-based IoT system.

Although supercapacitors don’t tend to have the energy density of batteries, they’re perfectly capable of powering short tasks in off-grid situations like this. They also typically tolerate lower voltages, extreme temperatures, and shock better than most batteries. A small solar cell on the top of this device keeps it topped up, and when running in deep sleep mode, the system can hold a charge for up to six days. In more real-world applications supporting sensors, relays, or other actuators, [Nelectra] has found that it can hold a charge for around three days. When a quick burst of power is needed, it can deliver 1.5 A at 9 V or 500 mA at 24 V.

[Nelectra]’s stated goal for this build is to bridge low-power energy harvesting and practical field actuation, enabling maintenance-free systems such as irrigation control and remote switching without batteries, going beyond simple sensor applications while not relying on always-on power from somewhere else. Something like this would work really well in applications like this automated farm, which has already provided some unique solutions to intermittent power and microcontroller applications that need very high reliability.


These are the best looks at the DJI Osmo Pocket 4P yet


DJI’s first Pro-tier Osmo Pocket camera has surfaced in dozens of leaked images ahead of its official launch announcement, suggesting the company is already briefing influencers and press as part of a pre-release campaign for the Osmo Pocket 4P.

The images, reported by leaker Igor Bogdanov on X, show the Osmo Pocket 4P being tested alongside both the recently launched Osmo Pocket 4 and the older Osmo Pocket 3, offering a direct size and design comparison between all three generations of the compact vlogging camera line.

That side-by-side framing points to a marketing strategy built around showcasing the 4P as a step above the standard Osmo Pocket 4, which DJI released across most major markets last week as a direct replacement for the Osmo Pocket 3 at a retail price of $499.

The 4P is understood to separate itself from that standard model through a 1-inch primary sensor and a telephoto lens with 3x optical zoom, a hardware combination that would place it closer to DJI’s action camera range in terms of imaging capability than a typical compact vlogging device.


DJI has so far only officially acknowledged the Osmo Pocket 4P in China, though reports suggest the camera could reach the US market under a different brand name, a route the company has previously used to navigate regulatory restrictions on its products in that territory.

The Osmo Pocket 4P launch follows an unusually active stretch for DJI, with the Mic Mini 2, Osmo Mobile 8P, and two new Lito drones all arriving within the same period as the standard Osmo Pocket 4, reflecting a product pipeline the company appears to be clearing ahead of a broader summer cycle.


DJI has not confirmed pricing or a release date for the Osmo Pocket 4P in any market, though the maturity of the influencer testing phase, as seen in the leaked images, suggests an official announcement is unlikely to be far off.


Researchers develop battery-free 3D-printed metal tags for smart tracking that use ultrasonic sound to record everyday actions



  • Battery-free metal tags generate ultrasonic signals when objects move nearby
  • Different disk shapes create unique sound signatures that identify tracked actions
  • Simulation tools produced hundreds of tag designs for varied real-world tracking uses

Researchers at Georgia Tech have built tiny metal tags that record everyday actions without needing batteries, charging cables, or wired power – instead relying on simple motion and sound rather than electronics inside each tag.

Most smart home sensors rely on batteries or wall power, which requires maintenance over time. These tags work differently, using mechanical contact to generate a brief ultrasonic signal whenever something moves.


Hackaday Europe: Last Round Of Speakers, Workshops


If you don’t already have your tickets to Hackaday Europe, pick them up now. The clock is ticking! Today, we’d like to announce our keynote speaker, the remainder of our featured talks, and two more workshops. (And if you want workshop tickets, which always go fast, get those soon!)

Hackaday Europe is super excited to welcome back Hackaday Superfriend [Sprite_tm] to kick off the event with a keynote talk on how he made a retrogaming PC from bare silicon. Don’t miss it.

Jeroen Domburg


Building a retro-PC…From Components

What if you could build a retro-gaming PC from bare chips? No emulation.  No ancient hardware. Jeroen walks through designing a compact 486 SBC with modern amenities, starting from the silicon up.

 

Edwin Hwu
PlayStation 4 to Psychometer: Skin Nanotexture Biometrics


Turn a PlayStation 4 optical pickup into a high-speed dermal atomic force microscope. Edwin shows how hardware hacking and deep learning combine to assess skin conditions and potentially detect stress non-invasively.

Erin Kennedy
Outdoors with Robots: Adventures and Lessons Learned

Ten years of taking robots into the real outdoors, through sand, mud, and wildfire zones. Erin shares what happens when nature-inspired machines meet nature itself, and what she’s learned building them.

Stephen Coyle
Making physically intuitive electronic instruments


Our physical intuitions about inertia, momentum, and gravity shape how we play instruments. Stephen explores what happens when digital instruments simulate these properties and what new musical possibilities emerge.

Sylvain Huet
Bare metal made easy

As tech grows more opaque, there’s an urgent need to return to simple, hackable systems. Sylvain presents an ambient computing vision: devices that blend into life rather than dominate it.

Alex Ren
Hack Club: How to get 2000 teenagers hacking their own hardware projects


A 3D printer made of Lego. DOOM running in a PDF. These are Hack Club projects built by teenagers. Alex shares the tools, culture, and community behind hardware hacking at scale for young makers.

Michael Wiebusch
Build a Cable Modem for your Arduino. For 2 Euros. But it’s not a Modem.

Electric signals travel in two directions in a coaxial cable, and they don’t mix on the way. Michael explains transmission line theory and demonstrates why it matters for RF and high-speed digital design.

Anders Nielsen
High Performance SDR on the cheap


RF, high-speed USB, analog chaos. Building a 20MHz continuous bandwidth, 3GHz-capable SDR without breaking a $50 BOM, achievable with a single FPGA on a carrier board.

Federico Terraneo
Fluid kernels and how to optimize C++ for microcontrollers

A 20-minute tour of the fluid kernel architecture, the Miosix RTOS as a practical implementation, and 18 years of hard-won tips for writing efficient C++ on microcontrollers.

Benjaminas Sulcas
Fault injection 101


A hands-on workshop covering the basics of hardware fault injection, power glitching, EMFI, and practical comparisons of tools available to hardware security researchers and curious makers.

Davide Gomba
Let’s Mesh!

A practical dive into mesh networking with Meshtastic and Reticulum: installing, configuring, and communicating across decentralized mesh networks. Leave with hands-on experience and a new view of off-grid connectivity.

If you’re joining us and you’re not on the list above, you can still take the stage!  We’ll have time for seven-minute Lightning Talks, hopefully enough for everyone. So bring your hack and bring a story. We want to hear it.


[If you read this far, you probably want tickets. Just sayin’.]


Champions League Soccer: Stream PSG vs. Bayern Munich Live



ExpressVPN is our current best VPN pick for people who want a reliable and safe VPN, and it works on a variety of devices. It’s normally $120 a year for its most popular plan (Advanced), but if you sign up for an annual subscription for $90, you’ll get three months free. That’s the equivalent of $6 a month.



Pros

  • Cutting-edge privacy and security
  • Excellent for streaming
  • Easy to use across platforms
  • Strong commitment to transparency
  • Privacy-friendly jurisdiction (British Virgin Islands)


Cons

  • Exceedingly expensive
  • No way to opt out of potentially unneeded extra features
  • Speed performance getting progressively worse
  • Only eight simultaneous connections

When to watch PSG vs. Bayern Munich

  • Tuesday at 3 p.m. ET (12 p.m. PT).

Where to watch

  • PSG vs. Bayern Munich will air in the US on Paramount Plus.

Current UEFA Champions League holder Paris Saint-Germain hosts newly crowned Bundesliga champion Bayern Munich on Tuesday in the first leg of this eagerly anticipated UCL semifinal.

The Parisians booked their place in the last four of this tournament with a comfortable 4-0 aggregate win over Liverpool in the quarterfinals. They come into this clash after a confident 3-0 victory over Angers during the weekend, keeping them at the top of France’s Ligue 1.

Advertisement

Bayern, meanwhile, overcame Spanish giant Real Madrid to claim a memorable 6-4 aggregate win in their quarterfinal. Vincent Kompany’s team secured a 35th Bundesliga title earlier this month, but nevertheless worked hard for Saturday’s win over Mainz, battling back from a three-goal deficit to prevail 4-3. 

Paris Saint-Germain takes on Bayern Munich at the Parc des Princes on Tuesday, April 28. Kickoff is set for 9 p.m. CEST local time in France, making it an 8 p.m. BST kickoff in the UK, a 3 p.m. ET or 12 p.m. PT start in the US, and a 5 a.m. AEST kickoff in Australia on Wednesday morning.


Bayern Munich striker Harry Kane has scored 12 goals so far in this season’s Champions League. 


Vitalii Kliuiev/Getty Images

Livestream PSG vs. Bayern Munich in the US without cable

American soccer fans can stream every game of this season’s tournament via Paramount Plus, which has exclusive live English-language broadcast rights in the US for the UEFA Champions League. 

It includes a multiview option that lets you watch up to four matches simultaneously and choose your preferred in-game audio. 


Paramount Plus has two main subscription plans in the US: Essential for $9 a month and Premium for $14 a month. Both offer coverage of the Champions League.

The cheaper Essential option has ads for on-demand streaming, but it lacks live CBS feeds and the ability to download shows to watch offline later. Students may qualify for a 25% discount.

How to watch UEFA Champions League games with a VPN

If you’re traveling abroad and want to keep up with the Champions League action while away from home, a VPN can help enhance your privacy and security when streaming.

It encrypts your traffic and prevents your internet service provider from throttling your speeds. Additionally, it can be helpful when connecting to public Wi-Fi networks while traveling, providing an extra layer of protection for your devices and logins. VPNs are legal in many countries, including the US and Canada, and can be used for legitimate purposes such as improving online privacy and security. 


However, some streaming services may have policies restricting VPN use to access region-specific content. If you’re considering a VPN for streaming, check the platform’s terms of service to ensure compliance. 

If you choose to use a VPN, follow the provider’s installation instructions to ensure you’re connected securely and in compliance with applicable laws and service agreements. Some streaming platforms may block access when a VPN is detected, so verifying if your streaming subscription allows VPN use is crucial.


Price: $78 for two years
Latest tests: No DNS leaks detected, 18% speed loss in 2025 tests
Jurisdiction: British Virgin Islands
Network: 3,000-plus servers in 105 countries


Note that ExpressVPN offers a 30-day money-back guarantee.


Livestream PSG vs. Bayern Munich in the UK

While TNT Sports broadcasts the lion’s share of Champions League matches, Prime Video has first pick of Tuesday games. It will show one match per week live exclusively on the platform, with today’s semifinal the pick for this week. 


Prime Video standalone subscriptions start at £9 a month or £95 per year in the UK and include access to the Prime Video library of shows such as The Boys, Reacher and Fallout. The service is also included with an Amazon Prime membership.

Livestream PSG vs. Bayern Munich in Canada

If you want to stream Champions League games live in Canada, you’ll need to subscribe to DAZN Canada. The service has exclusive broadcast rights to every match this season, including this one.

A DAZN subscription currently costs CA$35 a month or CA$250 a year and will also give you access to Europa League and EFL Championship soccer, Six Nations rugby and WTA tennis.

As well as dedicated apps for iOS and Android, there’s a wide range of support for set-top boxes and smart TVs.

Livestream PSG vs. Bayern Munich in Australia

Soccer fans Down Under can watch UCL games on streaming service Stan Sport, which once again has exclusive rights to show all Champions League matches live in Australia this season.


Stan Sport will set you back AU$20 a month (on top of a Stan subscription, which starts at AU$12). It’s also worth noting that the streaming service is currently offering a seven-day free trial.

A subscription will also give you access to Premier League and Europa League action, as well as international rugby and Formula E.


Deel’s global startup competition is giving away up to S$19M & you’ve got until May 2 to apply


[This is a sponsored article with Deel.]

If you have a great startup idea that is gaining traction and ready to scale up, here’s your sign to seize that opportunity. 

Deel’s The Pitch, a global startup competition with up to US$15 million (S$19.1 million) in total funding on the line, is closing applications for its APAC regional round soon. 

It is set to be held in Singapore on May 12, 2026, but don’t wait because applications close on May 2.


And if you’re thinking you don’t have time, you probably do. Applications can be completed in under five minutes, and the best part? It’s completely free. Yes, really.

A global stage for early-stage founders

The Pitch is an international tournament designed to find and fund the world’s most promising seed-stage startups, regardless of where they’re based.

The whole premise is simple: your idea should compete on merit, not on your geography, your network, or your ability to win over the right VCs.

The Berlin Regional Finals on Apr 20 concluded successfully with 12 winners. / Image Credit: Deel

To make that happen, Deel is hosting regional competitions across seven global hubs, including Singapore, New York, and Berlin, with over 20,000 startups expected to participate worldwide. 

The competition is backed by serious names too, presented by J.P. Morgan with partners including a16z, Google, Stripe, and Ribbit Ventures.


Here’s how it works: founders apply online, and shortlisted teams will pitch live at a regional final in their respective locations. From there, up to 100 startups will receive US$50,000 (S$64,000) in Simple Agreement for Future Equity (SAFE) investment each, and progress to the global stage on May 18-19, with the location yet to be announced.

Grand finale winners, up to 10 startups, will receive a US$1 million (S$1.2 million) SAFE investment each to scale their visions.

Beyond the prize money, all participants gain access to networking opportunities and exposure to some of the world’s top-tier investors—not a bad upside even if you don’t take the top spot.

Who can apply?

Image Credit: Deel

If the prize pool has your attention but the eligibility criteria has you hesitating, don’t fret. The bar to entry is intentionally wide. 

The Pitch is open to pre-seed, seed, and Series A startups from anywhere in the world, as long as they have full-time founders, a registered legal entity, and are building a scalable product or service.


That said, don’t mistake accessible for easy. 

Deel says the final selection rate will be just 0.05%, making it more selective than many of the world’s top accelerators. 

Applications are assessed on product strength, market opportunity, team capability, traction metrics, and scalability potential, with both AI analysis and human expert review to keep things fair.

Last call for S’pore & APAC founders

For startup founders, opportunities like this don’t come around often, particularly ones that are open globally and free to enter. 


Although regional finals are already underway globally, there’s still time to throw your hat in the ring for the Singapore leg. 

Check out The Pitch’s website here to apply—it’ll take you less than five minutes.

Featured Image Credit: Deel


5 ‘Bad’ Cars We Still Can’t Help But Love






There are some cars that people just love to hate for one reason or another. Whether it’s because the cars are objectively bad or look like hot garbage, we’ve collectively hated on certain vehicles since the dawn of the mass-produced automobile.

I myself am guilty of this. I’ve been a proud car enthusiast all my life, developing my taste since I was a toddler. And even after several decades, certain cars just make me wince when I see them, like I’m swallowing a particularly dry and troublesome pill in the morning. But that’s just one side of the coin; after so many years studying and working around cars, I’ve also grown fond of some cars that are often the butt of car fans’ jokes.


This doesn’t extend to all of them, of course: I’ll openly admit my hatred of massive pickup trucks, boring crossovers, and excessive minimalism. But there are many cars that fans generally consider “bad” that I genuinely find appealing — and for objective reasons, too. In this article, I’ll go over some cars that history’s slammed and why they’ve been done way dirtier than they deserve, sticking with the oddballs so I don’t regurgitate points about why the Aztek was ahead of its time. Some of these cars are commercial failures, radical designs, or so rough that they’re barely a step above prototypes, with plenty of reasons to call them “bad.” But that doesn’t mean that they can’t be appreciated, or even beloved, for the unique traits they bring to the table. Let’s dive in and air out the skeletons in my automotive closet.


Ford Mustang II

“No, it’s not a Pinto. Yes, I know it looks like a Pinto, but I swear it’s not a Pinto. See the giant decal on the long hood? Not a Pinto,” is surely a conversation that’s occurred at least once or twice. And it irritates me so much because the Mustang II is absolutely not a Ford Pinto. Okay, they share some of the subframe and powertrain options, but you have to put the car into the Oil Crisis context here.

For those uninformed, the 1973 Oil Crisis was devastating for the American automotive industry. It effectively gutted muscle cars, transforming automotive culture quite literally overnight. Gas restrictions hit big-block V8s hard, and American automakers had no answers; this led to a huge drop in sales and incentivized people to buy small Japanese imports instead. And just as this era hit, Ford introduced the Mustang II for model year 1974. It was lethargically slow, small, and had a four-cylinder engine as standard. It was also the reason the Mustang name survives today.

Think about it — the 1973 Mustang was a midsize, V8-powered, half-sports, half-luxury coupe. If it continued unchanged into the mid-1970s, its engine would’ve been choked to within an inch of its life. It would’ve been saddled with all the restrictions that nearly killed the American full-size coupe, and the name would’ve gone away with a whimper. The Mustang II’s formula was incredibly successful, carrying the brand kicking and screaming through the Malaise Era. People hate it because it was the slowest Mustang; I love it because there would be no more Mustang without it, period. Also, I have to admit the King Cobra’s decals look really good.


Second-generation Toyota Prius

Here’s another example of a commercially successful car that the enthusiast community hated on for the longest time, and I genuinely have no idea why. Okay, yes, the Prius is abysmally boring to look at and drive, and it’s about as far from an “enthusiast” car as one can get. But it’s still absolutely something I would daily. Why? It’s not because it’s exciting — okay, Toyota did race one in Super GT for some reason, but that’s beside the point. It’s because of what cars are supposed to do.

What is a car, but a box on four wheels that gets you from point A to B? I’m looking at it from an enthusiast’s perspective, granted. But if I were buying, say, a refrigerator, I’d buy something that fits enough groceries and doesn’t break down constantly. That’s the way I see the Prius. It’s the automotive equivalent of a boring kitchen appliance, and there’s nothing wrong with that.


There were so many memes about the second-gen Prius back in the 2000s and 2010s, and I can see why. Priuses are slow, bland, and thoroughly uninteresting, all of which runs counter to my instinct as someone passionate about cars. But that’s hardly the point; they were designed to haul people and their goods frugally, and they are still incredibly good at that. Objectively, it’s one of the most practical and economical vehicles money can buy today. I can say, hand on heart, that I’d drive one regularly without complaint, and that’s coming from a woman who dailies an R34 Skyline.


Plymouth / Chrysler Prowler

I actually had a die-cast model of one of these growing up, and I distinctly remember the day the suspension fell apart, throwing a plastic control arm under the couch and into the void. I imagine that’s how some non-car people see this, what with its kit-car looks. And enthusiasts dislike it because it has the same V6 engine as a minivan, married to a 4-speed slushbox automatic. The Prowler had a wild image, but let’s be real: this is no hot rod.

That said, well — just look at it. It’s so captivatingly strange that I can’t help but love it. The Prowler rode the crest of the retro-futurism wave, punctuated by other famously abhorrent 2000s-era designs like the PT Cruiser and Dodge Nitro. It was billed as a factory hot rod, with a front end that looked like a car that had run into a pencil sharpener. Then you have the protruding front bumpers and wheel arches, further contributing to its bizarreness. And yet, I see a yellow one now and then on the highway, and I still stare at the thing.

Sure, I know the Prowler’s V6 is famously lethargic, and I’m aware that it’s wildly impractical for anything other than joyriding. Don’t get me wrong, I should hate it. But then I see the thing in person and I’m like, “Oh yeah, that’s why I love it.” It’s the king of wacky ’90s excess; the Insane Clown Posse of cars. And that’s what makes it special.


Honda Ridgeline

This goes back to my initial criticism of big American pickup trucks, perhaps further colored by my upbringing in suburbia. How much truck does the average American actually need? Statistically, not that much, considering the majority of Americans tend not to use their trucks for truck things. They’ve evolved from being agricultural and utility vehicles to massive, rolling showcases of technology with front ends that look like rolling garage doors. But back in the day, we had the first-gen Ford Ranger, the Mazda Pickup, the Jeep Comanche — and we liked them.

Now, yes, all of those trucks have more utility than a Ridgeline; they have bigger beds, for one. But that’s not the point, since we still have trucks for when we need that. Let’s instead take a critical look at what trucks have become in the 2020s. I understand the hate for the Ridgeline because it is indeed a crossover with a pickup bed. But that’s genuinely what a lot of these owners use their trucks for, anyway. Basically, we’re jamming a square peg into a round hole by using a big pickup to run around and grab groceries when something like the Ridgeline would absolutely suffice.

Ridgelines get hate for lackluster utility compared to purpose-built trucks, but they’re not purpose-built trucks — they’re daily drivers with pickup beds. They won’t break the bank, fit in the average parking spot, and are comfortable and reliable vehicles. I think it’s the perfect compromise outside of a ute, offering enough comfort, capacity, and towing capability to satisfy the general non-commercial audience. Just don’t mind the weird location of the spare tire.


Vector W8

This car is incredibly difficult to describe in a single sentence, but here goes nothing. The Vector W8 was the brainchild of Gerald Wiegert, who built a $450k (in 1989) supercar with a transverse 625-hp V8 coupled to a 3-speed auto from the Oldsmobile Toronado. It’s easily one of the most 1980s cars ever. It was also something of a technological marvel, utilizing top-tier materials and components of the era, with an interior that intentionally resembled a fighter plane — well before modern hypercars hopped on that bandwagon. Its vaporwave instrumentation is easily one of the weirdest dashboards ever designed, and that’s a bold statement when the Dome Zero exists.

Of course, that didn’t stop it from being a bad car. For instance, Car and Driver tested three Vector W8s, and all three broke down in different ways. Those advanced materials? They significantly increased the cost, meaning you’d have paid the equivalent of a million dollars to get a car that only ran properly some of the time.


Nevertheless, I love it. I remember the first time I found out about the W8; I was a little girl playing “Gran Turismo 2” and came across a purple one in-game. I remember thinking it was a knockoff Diablo or something, but it was fast and looked utterly captivating. Then I saw one at a car show, and that was that. Of course, actually owning one of these things would utterly drain my bank account. I imagine it’s, frankly, an absolute albatross. But for those rare instances where it works, it’d be as special and rare as coming across an elusive snow leopard.


