Exploring Innovative Number Formats for AI Efficiency

AI has driven an explosion of new number formats—the ways in which numbers are represented digitally. Engineers are looking at every possible way to save computation time and energy, including shortening the number of bits used to represent data. But what works for AI doesn’t necessarily work for scientific computing, be it for computational physics, biology, fluid dynamics, or engineering simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently joined Barcelona-based Openchip as an AI engineer, about his efforts to develop a bespoke number format for scientific computing.

LASLO HUNHOLD

Laslo Hunhold is a senior AI accelerator engineer at Barcelona-based startup Openchip. He recently completed a Ph.D. in computer science at the University of Cologne, in Germany.

What makes number formats interesting to you?

Laslo Hunhold: I don’t know another example of a field that so few are interested in but has such a high impact. If you make a number format that’s 10 percent more [energy] efficient, it can translate to all applications being 10 percent more efficient, and you can save a lot of energy.

Why are there so many new number formats?

Hunhold: For decades, computer users had it really easy. They could just buy new systems every few years, and they would have performance benefits for free. But this hasn’t been the case for the last 10 years. In computers, you have a certain number of bits used to represent a single number, and for years the default was 64 bits. And for AI, companies noticed that they don’t need 64 bits for each number. So they had a strong incentive to go down to 16, 8, or even 2 bits [to save energy]. The problem is, the dominating standard for representing numbers in 64 bits is not well designed for lower bit counts. So in the AI field, they came up with new formats which are more tailored toward AI.

Why does AI need different number formats than scientific computing?

Hunhold: Scientific computing needs high dynamic range: You need very large numbers, or very small numbers, and very high accuracy in both cases. The 64-bit standard has an excessive dynamic range, and it is many more bits than you need most of the time. It’s different with AI. The numbers usually follow a specific distribution, and you don’t need as much accuracy.
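To make this point concrete, here is a quick Python sketch (using NumPy purely for illustration) that compares the dynamic range and precision of the standard IEEE 754 formats at 64, 32, and 16 bits:

```python
# Compare dynamic range and precision of IEEE 754 floats at different widths.
import numpy as np

for dtype in (np.float64, np.float32, np.float16):
    info = np.finfo(dtype)
    print(f"{info.dtype}: {info.bits} bits, "
          f"normal range {info.smallest_normal:.3e} .. {info.max:.3e}, "
          f"~{info.precision} decimal digits")
```

A 64-bit float spans more than 600 orders of magnitude, while a 16-bit float covers barely nine, which is why naively shrinking the IEEE format breaks down for scientific workloads.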

What makes a number format “good”?

Hunhold: You have infinite numbers but only finite bit representations. So you need to decide how you assign numbers. The most important part is to represent numbers that you’re actually going to use. Because if you represent a number that you don’t use, you’ve wasted a representation. The simplest thing to look at is the dynamic range. The next is distribution: How do you assign your bits to certain values? Do you have a uniform distribution, or something else? There are infinite possibilities.
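A toy illustration of this trade-off, with hypothetical grids chosen only for the example: give a 4-bit format's 16 code points to a linear grid versus a logarithmic one, and see how well each represents a smallish value.

```python
# A toy 4-bit "format": 16 code points, placed two different ways.
import numpy as np

codes = 2 ** 4
linear = np.linspace(1, 16, codes)                        # equal absolute steps on [1, 16]
log = np.logspace(np.log10(1 / 16), np.log10(16), codes)  # equal relative steps on [1/16, 16]

def rel_error(grid, x):
    """Relative error after rounding x to the nearest representable value."""
    nearest = grid[np.argmin(np.abs(grid - x))]
    return abs(nearest - x) / x

x = 0.2
print(f"linear grid: {rel_error(linear, x):.0%} error at {x}")  # huge: 0.2 is far off-grid
print(f"log grid:    {rel_error(log, x):.1%} error at {x}")
```

With the same 16 codes, the logarithmic grid covers a 256-fold dynamic range and keeps the relative error roughly constant, which is exactly the kind of placement decision a format designer tunes.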

What motivated you to introduce the takum number format?

Hunhold: Takums are based on posits. In posits, the numbers that get used more frequently can be represented with more density. But posits don’t work for scientific computing, and this is a huge issue. They have a high density for [numbers close to one], which is great for AI, but the density falls off sharply once you look at larger or smaller values. People have been proposing dozens of number formats in the last few years, but takums are the only number format that’s actually tailored for scientific computing. I found the dynamic range of values you use in scientific computations, if you look at all the fields, and designed takums such that when you take away bits, you don’t reduce that dynamic range.

This article appears in the March 2026 print issue as “Laslo Hunhold.”

When identity isn’t the weak link, access still is

For years, identity has been treated as the foundation of workforce security. If an organization could reliably confirm who a user was, the assumption followed that access could be granted with confidence.

That logic worked when employees accessed corporate networks from corporate devices under predictable conditions. Today, that no longer reflects how access is actually used or abused.

The modern workforce operates across multiple locations, networks, and time zones. Employees routinely switch between corporate laptops, personal devices, and third-party endpoints.

Access is no longer anchored to a single environment or device, yet security teams are expected to support this flexibility without increasing exposure or disrupting productivity, even as the signals used to make access decisions become noisier, more fragmented, and harder to trust on their own.

As a result, identity is being asked to carry responsibility it was never designed to hold alone. Authentication can confirm who a user claims to be, but it does not provide sufficient insight into how risky that access may be once device condition and context are taken into account. In modern environments, the core issue is not identity failure, but the over-reliance on identity as a proxy for trust.

Identity tells us who, not how risky the access is

A legitimate user accessing systems from a secure, compliant device represents a fundamentally different risk from the same user connecting from an outdated, unmanaged, or compromised endpoint. Yet many access models continue to treat these scenarios as equivalent, granting access primarily on identity while device condition remains secondary or static.

This approach fails to account for how quickly device risk changes after authentication. Endpoints regularly shift state as configurations drift, security controls are disabled, or updates are delayed, often long after access has already been granted.

When access decisions remain tied to the conditions present at login, trust persists even as the underlying risk profile degrades.

These gaps are most visible across access paths that fall outside modern conditional access coverage, including legacy protocols, remote access tools, and non-browser-based workflows. In these cases, access decisions are often made with limited context, and trust is extended beyond the point where it is justified.

Attackers are increasingly exploiting these blind spots by reusing misplaced trust rather than breaking authentication: stealing session tokens, abusing compromised endpoints, or working around multi-factor authentication.

After all, it’s easier to log in than break in. A valid identity presented from the wrong device remains one of the most reliable ways to bypass modern controls and fly under the radar.

Verizon’s Data Breach Investigations Report found stolen credentials are involved in 44.7% of breaches.

 


Why Zero Trust often falls short

Zero Trust is widely accepted as a security principle, but it is applied far less consistently across workforce access. While identity controls have matured, progress frequently stalls at the device layer, particularly across access paths outside browser-based or modern conditional access frameworks, where trust is inherited by default.

Establishing device trust introduces complexity that identity alone cannot address. Unmanaged and personal devices are difficult to assess consistently, compliance checks are often static rather than continuous, and enforcement varies depending on how access is initiated.

These challenges are compounded when identity and endpoint signals are handled by separate tools that were never designed to work together. The result is fragmented visibility and inconsistent decisions.

Over time, access policies can harden and become static, creating more opportunities for identity abuse. When access is granted without ongoing checks, traditional controls are slow to detect and respond to malicious behavior.

From identity checks to continuous access verification

Addressing static, identity-centric access controls requires mechanisms that remain effective after authentication and adapt as conditions change.

Solutions such as Infinipoint operationalize this model by extending trust decisions beyond identity and maintaining enforcement as conditions evolve.

Infinipoint extends trust decisions beyond identity with continuous device verification.

The following measures focus on closing the most common access failure points without disrupting how people work.

  • Verify both user and device continuously: This approach reduces the effectiveness of stolen credentials, session tokens, and multi-factor authentication bypass techniques by ensuring access is tied to a trusted endpoint rather than granted on identity alone.
  • Apply device-based access controls: Device-based access controls make it possible to enroll approved hardware, limit the number and type of devices per user, and differentiate between corporate, personal, and third-party endpoints. This prevents attackers from reusing valid credentials from untrusted devices.
  • Enforce security without defaulting to disruption: Proportionate enforcement allows organizations to respond to risk without unnecessarily interrupting legitimate work. This includes conditional restrictions and grace periods that give users time to resolve issues while maintaining security controls.
  • Enable self-service remediation to restore trust: Self-guided, one-click remediation for actions such as enabling encryption or updating operating systems allows trust to be restored efficiently, reducing support tickets and demand on IT teams while keeping security standards intact.
Infinipoint’s remediation toolbox gives users one-click steps to fix device compliance issues.
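The measures above can be sketched as a single access-decision function that weighs device posture alongside identity. This is a minimal Python illustration; the names and posture fields are hypothetical, not any product's actual API.

```python
# A minimal sketch of an access decision combining identity and device posture.
# All names here are hypothetical illustrations, not a real product's API.
from dataclasses import dataclass

@dataclass
class DevicePosture:
    managed: bool          # enrolled in device management
    disk_encrypted: bool
    os_up_to_date: bool

def access_decision(identity_ok: bool, device: DevicePosture) -> str:
    if not identity_ok:
        return "deny"
    if device.managed and device.disk_encrypted and device.os_up_to_date:
        return "allow"
    if device.managed:
        # Known device with a fixable gap: restrict and prompt self-remediation.
        return "allow-restricted"
    return "deny"          # valid credentials presented from an untrusted device

print(access_decision(True, DevicePosture(True, True, False)))
```

The key property is the last branch: a valid identity on an unmanaged device is denied, which is precisely the stolen-credential scenario the article describes. Re-running the same check during a session, not just at login, is what turns this into continuous verification.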

Specops, the Identity and Access Management division of Outpost24, delivers these controls through Infinipoint, enabling zero trust workforce access that verifies both users and devices at every access point and continuously throughout each session across Windows, macOS, Linux, and mobile platforms.

Talk to a Specops expert about enforcing device-based Zero Trust access beyond identity.

Sponsored and written by Specops Software.

Half of US Adults Who Use Social Media Want Better Labels on AI Posts, CNET Finds

Anyone who’s scrolled social media lately knows that AI is everywhere. But we aren’t always great at spotting it when we see it. That’s a big problem, and our frustrations with AI are growing.

AI slop has infected every platform, from soulless images to bizarre videos and superficially literate text. The vast majority of US adults who use social media (94%) believe they encounter content that was created or altered by AI, but only 44% of US adults say they’re confident they can tell real photos and videos from AI-generated ones, according to an exclusive CNET survey.

Read more: AI Slop Is Destroying the Internet. These People Are Fighting to Save It.

There are a lot of different ways people are fighting back against AI content. Some solutions are focused on better labels for AI-created content, since it’s harder than ever to trust our eyes. Of the 2,443 respondents who use social media, half (51%) believed we need better AI labels online. Others (21%) believe there should be a total ban on AI-generated content on social media. Only a small group (11%) of respondents say they find AI content useful, informative or entertaining.

AI isn’t going anywhere, and it’s fundamentally reshaping the internet and our relationship with it. Our survey shows that we still have a long way to go to reckon with it.

Key findings

  • Most US adults who use social media (94%) believe they encounter AI content on social media, yet far fewer (44%) can confidently distinguish between real and fake images and videos.
  • Many US adults (72%) said they take action to determine if an image or video is real, but some don’t do anything, particularly among Boomers (36%) and Gen Xers (29%).
  • Half of US adults (51%) believe AI-generated and edited content needs better labeling. 
  • One in five (21%) believe AI content should be prohibited on social media, with no exceptions.

Watch this: AI Is Indistinguishable From Reality. How Do We Spot Fake Videos?

US adults don’t feel they can spot AI media

Seeing is no longer believing in the age of AI. Tools like OpenAI’s Sora video generator and Google’s Nano Banana image model can create hyperrealistic media, with chatbots smoothly assembling swaths of text that sound like a real person wrote them. 

So it’s understandable that a quarter (25%) of US adults say they aren’t confident in their ability to distinguish real images and videos from AI-generated ones. Older generations, including Boomers (40%) and Gen X (28%), are the least confident. If folks don’t have a ton of knowledge or exposure to AI, they’re likely to feel unsure about their ability to accurately spot AI.

People take action to verify content in different ways

AI’s ability to mimic real life makes it even more important to verify what we’re seeing online. Nearly three in four US adults (72%) said they take some form of action to determine whether an image or video is real when it piques their suspicions, with Gen Z being the most likely (84%) of the age groups to do so. The most obvious — and popular — method is closely inspecting the images and videos for visual cues or artifacts. Over half of US adults (60%) do this. 

But AI innovation is a double-edged sword; models have improved rapidly, eliminating the previous errors we used to rely on to spot AI-generated content. The em dash was never a reliable sign of AI, but extra fingers in images and continuity errors in videos were once prominent red flags. Newer AI models usually don’t make those pedestrian mistakes. So we all have to work a little bit harder to determine what’s real and what’s fake.

[Chart] You can look for discrepancies and labels to identify AI content. (Cole Kan/CNET/Getty Images)

As visual indicators of AI disappear, other forms of verifying content are increasingly important. The next two most common methods are checking for labels or disclosures (30%) and searching for the content elsewhere online (25%), such as on news sites or through reverse image searches. Only 5% of respondents reported using a deepfake detection tool or website.

But 25% of US adults don’t do anything to determine if the content they’re seeing online is real. That lack of action is highest among Boomers (36%) and those in Gen X (29%). This is worrisome — we’ve already seen that AI is an effective tool for abuse and fraud. Understanding the origins of a post or piece of content is an important first step to navigating the internet, where anything could be falsified.

Half of US adults want better AI labels

Many people are working on solutions to deal with the onslaught of AI slop. Labeling is a major area of opportunity. Labeling relies on social media users to disclose that their post was made with the help of AI. This can also be done behind the scenes by social media platforms, but it’s somewhat difficult, which leads to haphazard results. That’s likely why 51% of US adults believe that we need better labeling on AI content, including deepfakes. Support was strongest among Millennials and Gen Z, at 56% and 55%, respectively.

[Chart] Very few (11%) found AI content useful, informative or entertaining. (Cole Kan/CNET/Getty Images)

Other solutions aim to control the flood of AI content shared on social media. All of the major platforms allow AI-generated content, as long as it doesn’t violate their general content guidelines — nothing illegal or abusive, for example. But some platforms have introduced tools to limit the amount of AI-generated content you see in your feeds; Pinterest rolled out its filters last year, while TikTok is still testing some of its own. The idea is to give every person the ability to permit or exclude AI-generated content from their feeds.

But 21% of respondents believe that AI content should be prohibited on social media altogether, no exceptions allowed. That number is highest among Gen Z at 25%. When asked if they believed AI content should be allowed but strictly regulated, 36% said yes. Those low percentages may be explained by the fact that only 11% find AI content provides meaningful value — that it’s entertaining, informative or useful — and that 28% say it provides little to no value.

How to limit AI content and spot potential deepfakes

Your best defense against being fooled by AI is to be eagle-eyed and trust your gut. If something is too weird, too shiny or too good to be true, it probably is. But there are other steps you can take, like using a deepfake detection tool. There are many options; I recommend starting with the Content Authenticity Initiative’s tool, since it works with several different file types.

You can also check out the account that shared the post for red flags. Many times, AI slop is shared by mass slop producers, and you’ll easily be able to see that in their feeds. They’ll be full of weird videos that don’t seem to have any continuity or similarities between them. You can also check to see if anyone you know is following them or if that account isn’t following anyone else (that’s a red flag). Spam posts or scammy links are also indications that the account isn’t legit.

If you want to limit the AI content you see in your social feeds, check out our guides for turning off or muting Meta AI in Instagram and Facebook and filtering out AI posts on Pinterest. If you do encounter slop, you can mark the post as something you’re not interested in, which should indicate to the algorithm that you don’t want to see more like it. Outside of social media, you can disable Apple Intelligence, the AI in Pixel and Galaxy phones and Gemini in Google Search, Gmail and Docs.

Even if you do all this and still get occasionally fooled by AI, don’t feel too bad about it. There’s only so much we can do as individuals to fight the gushing tide of AI slop. We’re all likely to get it wrong sometimes. Until we have a universal system to effectively detect AI, we have to rely on the tools we have and our ability to educate each other on what we can do now.

Methodology

CNET commissioned YouGov Plc to conduct the survey. All figures, unless otherwise stated, are from YouGov Plc. The total sample size was 2,530 adults, of which 2,443 use social media. Fieldwork was undertaken Feb. 3 to 5, 2026. The survey was carried out online. The figures have been weighted and are representative of all US adults (aged 18 plus).

What You’ll Find Inside a Massive Water-Cooled Intel 3000W Power Supply

A 3000W power supply usually conjures up images of huge metal boxes with noisy fans pumping air over massive heat sinks, but this Intel reference unit blasts that idea out of the water, or rather keeps it cool: a water-cooling system does away with fans entirely and keeps everything sealed, quiet, and clean.



Intel engineers designed this beast of a device for data centers and servers, where squeezing as much power as possible into a small space is the name of the game. It accepts 240-volt AC input and outputs 12 volts DC at up to 250 amps, good for the full 3,000 watts. Water flows through the back of the unit via quick-release fittings, carrying heat away without the need for a noisy internal fan or traditional heat-sink fins, and the entire thing remains sealed and silent with no internal moving parts, relying on external coolant to keep things from overheating.

Inside, things get a little trickier, because the enclosure is a solid black box with no vents to speak of. Remove the screws and you’ll discover two chunky circuit boards sandwiched around a large cold plate. The components all press against the cold plate through thermal pads, and the circulating fluid then transports the heat to the outside world. The configuration splits the chores neatly, with one board handling power-factor correction on the input side and the other handling the primary DC-DC conversion.

Input protection is the first priority, with fuses, surge arrestors, metal-oxide varistors, gas-discharge tubes, and common-mode chokes all doing their part to keep unpleasant transients from the mains supply from causing issues. Following that, bulk capacitors smooth out the rectified line voltage. The power-factor-correction stage employs a neat technique known as an interleaved totem-pole topology to reduce ripple and enhance efficiency. The GaN transistors in that section are from Texas Instruments and carry a 600-volt rating; they are also fast-switching (which is good) and low-loss, helping the unit achieve its 80 Plus Platinum certification.

The power is subsequently transferred to the phase-shifted full-bridge section on the other board. Silicon carbide MOSFETs are the high-voltage switching components here, paired with a transformer that steps the voltage down for us. The litz wire on the primary side is wound to reduce losses as the frequency increases, while synchronous rectifiers on the secondary side work their magic to provide even greater efficiency. Output filtering is then provided by a large inductor and 9,000 microfarads of polymer capacitors, all working together to deliver a clean 12 volts DC.

Of course, there have to be some control circuits in there somewhere to keep things running smoothly; a Texas Instruments C2000 microcontroller manages the PFC stage, while a PIC24 handles overall supervision. To keep things nice and safe, there are digital isolators (which are really brilliant, in my opinion) and auxiliary flyback converters that provide standby power as necessary.

Some of the design choices on show here are quite impressive. The cold-plate design allows the unit to chug out a respectable 2,500 watts per liter, roughly three times the power density of average air-cooled supplies, which might only scrape in at around 800 watts per liter. Advanced semiconductors like GaN and SiC really help to keep losses to a minimum, but it’s the water block that proves essential at full load, where losses total around 300 watts.
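The headline figures are easy to sanity-check with a quick back-of-the-envelope calculation, using only the numbers quoted above:

```python
# Sanity-check the quoted figures for the 3000W reference supply.
output_w = 12 * 250                    # 12 V DC at up to 250 A
print(output_w)                        # 3000 W, matching the rating

power_density = 2500                   # claimed watts per liter
print(f"implied volume: {output_w / power_density:.1f} L")
print(f"vs air-cooled:  {power_density / 800:.1f}x denser")   # vs ~800 W/L typical

loss_w = 300                           # dissipated at full load
print(f"full-load efficiency: {output_w / (output_w + loss_w):.1%}")
```

The implied volume is about 1.2 liters, the density advantage over air-cooled units works out to roughly 3x, and a 300-watt loss at full output puts efficiency near 91 percent, in the ballpark of the Platinum-class rating mentioned earlier.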
The EU’s strategic rebalancing of research partnerships with China

In 2026, one of Europe’s most ambitious scientific ventures, Horizon Europe, a seven-year, roughly €93 billion framework dedicated to research and innovation, underwent a quiet but significant transformation. 

What had once been an open invitation to researchers across the globe now carries a more guarded tenor. 

In critical areas such as artificial intelligence, semiconductors, quantum technologies, and biotechnology, organisations based in China are no longer automatically eligible to receive EU funding, a sharp deviation from earlier years when Chinese participation was possible, albeit under evolving conditions. 

This change is neither arbitrary nor purely technical. It reflects the culmination of years of negotiation and strategic signalling in Brussels. 

According to the European Commission’s own international cooperation guidance, cooperation with third countries like China has always been conditional; Chinese researchers may contribute, but they are required to enter as Associated Partners and often must bring their own funding where EU funding does not automatically apply. 

Yet the updated participation rules go further. 

In late 2025, the Commission codified conditions that essentially block Chinese institutions from receiving core Horizon Europe grants in sensitive clusters of research and innovation. 

In policy terms, the threshold for inclusion has shifted: European partners must now demonstrate that their collaborators are not owned or controlled by Chinese entities, creating de facto barriers for significant portions of bilateral work in cutting-edge fields. 

Cooperation has not been extinguished outright; joint work continues in areas like climate science and agriculture under bilateral road-map mechanisms. Still, this recalibration is telling.

It amounts to Europe drawing boundaries around where it will share its most prized scientific infrastructure and intellectual capital and where it will withhold it. 

The official justifications, as framed by Commission texts, lean heavily on concerns about research security, intellectual property protection, and the perceived risk of unintended transfers of strategic technology where civil and military boundaries blur. 

Viewed in isolation, these adjustments might read as bureaucratic fine-tuning. But in the broader context of EU policy, which straddles an ambition for open scientific cooperation and an emergent emphasis on strategic autonomy, they underscore a fundamental tension. 

Europe still champions collaborative discovery across borders, yet it acknowledges that today’s research ecosystem is intertwined with global power dynamics in once unimaginable ways.

Beyond the sharp edges of eligibility rules lies a deeper question: why does this particular rebalancing matter in practice?

Over the past decade, China has become increasingly visible in global scientific networks. Its researchers regularly co-authored papers with European counterparts, and its rapidly expanding domestic science base, often supported through state mechanisms, moved from peripheral to central positions in disciplines ranging from materials science to computational biology. 

Yet in the architecture of Horizon Europe that emerges in 2026, participation is no longer synonymous with access to EU funds.

Chinese entities still can contribute to research proposals, but they do so as Associated Partners and typically must bring their own financing, a distinction that subtly but fundamentally changes the incentives and power dynamics of collaboration.

In practical terms, the new rules change how research consortia form and operate. European institutions seeking to work at the frontier of emerging technologies must now factor in eligibility constraints when structuring partnerships.

Where once multinational consortia could mix researchers from across continents with minimal procedural friction, they now must design collaborations that either exclude certain partners from funding streams or justify their presence through alternative mechanisms. 

This places a renewed premium on legal expertise, consortium management, and alignment with EU strategic priorities, an additional administrative layer that did not exist to the same degree in earlier cooperation frameworks. 

These restrictions could have unintended intellectual or scientific consequences. When large research systems are pushed to the margins, there is a risk that parallel ecosystems evolve, with reduced interoperability between them. In the long term, this could alter citation networks, collaborative norms, and research mobility patterns. 

It could also prompt other powerful actors to adopt similar measures, reshaping the landscape of global science into distinct blocs defined by policy fences rather than open inquiry.

It’s important to emphasise that the EU has not abandoned bilateral scientific engagement outright.

Mechanisms outside Horizon Europe, including mobility schemes and targeted co-funding instruments designed to support researcher exchanges, continue to exist, and cooperation on transnational challenges such as climate change and biodiversity remains active.

What has changed is the weight of strategic calculation in decisions about where and how to invest EU funding. As a result, science policy in Europe now sits at the intersection of research excellence, economic sovereignty, and geopolitical strategy.

For Europe’s research community, this presents a complex set of questions. Does tighter control over strategic collaborations strengthen the European innovation base? Or does it risk isolating European science from talent and knowledge flows? 

The answer is unlikely to be binary. 

What is clear, however, is that Horizon Europe, once known chiefly as a vehicle for excellence and discovery, is now also a mirror of shifting geopolitical realities, showing how science policy has become part of broader efforts to navigate uncertainty in a multipolar world.

In the end, the EU’s decision to redraw the terms of research partnership with China feels less like a closing door and more like a recalibration of Europe’s compass. It acknowledges a world in which scientific discovery and geopolitical currents are no longer parallel tracks but deeply intertwined. 

The Horizon Europe programme, once the grand symbol of open scientific cooperation, now also stands as a marker of strategic foresight, a space where Europe seeks to balance openness with caution, curiosity with control.

This turning point doesn’t signal a retreat from global engagement. 

What it does reflect is a modern realpolitik of research: where funding decisions are informed not only by scientific merit but by questions of security, reciprocity, and long-term technological sovereignty. 

In a landscape defined by rising competition over frontier technologies, Europe is choosing to hedge its bets, opening some doors wider while tightening others. The future of scientific collaboration may be neither total isolation nor full openness but a nuanced choreography between cooperation and strategic self-interest.

Russian hacker uses multiple AI tools to break hundreds of firewalls


  • Russian hacker brute-forced FortiGate firewalls using weak credentials
  • AI-generated scripts enabled data parsing, reconnaissance, and lateral movement
  • The campaign targeted Veeam servers; attacker abandoned hardened systems

A Russian hacker was recently seen brute-forcing their way into hundreds of firewalls. What makes this campaign stand out is that the seemingly low-skilled threat actor was able to pull off the attacks with the help of generative artificial intelligence (GenAI).

In a new analysis, Amazon Integrated Security CISO CJ Moses explained how researchers observed a threat actor “systematically” scanning for exposed FortiGate management interfaces across ports 443, 8443, 10443, and 4443.
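For defenders wondering whether their own firewall answers on those same management ports, a minimal self-audit sketch might look like the following. This is my own illustration, not part of Amazon's analysis; the `exposed_ports` helper is hypothetical, and it simply attempts TCP connections from the outside.

```python
import socket

# Ports the attacker reportedly scanned for exposed FortiGate
# management interfaces, per the article.
MGMT_PORTS = [443, 8443, 10443, 4443]

def exposed_ports(host: str, ports=MGMT_PORTS, timeout=2.0):
    """Return the subset of `ports` that accept TCP connections on `host`."""
    open_ports = []
    for port in ports:
        try:
            # create_connection raises OSError if the port is closed,
            # filtered, or the host is unreachable within the timeout.
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass
    return open_ports

if __name__ == "__main__":
    # Run this from OUTSIDE your network against your own public IP;
    # an empty list means none of the management ports answered.
    print(exposed_ports("127.0.0.1"))
```

Anything this check reports as open is reachable by the same mass scanners the article describes, which is exactly why hardening guidance typically says management interfaces should never face the public internet.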

Tofu brine could power safer batteries that last decades, researchers say

The design replaces the complex, flammable chemistry of lithium-ion batteries with an electrolyte that’s as safe as saltwater. In lab tests, the prototype endured more than 120,000 charge cycles, an endurance record that far exceeds today’s commercial standards. Typical electric-vehicle batteries degrade after just a few thousand cycles – even…

Best Puffer Jacket (2026): Patagonia, Arc’teryx, REI

Mountain Hardwear’s Ghost Whisperer UL hoodie has been a popular pick among ultralight backpackers since it was introduced a few years ago. It remains the best puffer jacket for anyone trying to shave every last ounce off their pack weight. It weighs just 6.7 ounces for a men’s medium (7.3 ounces for the men’s large I tested), packs down to a tiny little thing (stuffing into its own pocket), and the 1,000-fill-power goose down offers one of the best warmth-to-weight ratios on the market. The very lightweight shell material is a mix of 5D and 7D ripstop nylon, which is a bit more fragile than heavier jackets’ shells, but it has held up well so far in my testing. I can safely say that the Ghost Whisperer UL is everything I ever wanted in an ultralight down puffer and then some.

What sets it apart from some other very nice puffers out there are the little details. First there’s the 1.9 ounces of 1,000-fill-power down, which is as high a fill power as you’ll find in a jacket of this class, meaning you’re getting the maximum warmth and loft for the least amount of weight. My only caveat: if you are the type of person who gets cold easily, you might want something with a bit more fill. The classic Ghost Whisperer Down Hoody (not the ultralight) has 3 ounces of 800-fill-power down and is slightly warmer in most scenarios, the trade-off being that it’s heavier as well (about 9 ounces for a men’s medium). Also check out the Katabatic Gear puffer below, which is considerably warmer. I do not get cold easily, and I have found the Ghost Whisperer UL works well for me as a warm layer to throw on in camp at high elevation in summer, a mid-layer for hiking in cold conditions, and a mid-layer under the Rab Glaceon Pro in extreme cold.

Other details that make the Ghost Whisperer UL our top pick for ultralight hiking include two very nice zippered hand pockets with a good amount of space to stash little stuff like a three-season hat and some gloves, along with an elastic hem at the waist to keep drafts out. I also love how small this thing packs down, well under the size of a 1L bottle (see photo). It packs into its own left pocket with a reversible zipper, although it will stuff down even smaller if you get a separate stuff sack.

My only gripe about this jacket is that there are no drawstrings: the hood, cuffs, and waist hem are all elastic. This works fine for the cuffs and hood, but I wish there were a drawstring for the waist. For this reason, if I am expecting temps below 40 degrees Fahrenheit, I bring a heavier puffer. The rest of the time, this is what you’ll find in my backpack. Note that I found the fit to run a little small. According to the fit guide on the Mountain Hardwear website, I am right between medium and large; I tried both and found the large fit much better.

Specs:
Down fill power: 1,000
Fill weight: 1.9 oz.
Weight: 6.7 oz.

Which Luxury Brand Has Lower Maintenance Costs?

Purchasing a new car is hardly an easy task, even if you’re shopping in the more budget-friendly quarters of the market. But the process can be even more daunting for folks shopping in the luxury vehicle category, as there tends to be more money at stake up front. If you are eyeing a new ride in that corner of the market, there’s a good likelihood that vehicles from BMW and Toyota’s luxury marque, Lexus, are on your radar.

Those vaunted auto brands have essentially become permanent fixtures on yearly lists of the best-selling luxury makes. If you’ve been comparing the two yourself, you likely noticed that, at least at the point of purchase, BMW models will cost you a few more Benjamins than their Lexus counterparts. But in the luxury automobile sector, maintenance should also factor heavily into your decision-making process, as it can be expensive to keep those vehicles looking and running the way any owner would expect from a high-priced ride.

It can, however, be difficult to properly estimate maintenance costs on your own. As such, consumer ratings organizations like Consumer Reports (CR) can be invaluable in helping you crunch the numbers. And according to CR, in the long run, the estimated cost of maintaining a BMW may be considerably higher than that of a Lexus. For the record, several other outlets, including SoFi and CarEdge, also rank Lexus well ahead of BMW in this category, though there is more to the numbers to consider.

The maintenance numbers are tricky between Lexus and BMW

Given Lexus’s ties to the Toyota brand, it’s not entirely shocking that the brand is cheaper to maintain. Save for a few recent issues, Toyota is well-known in the automobile arena for reliable vehicles that don’t cost much to properly maintain. To that end, both SoFi and CarEdge rank Lexus as one of the best luxury options on the market in terms of maintenance costs.

Though the numbers vary, Consumer Reports’ estimates tell the same story. But they aren’t as cut-and-dried as you might think. In fact, per CR’s estimates, over the first five years of ownership a Lexus might actually cost more to maintain, at a potential $1,800 to BMW’s $1,700. It’s in years six through 10 that things shift dramatically, with CR estimating it may cost $9,300 to maintain a BMW versus $5,600 for a Lexus. Consumer Reports’ 10-year estimates thus break down to $11,000 for BMW and $7,400 for Lexus. While the other noted outlets claim the cost could run closer to $16,000 or more for the German brand, their overall Lexus numbers are more in line with CR’s estimate.
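The 10-year figures CR cites are simply the sums of the two five-year windows, which is easy to verify. A quick arithmetic check (the dictionary layout and variable names are my own; the dollar figures are CR’s estimates as reported above):

```python
# Consumer Reports' estimated maintenance costs in USD, as cited above
cr_estimates = {
    "BMW":   {"years_1_5": 1_700, "years_6_10": 9_300},
    "Lexus": {"years_1_5": 1_800, "years_6_10": 5_600},
}

# Each brand's 10-year figure is the sum of its two five-year windows
ten_year = {brand: sum(windows.values()) for brand, windows in cr_estimates.items()}

print(ten_year)  # {'BMW': 11000, 'Lexus': 7400}
print(f"10-year savings with Lexus: ${ten_year['BMW'] - ten_year['Lexus']:,}")
```

The sums match CR’s published 10-year totals, and put the estimated decade-long savings with Lexus at $3,600.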

There is one caveat to consider regarding these numbers: CR reportedly only accounts for costs paid directly by the vehicle’s owner, effectively ignoring services covered by complimentary manufacturer maintenance plans. That may account for variances among the cited estimates. And with new BMWs getting three years of complimentary service from the manufacturer, compared to Lexus’s one-year plan, the overall numbers could see a notable shift.

How To Properly Wash A Microfiber Cloth After Cleaning Your Car

Microfiber cloths are an eco-friendly and versatile cleaning tool that many of us keep around the house. They serve a multitude of purposes, from wiping down your countertops and polishing your eyeglasses to cleaning your electronics and detailing your car. They’re a great choice when you want to polish a surface without scratching it, and many people use them to hand wash their cars. But what’s the best way to clean them after you use them, or even after they’ve simply hit the ground?

Microfiber cloths, as the name implies, are made up of countless synthetic fibers that carry an electrostatic charge. It’s this charge that attracts dirt so well; maintaining it means washing the cloths in ways that don’t limit their abilities. Thankfully, you don’t need to wash microfiber cloths by hand; your washing machine will do the job. However, if the towels are extremely dirty, you may want to give them a rinse in the sink beforehand. That said, you should wash them on their own and not with other clothing or towels, especially anything cotton or wool, as this can make it harder for all those synthetic fibers to keep attracting dust and dirt.

Once you toss those microfiber cloths safely into the washing machine, set the water to warm, which will help loosen the dirt and clean them more thoroughly. Use gentle detergent and avoid fabric softeners and bleach, which can also damage the fibers. Stick to the gentle or normal cycle to minimize wear and tear caused by heavy agitation. If you want to ensure that the cloths are clean, you can add an extra rinse to the cycle. Once the microfiber cloths come out of the washer, you can either allow them to air dry or throw them in the dryer on low heat. Once they’re clean and dry, they should be ready for another round!

How and why to use microfiber to clean your car

Visit the car detailing aisle at your local big box store and you’ll be greeted with a multitude of sponges, washing mittens, and towels. However, the pros often choose microfiber because it’s both soft and dense, and all those small fibers it’s made of easily pick up tiny dirt particles without being abrasive. They’re also very absorbent and don’t tend to leave behind streaks or lint, making them a better choice for washing your vehicle than sponges or an old rag. They’re also cost-effective, as you can use them again and again as long as you clean them properly.

To avoid cross-contamination and the potential for scratches, use a separate microfiber cloth for each section of your car when you wash it, including the tires, the body, and the windows. Glass-specific microfiber cloths for your windshield can be particularly helpful in keeping it free of streaks. Never use the cloth on dry paint; always rinse your vehicle first to remove loose dirt and reduce the risk of scratches. You may want to use one bucket for soapy water and another with clean water to rinse the cloth as you wash. Instead of using a circular motion, wipe in straight lines when using a microfiber cloth, as this helps reduce any marks that may be left over when you’re done.

To dry your vehicle, use another clean cloth. Instead of wiping, gently pat with the cloth, which again will help you avoid scratches, smudges, and streaks. Finally, never reuse a microfiber cloth without washing it first; even if it looks clean, dirt trapped in the fibers may scratch the paint the next time around.



I spent a day at an elite hi-fi show to pick out 6 affordable speakers and hi-res players even I’d buy, so maybe you can too

I wore my boots down at the Bristol Hi-Fi Show, held February 20-22, checking out some of the best stereo speakers, and I wrote a whole listicle on some of the premium audio gadgets that will absolutely shatter your bank account.

But at the annual show, I also saw a decent amount of kit that wasn’t so wallet-shredding. I enjoyed listening to plenty of headphones and speakers which you and I could even end up buying — I mean, if my landlady decided I could be exempt from rent for a month or two.


Copyright © 2025