
Tech

The Samsung Galaxy Z TriFold is now available to buy at the official Samsung Store with a price tag that will make your bank account cry


The much-awaited Galaxy Z TriFold is now available to purchase at the official Samsung store for the princely sum of $2,899. Yikes, that’s an eye-watering amount for sure, but one that we were very much expecting.

Unlike previous launches, there don’t appear to be any trade-in rebates or plan tie-in deals, but you can get a bunch of discounted accessories. Samsung is also including a few limited-time subscriptions to its Health Premium app, Adobe Lightroom, and Google Gemini, among others.


The Samsung Galaxy Z TriFold feels like the culmination of everything Samsung has learned since the brand first debuted its foldable devices. We saw the TriFold in person at CES 2026, and it’s an impressively thin, relatively lightweight device that almost impossibly unfolds into a near-flat 10-inch tablet. It’s a striking reminder of how far foldables have come in just a few years, and it could just replace both your phone and your tablet.

Much of that success comes down to thoughtful engineering. The TriFold uses a two-hinge design that fully protects the flexible display when folded. Unfolded, the device lies almost completely flat, with barely visible creases and a screen that’s ideal for multitasking, productivity, and even serving as a secondary display for a PC. Both the 6.5-inch cover screen and the expansive inner display are bright, responsive, and well-suited to Samsung’s Galaxy AI features, which benefit noticeably from the extra screen real estate.


Under the hood, the TriFold shares a lot of the same components as the Fold 7. It features a Snapdragon 8 Elite for Galaxy chipset, 16GB of RAM, 512GB of storage, and a primary 200MP camera with ultrawide and telephoto lenses. These are very much flagship-level specs, so expect excellent performance here, although we can’t fully report on that until we’ve extensively tested the device. Stay tuned for our full review.

Samsung Galaxy Z TriFold vs. Galaxy Z Fold 7 specs

Dimensions (folded):
Galaxy Z TriFold: 75.0 x 159.2 x 12.9mm
Galaxy Z Fold 7: 72.8 x 158.4 x 8.9mm

Dimensions (unfolded):
Galaxy Z TriFold: 214.1 x 159.2 x 3.9mm (center screen only; 4.0mm on the button side, 4.2mm on the SIM tray side)
Galaxy Z Fold 7: 143.2 x 158.4 x 4.2mm

Weight:
Galaxy Z TriFold: 309g
Galaxy Z Fold 7: 215g

Main display:
Galaxy Z TriFold: 10-inch QXGA+ Dynamic AMOLED 2X (2160 x 1584, 269ppi), adaptive 1-120Hz refresh rate
Galaxy Z Fold 7: 8-inch QXGA+ Dynamic AMOLED 2X (2184 x 1968), adaptive 1-120Hz refresh rate

Cover display:
Galaxy Z TriFold: 6.5-inch FHD+ Dynamic AMOLED 2X (2520 x 1080, 422ppi), adaptive 1-120Hz refresh rate
Galaxy Z Fold 7: 6.5-inch FHD+ Dynamic AMOLED 2X (2520 x 1080, 21:9), adaptive 1-120Hz refresh rate

Chipset:
Galaxy Z TriFold: Qualcomm Snapdragon 8 Elite Mobile Platform for Galaxy
Galaxy Z Fold 7: Qualcomm Snapdragon 8 Elite Mobile Platform for Galaxy

RAM:
Galaxy Z TriFold: 16GB
Galaxy Z Fold 7: 12GB / 16GB (1TB model only)

Storage:
Galaxy Z TriFold: 512GB
Galaxy Z Fold 7: 256GB / 512GB / 1TB

OS:
Galaxy Z TriFold: Android 16 / One UI 8
Galaxy Z Fold 7: Android 16 / One UI 8

Primary camera:
Galaxy Z TriFold: 200MP f/1.7
Galaxy Z Fold 7: 200MP f/1.7

Ultrawide camera:
Galaxy Z TriFold: 12MP f/2.2
Galaxy Z Fold 7: 12MP f/2.2

Telephoto camera:
Galaxy Z TriFold: 10MP f/2.4 (3x)
Galaxy Z Fold 7: 10MP f/2.4 (3x)

Cover camera:
Galaxy Z TriFold: 10MP f/2.2
Galaxy Z Fold 7: 10MP f/2.2

Inner camera:
Galaxy Z TriFold: 10MP f/2.2
Galaxy Z Fold 7: 10MP f/2.2

Battery:
Galaxy Z TriFold: 5,600mAh
Galaxy Z Fold 7: 4,400mAh

Charging:
Galaxy Z TriFold: 50% in 30 mins with 45W fast charger (wired)
Galaxy Z Fold 7: 50% in 30 mins with 25W adapter (wired)

Colors:
Galaxy Z TriFold: Crafted Black
Galaxy Z Fold 7: Blue Shadow, Silver Shadow, Jetblack; Mint (Samsung.com exclusive)


Some of Samsung’s best new Galaxy S26 AI features are headed to the S25


Samsung is apparently not keeping all of the best new Galaxy AI tricks locked to the Galaxy S26 series. The company has confirmed that it is working on a software update that will bring over newer AI features that were first introduced on the latest flagship lineup.

How community backlash changed Samsung’s plans

After the Galaxy S26 announcement, reports hinted that new software features may not trickle down to the Galaxy S25 series, which led to backlash from the community. But a Samsung Community moderator shared a post that thanked users for their feedback and confirmed that certain features are arriving on older flagships.

Breaking 🔥

Galaxy S25 Series users 👋

A moderator confirms a new update is coming, bringing Galaxy S26 AI features:

• Call Screening
• User-friendly enhancement improvements

REPOST 🔁 pic.twitter.com/vmvgxbAw6z

— Tarun Vats (@tarunvats33) April 6, 2026

What’s coming with the new update?

The biggest confirmed addition so far is AI-powered call screening, which Samsung specifically mentioned in the moderator message. This feature debuted with One UI 8.5 on the Galaxy S26 series and is now planned for the Galaxy S25, Galaxy S25+, and Galaxy S25 Ultra via a future update. The brand appears to have changed its decision following backlash from Galaxy S25 owners who were unhappy about the possibility of missing out on some of Samsung’s latest AI tools.


Finer details of the big update are still scarce, and Samsung hasn’t published a final list of every Galaxy S26 AI feature that is coming to the S25 series. The company has promised “additional features and usability improvements” for the Galaxy S25 series, suggesting call screening may not be the only upgrade in the pipeline.

The One UI 8.5 update is expected to arrive as the next major stable release, though Samsung hasn’t announced a rollout date. Either way, at least part of the Galaxy S26’s AI package is heading to the Galaxy S25 models.


Climate-tech startup Satellites on Fire has raised $2.7M


Satellites on Fire, founded in 2020 as a school project by three Argentine teenagers, has closed a seed round led by Dalus Capital. Its software-only platform integrates satellite data from multiple agencies and detects fires faster than NASA’s FIRMS system by avoiding the gaps between satellite passes.


Argentine climate-tech startup Satellites on Fire has closed a $2.7 million seed round led by Dalus Capital, with participation from Draper Associates, Draper Cygnus, VitaminC, Savia Ventures, Avesta Fund, Reciprocal, Zenani Capital, Innventure, Air Capital, Gain VC, Antom VC, and Embarca Tech.

The company builds an AI-powered wildfire detection platform that integrates satellite imagery, tower cameras, fire propagation modelling, and real-time alerts, and says its system detects fires on average 35 minutes ahead of NASA’s FIRMS service.

The company was founded in 2020 by Franco Rodriguez Viau, Ulises López Pacholczak, and Joaquín Chamo, then secondary school students at ORT Buenos Aires, after family friends of Rodriguez Viau lost their homes to wildfires in Córdoba.


What began as a school project was rebuilt from scratch after the founders interviewed more than 80 firefighters and emergency responders and concluded their first version was not operationally useful. Rodriguez Viau is now 22 and serves as CEO.

MIT Technology Review’s Spanish edition named him among its 35 Innovators Under 35 for Latin America in 2025.

The platform’s edge over existing systems lies in satellite coverage density. NASA’s FIRMS service draws on a smaller number of satellites with revisit intervals that can leave multi-hour gaps over Latin American territories.

Satellites on Fire aggregates imagery from more than eight satellites across NASA, NOAA, and the European Space Agency, updated as frequently as every five minutes, and applies its own AI models to detect heat signatures and generate spread simulations.


The result, the company says, is detection that consistently precedes NASA alerts by around 35 minutes, which it describes as the critical window for effective early containment. In November 2025, Newsweek reported a documented case in Argentina in which the system detected a fire at 1:40 a.m., seven hours ahead of NASA’s alert.

The commercial model is software-as-a-service, with pricing ranging from $0.02 to $10 per hectare annually depending on service tier. The platform currently monitors territory across 21 countries on four continents, with more than 55,000 users and a training dataset built from over 20,000 field-validated fire reports, which the company describes as the largest such database in Latin America.

In 2025, the system was involved in the response to more than 600 wildfires, according to the company. Clients include forestry companies, agricultural enterprises, energy utilities, carbon credit projects, insurers, and government agencies. Aon has integrated the platform into all of its forestry insurance policies across Latin America for risk calculation and premium pricing.

The new capital will fund expansion into the United States market, where the company is already running pilots and has a partnership with Watch Duty, the non-profit wildfire tracking platform.


It will also be used to optimise AI models, launch a parametric wildfire insurance product in partnership with Aon, and build an intelligence dashboard for client protection planning.

Rodriguez Viau has previously said the company intends to eventually move into suppression technology using drones. The US is the primary new target: wildfires are estimated to cost the country hundreds of billions of dollars annually, and the 2025 Los Angeles fires sharpened political and commercial attention on detection gaps.

John Mills, CEO of Watch Duty and an advisor to Satellites on Fire, said the platform’s results with existing satellite data had ‘genuinely astounded’ his team. Diego Serebrisky, co-founder and managing partner at lead investor Dalus Capital, framed the round as evidence that Latin American founders are producing globally competitive AI solutions in climate.

The company previously received $250,000 from Tim Draper and Adam Draper after appearing on Meet the Drapers Season 9, and has also received recognition from the UN and support from MIT and Cornell University at earlier stages.


Startup Battlefield 200 applications open until May 27


Pre-Series A founders and anyone who knows a startup worth funding, this is your reminder. Nominations for Startup Battlefield 200 are open, and the strongest contenders are already stepping forward. If your startup was nominated, don’t stop there. Submit your application today.

This is not just another pitch opportunity. You are stepping onto the main stage in front of 10,000+ attendees, top-tier VCs, and the global TechCrunch audience at TechCrunch Disrupt 2026. You are competing, getting live feedback from top VCs, and proving your company belongs.

If you have been thinking about applying or nominating a startup, waiting is the fastest way to miss out. Founders who move early gain the edge with more time to prepare, more visibility, and a stronger shot at standing out to the TechCrunch editorial team. Make your nomination, then see it through by submitting the application.

TechCrunch Disrupt 2025 Startup Battlefield
Image Credits: TechCrunch

Which startups should apply?

We’re looking for early-stage startups building ambitious, innovative, and potentially category-defining products. We accept applications globally, across all industries. Most selected companies are pre-Series A, with some Series A considered on a case-by-case basis. A functional minimum viable product (MVP) and a clear product demo are required. Above all, we back strong founders and ideas with real impact.

This is the same stage where companies like Dropbox, Discord, Fitbit, Trello, and Mint made their early mark. See who else has made it big through Battlefield 200.


Each year, thousands apply. 200 are selected to participate. 20 reach the final round to pitch live on the Disrupt Stage. Only one champion wins. Learn more and apply here.

Kevin A. Damoa, Founder & CEO, Glīd, Claire Kroft and Ankit Malhotra, winners of the Startup Battlefield 2025, pose onstage during day three of TechCrunch Disrupt 2025 at Moscone Center on October 29, 2025 in San Francisco, California.
Image Credits: Kimberly White / Getty Images

What selected startups get

  • Global exposure across TechCrunch’s audience
  • Free exhibit table for all 3 days
  • 4 all-access Disrupt passes
  • Featured startup profile in the event app
  • Press list access and lead generation opportunities
  • Exclusive founder masterclasses
  • A chance to pitch live on the Disrupt Stage
  • Direct feedback from top VCs
  • A shot at $100,000 in equity-free funding

Apply for Startup Battlefield 200 today

Applications close May 27, but the founders who win do not wait. They move early and take their shot before the competition catches up.

TechCrunch event
San Francisco, CA | October 13-15, 2026


If you are building something that could define a category, or know a founder who is, this is your moment. Nominate your startup or one that belongs in the arena. If nominated, submit your application. Don’t sit on the sidelines and miss your shot.


With Affordable Storage Options Dwindling, Where To Store Our Data?


These days our appetite for data storage is larger than ever, with video files growing bigger, photo resolutions climbing higher, and project files easily zipping past a few hundred MB. At the same time, our options for data storage are becoming more and more limited. For the longest time we could count on a newer, roomier, faster, and cheaper form of storage coming along, but those days would seem to be over.

We can look back and laugh at the low-capacity USB Flash drives of the early 2000s, yet the first hard drive to hit 1 TB, the Hitachi Deskstar 7K1000, did so back in 2007, and typical PC capacities have not really moved past that level nineteen years later.

We also had Blu-ray discs (BD) promising to cram the equivalent of dozens of DVDs onto a single disc, with two- and even four-layer BDs storing up to 128 GB. Yet today optical media, the sole remaining cheap storage option, is dying a slow death. NAND Flash storage has only increased in price, and the options for those of us with large cold storage requirements seem to shrink by the day.

So what is the economical solution here? Invest in LTO tapes using commercial left-overs, or give up and sign up for Cloud Storage™ for the low-low price of a monthly recurring fee?


It’s Not Hoarding, I Swear

Although there are many people today who use just a lightweight laptop with something like 256 GB of storage in it without any complaints, the problem would seem to lie mostly with those who are really into having local and offline data. This can include things like multimedia content, but also project files and resources, which especially in the case of video editing and game development can quickly balloon into pretty serious size requirements.

Over the decades, data storage has seen a near-constant flurry of new innovations and technologies, always with the knowledge that in ten years there would be massively larger forms of storage, or at least big price drops, to look forward to. This is how my first late-90s PC had not just a zippy Celeron 400 CPU but also a massive 4 GB HDD.

Compared to the 30 MB HDD in the 386-based system I had before, this was massive, but with multimedia content flooding in courtesy of the filesharing revolution, I quickly had to pop in a 10 GB HDD. By the time I upgraded to a new PC, that was considered small, and I found myself well above 20 GB, before soon joining the 1 TB and later the 5+ TB club. By 2012 HDDs were shipping with terabyte-class platters, so this was basically becoming unavoidable.

Meanwhile a lot of files were offloaded or backed up onto optical media, both CDs and DVDs. Although ZIP disks also briefly made an appearance in my PCs, optical discs were simply far cheaper and more universally usable.

The Problem

Sony was the only manufacturer of 128 GB writable Blu-ray discs; other manufacturers topped out at 100 GB.

Even before the current ‘AI’ datacenter-induced tripling of Flash storage costs that’s also affecting USB Flash drives and HDDs, optical media had been slowly phased out for a while. Even without checking sales numbers, you don’t have to be a genius to consider it a bad sign that manufacturers like Pioneer are exiting the optical storage market and big names like Sony ceased the production of recordable Blu-ray discs along with MiniDisc and MiniDV formats.

If you have recently shopped around for internal 5.25″ optical disc drives (ODDs) in particular, you may have noticed that these are becoming increasingly rare and expensive. Even stand-alone BD player options are becoming more limited, with that distinct ‘final gasp’ vibe that comes with a dying format, as with VHS and its kin in the past.


Part of the problem can probably be attributed to the move by content distributors, including multimedia, games, and software, away from physical media to online distribution. This takes the form of streaming services, Software-as-a-Service (SaaS), and online game stores. At this point you do not even need a BD player in your home game console, never mind your PC, to install games. Nor do you need a BD player connected to your smart TV, as you can simply join the brave new world of terminal-based subscriptions.

So with demand for optical media massively reduced by this shift, what’s left for those of us who just want to back up our data in peace and without shelling out too much hard-earned cash?

The Future

With the prospect of cheap DVD and BD blanks becoming a thing of the past, or unusable due to a lack of new optical drives to use them with, what options remain? We can look at metrics like cost per GB to see what might conceivably make sense.

The most recent 50-disc spindle of DVD+Rs that I purchased came in at just under €15 for 235 GB, so that’s about 6 cents/GB, and I could have gotten it much cheaper by going for larger spindles and shopping around some. For comparison, SSD storage was more than triple that even before the recent price surge, and HDDs are coming in around that same price tag as well.
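As a sanity check, the figures above can be reproduced with some quick back-of-the-envelope arithmetic; the spindle price and the “more than triple” SSD multiplier are the article’s own numbers, and the 4.7 GB per single-layer DVD+R is the standard capacity.

```python
# Back-of-the-envelope cost-per-GB figures from the numbers quoted above.
spindle_price_eur = 15.0        # 50-disc DVD+R spindle, just under 15 EUR
spindle_capacity_gb = 50 * 4.7  # 4.7 GB per single-layer DVD+R = 235 GB

dvd_cost_per_gb = spindle_price_eur / spindle_capacity_gb
print(f"DVD+R: ~{dvd_cost_per_gb * 100:.1f} cents/GB")  # ~6.4 cents/GB

# The article puts SSD storage at more than triple that, even pre-surge:
ssd_cost_per_gb = 3 * dvd_cost_per_gb
print(f"SSD (lower bound): ~{ssd_cost_per_gb * 100:.1f} cents/GB")
```

Shopping around for larger spindles would push the DVD figure lower still, which is what keeps optical media competitive on pure cost per GB.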

A quick look at LTO tapes and drives for sale shows that while tapes for the older LTO-8 standard from 2017 are pretty reasonable, the drives cost an absolute fortune, so you’d have to be pretty lucky to score one without having to pawn off a kidney.


Add to this that LTO tapes are only really guaranteed for a lifespan of 15-30 years and are incomparably slower due to being a linear format. This makes tape storage only really suitable for the coldest of cold storage, and not for keeping some videos around, or for game development resources that you would like to pop in and quickly query without dying from old age while a tape seeks to the appropriate position.

The Solution?

Even assuming that the current insane surge in pricing for RAM, NAND Flash, and even HDD storage is just a temporary blip, and that by the time 2027 rolls around the RAMpocalypse will just be a bad dream to meme about, the basic economics of cost per storage would still not have changed in any measurable way.

The advantage of optical media, especially DVDs, is that they’re a very simple technology, relatively speaking. While there is some impressive technology in the optical pick-up component of an ODD, over the decades they have become highly affordable commodity devices. Meanwhile the discs are very cheap to produce, being at their core just some plastic with a coating on which the bits are written, while being very durable if kept away from physical harm.

It’s also essentially guaranteed that a DVD+R or BD-R will not have its data altered, something which cannot be guaranteed with a USB Flash drive. Filesystem corruption and electrical issues may damage or even destroy the Flash drive.


Although it’s easy to say that one should simply ‘stop hoarding data’, or subscribe to a cloud storage solution with its potentially unbounded per-GB fees, high latency, and risk of data loss from a datacenter issue, there are many arguments in favor of keeping local, offline copies, and doing so on highly durable media. We just cannot be sure that optical media will remain an option in the future.

What is your take on this conundrum? How do you manage your storage needs in this modern era, and what are your plans for the future? Please feel free to sound off in the comments.


Data visualization all-stars unveil Ridge AI with $2.6M to fix the analytics problem for SaaS apps


Ridge AI co-founders Jeffrey Heer and Ellie Fields. (Ridge AI Photo)

Ellie Fields and Jeffrey Heer know data visualization from the inside: Fields spent more than 12 years as a product and marketing leader at Tableau, and Heer is the University of Washington professor whose open-source tools are widely used for web-based visualization.

But even as they and their colleagues pushed the field forward, they couldn’t escape a similar conclusion: presenting and analyzing data on the web is basically still broken.

Their solution: Ridge AI, a Seattle-based startup that uses AI and browser-based technology to help software companies build and deploy interactive dashboards and data agents in hours instead of days or months, embedding them directly in their products for use by their customers.

The company calls its core product a “ridge” — a dashboard and a data agent that share a common data set, letting users get visual context from the dashboard and ask follow-up questions through the agent.

Funding: Ridge AI is emerging from stealth Monday with $2.6 million in pre-seed funding led by Madrona. The Seattle venture capital firm’s investment was spearheaded by Managing Director Tim Porter and Venture Partner Mark Nelson, the former CEO of Tableau.


Joining in the funding is a roster of angel investors that reads like a who’s who of analytics, AI and data: Chris Stolte, Tableau co-founder and former CTO; Carlos Guestrin, co-founder of Turi and director of Stanford’s AI Lab; Adrien Treuille, founder of Streamlit; Elissa Fink, former Tableau CMO; and Jeff Hammerbacher, Cloudera founder, among others.

Target market: Although their technology could be applied broadly, Ridge AI is focusing specifically at the outset on serving software as a service (SaaS) companies, giving them a way to present rich, interactive analytics to the people and businesses that use their products. 

In an interview, Fields said the need is especially acute when a SaaS company is trying to renew a customer’s contract. The product might be delivering real results, but if the people making the buying decision can’t see that in the data, the deal can be at risk.

“The CFO is going to be asking, is anyone even using this?” Fields said, calling it one of the use cases where Ridge AI’s technology could be of significant value to SaaS firms.


The pressure to prove this value has intensified amid the “SaaS-pocalypse,” as it’s known — as companies consolidate their software spending and the rise of custom AI-coded apps makes many of them question whether existing tools are worth keeping.

What they’re solving: Madrona’s Nelson said he experienced the larger problem during his time as CTO of Concur, where the company built an analytics product on top of IBM Cognos, giving customers the ability to glean insights into employee travel and spending.

It was important to the business, he said, but it was a pain to maintain, and it wasn’t in Concur’s core skillset. The problem persists for many SaaS companies to this day.

SaaS companies have historically had to choose between heavyweight business intelligence platforms like Tableau and Power BI, specialty embedded analytics tools, or building their own. Fields said none of those options was purpose-built for the problem Ridge is solving.


Founders: Ridge AI was co-founded by Fields, who serves as CEO, and Heer, chief scientist, who will continue as a UW professor in addition to working on the company.

Also on the team: Andy Caley, a founding engineer who previously worked at Tableau, and Fritz Lekschas, a founding research engineer with a Ph.D. from Harvard and more than 20 publications in data visualization.

From left, Madrona’s Tim Porter, Ridge AI CEO Ellie Fields, and Madrona’s Mark Nelson. (Madrona Photo)

Fields and Heer were introduced by Madrona’s Nelson and Porter. Nelson had known Fields since she worked for him at Tableau and he had separately kept in touch with Heer through his UW work. Porter, meanwhile, had gone to Stanford Business School with Fields. 

“I can’t think of two people I like more, and would bet on more, than Jeff and Ellie,” Nelson said, describing the pairing as an example of what’s possible in Seattle’s tight-knit tech community.

Heer previously co-founded Trifacta, a data transformation company acquired by Alteryx in 2022. He and his academic collaborators have produced some of the most widely used open-source tools in data visualization, including Vega(-Lite), D3.js, and the Mosaic framework that serves as Ridge AI’s technical foundation. 


Fields joined Tableau as its first product marketer and rose to senior vice president of product development over more than 12 years, spanning the company’s IPO and its acquisition by Salesforce. She went on to serve as chief product and engineering officer at SalesLoft, where she experienced firsthand the problem Ridge is now trying to solve.

Technology: Ridge runs in the user’s web browser rather than on a remote server, using Heer’s open-source Mosaic framework and an in-browser database called DuckDB. That architecture delivers near-instant interactivity and means the software company that embeds it doesn’t pay for cloud computing costs with every dashboard interaction. 

On the creation side, AI agents handle the visualization design, so product managers can describe what they want in business terms rather than learning a specialized tool.

What’s next: Fields said Ridge AI plans to focus on its SaaS wedge for at least a couple of years before expanding, noting that the market has historically been under-served. 


The company has been working with a small number of pilot customers, and is now inviting additional companies into a closed beta, accepting applications at ridgedata.ai.


Laser chips promise faster, greener indoor wireless at gigabit speeds


Indoor wireless is hitting limits as more devices crowd the same spectrum. Streaming, video calls, and smart home gear are pushing networks harder while power use rises. A new class of laser chips offers a different path by moving data onto light.

Researchers built a chip-scale optical link that delivers ultra-fast indoor connections with lower energy use. Instead of broadcasting signals widely, it sends data through controlled infrared beams, opening more usable capacity while avoiding interference in dense spaces.

At the core is a chip with 25 microscopic lasers, each carrying its own stream. Working in parallel, they push throughput far beyond a single source. In testing, the setup reached more than 360 gigabits per second across a short indoor link.

The gains are not just speed. Power use drops significantly, offering a more efficient way to handle rising demand.


Laser array proves the speed

Performance comes from a 5 by 5 array of vertical-cavity surface-emitting lasers, each acting as its own high-speed channel.

In tests over two meters, individual lasers delivered about 13 to 19 gigabits per second. With 21 active channels, total throughput reached 362.7 gigabits per second, among the fastest chip-scale optical results so far.
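The arithmetic is internally consistent: dividing the aggregate figure by the active channel count gives an average inside the reported per-laser range. A quick check, using only numbers from the article:

```python
# Average per-channel rate implied by the aggregate result.
total_gbps = 362.7   # aggregate throughput over the 21 active channels
active_channels = 21
per_channel_gbps = total_gbps / active_channels
print(f"~{per_channel_gbps:.2f} Gb/s per channel")  # ~17.27 Gb/s

# Consistent with the reported 13-19 Gb/s per-laser range:
assert 13.0 <= per_channel_gbps <= 19.0
```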

The limit came from the receiver hardware, not the transmitter, suggesting higher speeds are possible with better components.

A custom optical setup also shapes each beam into a defined square, limiting overlap so multiple links can run side by side without interference.

Why light changes the equation

Radio networks struggle in crowded spaces where signals interfere and capacity gets stretched. Light avoids those limits by offering more bandwidth and precise control over where signals go.


Instead of blanketing a room, the system creates a grid of targeted beams with minimal spillover. Measurements show uniform coverage across the target area, helping maintain stable performance for multiple devices.

The setup runs at about 1.4 nanojoules per bit, roughly half that of comparable Wi-Fi systems. The tradeoff is range, as the current setup works over short distances and depends on line of sight.

Where this goes next

This approach is meant to complement existing networks by offloading heavy traffic in high-demand indoor spaces.

The hardware fits on a sub-millimeter chip built with standard processes, making integration into fixtures or access points plausible, though no commercial timeline is given.

As demand rises, combining radio and light-based links could become standard, with laser systems handling the heaviest traffic.


Why Simple Breach Monitoring is No Longer Enough



Written by Ran Geva, CEO at Webz.io & Lunarcyber.com

In 2026, stolen credentials are a top-tier security priority. They are also a paradox: even though they are considered a significant risk, enterprises still opt for checkbox solutions and generic tools to mitigate the problem. 

According to a recent survey commissioned by Lunar, a dark-web monitoring platform powered by Webz.io, 85% of organizations rank stolen credentials as a high or very high risk, with 62% saying they are in their top-three security priorities.

At the same time, I’ve spoken with dozens of organizations using Lunar’s community platform, who have told me things like, “we have MFA everywhere, so we’re covered”, and “our EDR and zero-trust stack already protects our employees.”


They fail to realize that EDR and zero-trust measures offer no protection when an employee logs into a critical SaaS service from an unmanaged home device.  

The consequences of failing to detect stolen credentials in time can be catastrophic. According to IBM’s Cost of a Data Breach Report, a breach involving compromised credentials costs between $4.81 and $4.88 million.

Considering that Lunar observed 4.17 billion compromised credentials in 2025 alone, the potential global cost of these attacks is staggering. All of this means that simple breach monitoring is no longer enough.

An enterprise mindset shift is needed to create a programmatic defense strategy that tackles the ever-evolving threat of infostealers.

Advertisement

Checkbox Monitoring and The Dangers of Using Generic Solutions 

When speaking with organizations, I always ask how they mitigated the infostealer threat before onboarding Lunar. The answers follow the same pattern: exposed credentials are a serious problem, and we dedicated resources to solutions to mitigate the threat.

What they didn’t realize is that those solutions were lacking and mainly consisted of:  

  • A focus on data breaches instead of infostealers

  • ULPs and non-forensic infostealer data

  • High latency and stale data sources

  • No automation, integrations, or investigation capabilities 

Our research lays out just how serious the problem is. Only 32% of enterprises that we surveyed use dedicated credential monitoring solutions, while 17% have no tooling at all.

Meanwhile, more than 60% of organizations check for exposed credentials monthly, rarely, or not at all. 

We’ve seen firsthand how these solutions perform. When new organizations onboard Lunar, many are shocked to realize that while their previous tools told them a breach had happened, they never had the means to properly investigate how it happened.


The forensic details, including the accounts that were compromised, the devices infected, the SaaS apps that could be impacted, not to mention the session cookies that were stolen, were simply not there. 

While the checkbox approach is better than no security at all, it rarely provides the forensic detail that enterprises need to successfully mitigate the infostealer threat. So, what’s holding them back from scaling their operations? 

See where your company’s credentials and session cookies are already exposed.

Lunar continuously monitors breaches and infostealer logs for your domains and surfaces actionable exposures in a free, enterprise‑grade dashboard.



The Infostealer Threat is Much Bigger Than Enterprises Think

This is where the infostealer paradox enters into our conversations. While everyone knows about the dangers of exposed credentials, they either fail to prioritize budgets or simply don’t know what kinds of solutions successfully mitigate the problem.

Furthermore, they don’t always understand just how prevalent credential theft actually is, which environments infostealers target, and what information they can access.

The 4.17 billion compromised-credential records we collected in 2025 came from infostealer logs, stealer-derived combolists, marketplaces, and Telegram channels. Infostealers like LummaC2, Rhadamanthys, Vidar, Acreed, and others consistently slipped past enterprise monitoring, even in environments that considered themselves mature.

And while many new Lunar users thought that macOS was safer than Windows, they were shocked to hear about families like Atomic macOS Stealer (AMOS), Odyssey, MacSync, MioLab, and Atlas.


There is also an awareness problem regarding the data infostealers exfiltrate, which goes far beyond simple username/password pairs. Modern infostealers are sold as full-fledged products, with subscription tiers, dashboards, and documentation tuned to harvesting cookies, session tokens, and SaaS access at scale, and organizations are now rushing to catch up and protect their networks.

For threat actors, session cookies don’t just provide access. They effectively open the front door, letting them skip login pages entirely: no password prompt, no MFA challenge, and often no obvious trace in standard authentication logs.
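A toy sketch makes the point concrete. In the snippet below, the session store and handler are illustrative assumptions (not any real framework’s API), but the behavior is the one that matters: a replayed cookie is served exactly like the legitimate session.

```python
# Toy illustration: to a server, a replayed session cookie is
# indistinguishable from the legitimate one. The session store and
# handler below are simplified assumptions, not a real framework's API.

SESSIONS = {"sid-8f3a": {"user": "jdoe", "mfa_passed": True}}

def handle_request(cookie: str) -> str:
    session = SESSIONS.get(cookie)
    if session is None:
        return "redirect: /login (password + MFA required)"
    # A valid cookie means no password prompt, no MFA challenge, and no
    # entry in the authentication log, because no authentication happened.
    return f"serve dashboard for {session['user']}"

print(handle_request("sid-8f3a"))  # legitimate user
print(handle_request("sid-8f3a"))  # attacker replaying the stolen cookie gets the same response
```

Server-side revocation of the session is the only reliable fix, which is why exposed session cookies demand the same urgency as exposed passwords.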

That is the piece of the puzzle that many organizations are only now internalizing. 

What Does a Typical Infostealer Attack Look Like?

When we talk about what an infostealer attack looks like, and why checkbox security is ineffective, we often break it down into the following process: 

  1. Target is infected: The victim’s device is compromised by an infostealer delivered through vectors such as zero-day exploits, ClickFix campaigns, rogue browser extensions, unverified or pirated software, game mods, or malicious open-source projects.

  2. Credentials are exfiltrated: The infostealer scrapes the browser for saved logins and cookies, including those for third-party portals, and sends them back to the malware operator.

  3. Credentials are bundled and sold: The stolen credentials are bundled into logs and sold on underground markets and private channels. 

  4. Attackers access the enterprise network: The attacker who purchases the logs accesses the target network, including third-party portals, using a valid session token. 
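For defenders, step 3 is where visibility becomes possible: stealer logs that surface on underground channels can be parsed and triaged before the buyer acts on them. Here is a minimal, hypothetical sketch of that triage; the record layout and domain list are assumptions for illustration, not any real stealer’s or vendor’s format.

```python
# Hypothetical sketch: triaging a stealer-log record for corporate
# exposure. The record layout and the domain watchlist are illustrative
# assumptions, not any real stealer's or monitoring vendor's format.

CORPORATE_DOMAINS = {"okta.example.com", "mail.example.com", "vpn.example.com"}

def triage_record(record: dict) -> list[dict]:
    """Return the credential entries in one stealer-log record that touch
    monitored corporate services, so they can be prioritized for response."""
    hits = []
    for cred in record.get("credentials", []):
        if cred.get("domain") in CORPORATE_DOMAINS:
            hits.append({
                "domain": cred["domain"],
                "username": cred.get("username"),
                "has_session_cookie": bool(cred.get("cookies")),
                "infected_host": record.get("machine_id"),
            })
    return hits

sample = {
    "machine_id": "DESKTOP-UNMANAGED-01",
    "credentials": [
        {"domain": "okta.example.com", "username": "jdoe", "cookies": ["sid=..."]},
        {"domain": "shop.example.net", "username": "jdoe"},  # personal account, ignored
    ],
}
print(triage_record(sample))
```

The personal shopping login is dropped while the identity-provider entry, complete with its stolen session cookie and the infected host, is surfaced for response.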

This entire chain of events can be completed in hours. Meanwhile, many of the organizations we speak with run credential checks once a month or rely on outdated data.


By the time anything shows up in their legacy monitoring tools, attackers have had plenty of time to explore and exfiltrate whatever data they want.

Developing a Mature Breach Monitoring Program

A mature breach monitoring program, like Lunar, provides continuous monitoring, automations, and integrations

Organizations we work with that make the switch to a mature breach monitoring program have the tools they need to collect information from channels like stealer logs, Telegram groups, and marketplaces. Instead of relying on ad-hoc checks, they focus on three practical capabilities:  

  1. Continuous monitoring and normalization of key sources (breaches, stealer logs, combolists, marketplaces, and relevant channels), so security teams have a clear and deduplicated view of breach exposures.

  2. Targeted automation that reduces false positives and noise, ensuring that analysts spend time on identities and sessions that actually matter. 

  3. Integrations into existing security and identity stacks (SIEM, SOAR, IDP) that execute playbooks end-to-end, resetting credentials, invalidating sessions, and blocking accounts as soon as exposures are confirmed.   
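The three capabilities above can be sketched as a single loop: normalize records into a canonical key, deduplicate across sources, and emit response actions. In this sketch the action tuples stand in for real IDP/SIEM/SOAR playbook calls; all names are assumptions for illustration.

```python
# Minimal sketch of the "normalize -> deduplicate -> respond" loop
# described above. The action tuples are placeholders for real IDP/SOAR
# playbook calls; all field and function names are illustrative.

def normalize(raw: dict) -> tuple:
    # Collapse source-specific records into one canonical key so the same
    # exposure reported by multiple channels is counted only once.
    return (raw["email"].lower().strip(), raw["service"].lower())

def run_playbook(exposures: list[dict]) -> list[tuple]:
    seen = set()
    actions = []
    for raw in exposures:
        key = normalize(raw)
        if key in seen:
            continue  # duplicate from another source; skip it
        seen.add(key)
        actions.append(("reset_password", *key))
        if raw.get("session_cookie"):
            # Cookie theft means a password reset alone is not enough.
            actions.append(("invalidate_sessions", *key))
    return actions

feed = [
    {"email": "JDoe@Example.com", "service": "Okta", "session_cookie": True},
    {"email": "jdoe@example.com", "service": "okta"},  # same exposure, second source
]
for action in run_playbook(feed):
    print(action)
```

Two raw records from different channels collapse into one exposure, which triggers both a password reset and a session invalidation because a cookie was stolen.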

Among Lunar users, we’ve seen a clear mindset shift once they get this right. They treat the infostealer threat as its own domain, complete with ownership, metrics, and playbooks, instead of managing their breach monitoring using unrelated tools.


This all goes back to Lunar’s core mission, which is to provide a free breach monitoring solution to any organization, regardless of budget, that delivers enterprise-grade coverage of compromised credentials, infostealers, and session cookies.

Our philosophy is to openly provide enriched compromised credential intelligence, enabling organizations to regain true visibility and resilience.

Redefining Breach Monitoring in 2026   

Even seasoned and knowledgeable security teams can fall into the breach monitoring paradox, where they know the threat but behave as if monthly checks, MFA, and EDR are enough. But in 2026, infostealers move at a speed and scale that checkbox monitoring solutions were never designed to handle.

Treating breach monitoring as a must-have program, instead of a one-off product, provides your enterprise with the visibility needed to view compromised credentials wherever they appear, the context to understand what those exposures mean, and the playbooks to automatically react when an attack is detected.  


To see how Lunar can help you find your organization’s compromised credentials, sign up for free access.

Sponsored and written by Lunar.



Fascinating Look Back at the RCA Colortrak 2000, the CRT Television from 1982 Hidden Behind Glass


1982 RCA Colortrak 2000 CRT TV
Photo credit: This-Profession-1680
Collectors frequently pause for a second when they see one in a thrift store or internet listing. A 1982 RCA Colortrak 2000 stands there with that 25-inch CRT screen behind a full tinted glass panel that swings open like a cabinet door, seeming almost like a piece of furniture at first glance. It protected the tube from dust and made darker scenes appear much more dramatic in well-lit spaces by reducing reflections.



RCA positioned the Colortrak 2000 as the top of its line in the early 1980s. The regular Colortrak sets were fairly reliable at the time, but the 2000 series went a step further by including a specialized comb filter, which separated the color signal far more cleanly than conventional circuits and produced crisp edges with minimal “bleeding” between colors in broadcasts and recordings. The 2000 series also included a light sensor near the screen: the picture automatically dimmed or brightened to match the surroundings, quite futuristic stuff for its time, when most TVs offered nothing but manual knobs to fiddle with.


People enjoyed the cabinet as much as the hardware itself. Acacia veneer and several fine hardwoods received a warm golden finish, polished to a handsome gloss that blended easily with living-room decor. The speakers sat at the bottom of the cabinet, on a glossy chrome base, in a dual-dimension audio arrangement with two nine-inch oval drivers and a built-in amplifier. RCA even touted the set as stereo-ready before stereo broadcasting became popular, and you could connect other sources via audio jacks and adjust bass and treble independently, which was unusual in ordinary sets at the time.


The controls were concealed until you needed them: a smoked acrylic door slid to one side to reveal the buttons for power, channel selection, and picture adjustments such as sharpness and hue, with no large knobs jutting out anywhere. The original remote, later renamed the Digital Command Center, worked with the set and could also control other components in a complete RCA system. A minor but neat touch: a single press of the remote switched on the TV without a separate power button, a small indication of how RCA conceived of these sets as part of a comprehensive home entertainment package.

The early models, like this 1982 example, only had RF connectors. Cable-ready tuning supported 127 channels, and a super ACU filter kept color consistent across stations. Later Colortrak 2000 models added composite inputs and even S-video jacks on the back, accessed by tuning to a specific non-broadcast channel. This kept the sets useful long after they were new, particularly for plugging in game consoles or VCRs without a tangle of adapters. A few uncommon variants even had BNC connectors, a nod to the professional video equipment most people never saw.


OpenAI calls for robot taxes, a public wealth fund, and a four-day week


Sam Altman’s 13-page policy blueprint, ‘Industrial Policy for the Intelligence Age,’ proposes auto-triggering safety nets, containment playbooks for rogue AI, and direct citizen dividends from AI-driven growth. He told Axios it is a starting point, not a prescription.


OpenAI has published a 13-page policy document calling for sweeping economic reforms to prepare for what it describes as approaching superintelligence, including taxes on automated labour, a national public wealth fund seeded partly by AI companies, and pilots of a 32-hour working week.

The document, titled ‘Industrial Policy for the Intelligence Age: Ideas to keep people first,’ was released as Congress prepares to debate AI legislation. CEO Sam Altman told Axios in an exclusive interview that the scale of change coming from AI is comparable to the Progressive Era and the New Deal, and that the two most immediate dangers are cyberattacks and biological weapons enabled by advanced AI.

The most radical proposal in the document is the public wealth fund. OpenAI suggests the government create a nationally managed fund, seeded in part by contributions from AI companies themselves, that would invest in AI firms and other businesses adopting the technology and distribute returns directly to American citizens.



The model is comparable to Alaska’s Permanent Fund, which pays annual dividends to state residents from oil revenues.

On labour, the document floats taxes on automated labour and a shift in the tax base from payroll towards capital gains and corporate income, an acknowledgement that AI could hollow out the wage-and-payroll revenue that currently funds Social Security.


The 32-hour workweek proposal is framed as an ‘efficiency dividend’ from AI-driven productivity gains.

The document includes a section on what it calls ‘containment playbooks’ for scenarios in which dangerous AI systems become autonomous and capable of replicating themselves. OpenAI acknowledges scenarios where such systems ‘cannot be easily recalled,’ and proposes government co-ordination as the response.

The blueprint also envisions automatic safety net triggers: when AI-driven displacement metrics hit preset thresholds, benefits including unemployment payments and wage insurance would increase automatically, then phase out when conditions stabilise.

Altman told Axios that a major cyberattack enabled by near-future AI models is ‘totally possible’ within the next year, and that AI models being used to create novel pathogens is ‘no longer theoretical.’


Altman was candid with Axios about the dual nature of the document. OpenAI is the company racing to build the very technology it is warning about, and positioning itself as the responsible actor proposing solutions is plainly also a strategy to shape regulation before regulation shapes it. Anthropic has occupied a similar lane.

The policy paper arrives at a moment when OpenAI is preparing for an IPO, has closed a $110 billion private funding round, and is simultaneously under scrutiny over its conversion from a non-profit.

Whether the altruism is genuine or strategic, Altman told Axios: ‘Some will be good. Some will be bad. But we do feel a sense of urgency. And we want to see the debate of these issues really start to happen with seriousness.’


Supreme Court Shrugs Off Opportunity To Save The First Amendment From The Fifth Circuit’s Antipathy


from the rights-are-for-people-who-never-need-to-invoke-them-I-guess dept

The Supreme Court’s latest recap of its relative inactivity (Trump administration “emergency” appeals aside) has delivered yet more evidence of this court’s indifference to rights violations committed by the government. Other cases involving alleged rights violations that should have — at the very least — been handed over to a jury for further consideration were tacitly blessed by the top court in the land by its refusal to grant certiorari.

This one — involving the retaliatory arrest of an independent journalist by cops who didn’t like her reporting — is yet another miscarriage of justice by a Supreme Court whose majority simply won’t take cases that might force it to hold the government accountable for its actions.

This case has bounced up and down the judicial ladder for more than a half-decade. Laredo, Texas native/independent journalist Priscilla Villarreal has been live streaming and reporting via Facebook under the name “Lagordiloca” for several years. Laredo PD officers don’t like her because she asks them questions they don’t like answering and films them when they’re performing traffic stops and arrests.

After Villarreal published information about a Border Patrol officer who had committed suicide, the Laredo PD worked with local prosecutors to have her arrested. All Villarreal had done was ask a PD employee to confirm information she’d already obtained. The PD responded by opening an internal investigation to oust the employee who had responded to Villarreal’s queries. Then it decided the only way for justice to be done was to arrest the person who had merely received confirmation (via a law enforcement employee) of information she already had in her possession.


Prosecutors claimed Villarreal’s acquisition and publication of this information violated a state law forbidding people from profiting from “misuse of official information.” To support this claim, the prosecutors claimed Facebook clicks were a form of “profit.” To date, no other citizen has ever been prosecuted under this law that was clearly written to prevent government employees from profiting from information only government employees might have access to.

The local judge tossed the bullshit charges immediately after they were presented to her in court. Somehow, the district court managed to look past the obvious First Amendment violations to give the officers immunity. The Fifth Circuit’s first pass reversed this, with Judge Ho making it clear there’s no way any reasonable officer would have thought arresting a journalist simply for asking questions didn’t violate the Constitution.

This is not just an obvious constitutional infringement—it’s hard to imagine a more textbook violation of the First Amendment.

Then things got weird. A couple of judges in the minority thought this shouldn’t stand and started making noise. The Fifth Circuit agreed to an en banc hearing and reissued this opinion with a new dissent written by Chief Judge Priscilla Richman, along with some additional commentary by Judge Ho about how far removed from sanity Richman’s dissent was.

Two years later, it handed down its second take. And the majority somehow came to the conclusion that it’s okay to engage in retaliatory arrests as long as you can find any criminal statute at all to support the arrest. According to Judge Jones, Villarreal should have either limited herself to official channels or challenged the law itself in court, rather than ask a government employee to verify information Villarreal already possessed.


This was appealed. Eight months later, the Supreme Court sent it back down to the Fifth Circuit for yet another pass, instructing it to apply the Trevino standard. That standard is fairly simple: if a law is rarely, if ever, enforced but somehow shows up conveniently to do the cops’ dirty work when they want to retaliate against a person they don’t like, there’s a good chance this selective application is an established violation of rights. In this case, prosecutors had never used this law to charge anyone ever.

The Fifth Circuit’s third pass — again written by Judge Edith Jones — said the Trevino factor just didn’t matter. If the law was on the books (even if it had never been enforced), it was justification enough for the arrest. And even if that arrest violated the Constitution, the officers should still be given qualified immunity because how could they have known that arresting the only person ever charged with this crime in its 23 years of existence might somehow be unconstitutionally retaliatory?

Now that we’re caught up, this is how it ends for Priscilla Villarreal:

The petition for a writ of certiorari is denied.

There’s a dissent written by Justice Sotomayor that’s even lengthier than my preamble. It’s worth reading, though, and it starts with this admonishment of the majority’s refusal to right this obvious wrong:


It should be obvious that this arrest violated the First Amendment. Yet the Fifth Circuit held that the officials were entitled to qualified immunity, and now Villarreal is left without a remedy. The Court today makes a grave error by declining to hear this case.

The nation’s top court has decided the Laredo PD and local prosecutors can walk away cleanly from a series of extremely obvious rights violations. And in doing so, it emboldens them (and others) to engage in future retaliatory arrests of journalists they don’t like.

The Supreme Court majority is apparently willing to pretend rights don’t exist when it’s convenient to do so, just like the officers whose actions it tacitly blesses with this particular inaction. Sotomayor drills down on this, rubbing the majority’s nose in its deliberate dismissal of constitutional rights:

[T]he Fifth Circuit found that the officials reasonably believed that they had probable cause to arrest Villarreal for violating §39.06(c). Id., at 385–390. Not so. Just like an individual cannot be convicted of a crime for engaging in First Amendment activity, it is axiomatic that a probable cause determination cannot be based on such protected activity either.

[…]

It necessarily follows that when an arrest is based on protected First Amendment activity, that activity cannot constitute probable cause and support adverse police action. All reasonable officers know this.


[…]

Here, it is hard to conceive of a more obvious constitutional violation than arresting a journalist who, in searching for corroboration, simply asks a government source for information. That is the essence of many journalists’ jobs. The arrest does not somehow become reasonable, and constitutional, merely because an unconstitutional application of a statute authorizes it.

All we have is the dissent. All Villarreal has is knowledge Laredo PD officers and local prosecutors will be digging through the state statutes to find something else to charge her with the next time her reporting pisses them off. The Supreme Court issued a short, clear instruction to the Fifth Circuit, telling it to apply a specific legal standard. Instead, the Fifth Circuit — led by the consistently awful Judge Edith Jones – sidestepped this instruction on its way towards granting the officers qualified immunity. And that deliberate refusal to engage with the Supreme Court’s specific instructions has now been ignored by the same court that strongly hinted the Fifth Circuit got this wrong. It’s a shrug that lets the general public know exactly where it stands: at the bottom of the national organization chart with no layers of protection between them and government officials who seek to do them harm.


Filed Under: 1st amendment, 4th amendment, 5th circuit, laredo pd, police misconduct, priscilla villarreal, qualified immunity, retaliation, supreme court
