Like it or not, personal data is now a commodity. Our phone numbers, addresses, shopping habits, and employment history are collected, analyzed, and traded among data brokers, marketers, recruiters, insurers, and countless other buyers, not to mention fraudsters and thieves.
However, trying to remove your online presence manually means tracking down every single company that holds your data (which can be hundreds), submitting legal deletion requests, and repeating the process when your data reappears or your request is ignored. This can easily become a full-time job.
That’s why data broker removal services exist: to automate, manage, and repeat those requests on your behalf.
But how do you choose the best provider? Below is a 2026 evaluation of the most recognized names in the industry.
Top Data Broker Removal Services at a Glance
| Category | Incogni | Aura | DeleteMe | Optery | OneRep |
|---|---|---|---|---|---|
| Pricing (monthly when billed annually) | From $7.99 | From $9.99 | From $6.97 | From $3.25 | From $8.33 |
| Free option | 30-day money-back guarantee | 14-day free trial, 60-day money-back guarantee | Free scan | Basic self-service, 30-day money-back guarantee | 5-day trial, 30-day money-back guarantee |
| Automation level | High | Medium-High | Medium-Low | Medium | Medium-High |
| Broker coverage | 420+ public and private brokers | 200+ brokers, mainly private | Up to 850+ brokers (varies by plan), mainly public | 120-640+ sites (varies by plan) | 310+ sites, mainly public |
| Verification | Dashboard, Deloitte Limited Assurance Report | App alerts and screenshots | Quarterly reports and screenshots | Screenshots and exposure scans | Dashboard and monthly reports |
| Best for | Long-term, low-effort privacy | Identity + privacy bundle | Detailed proof and control | Data exposure prediction | Public removals, families |
Incogni: Best for Balanced Automation, Coverage, and Accountability
Overview and Pricing
Incogni focuses on the continuous removal of personal data from data brokers, including both public people-search sites and private commercial databases.
Incogni’s plans start at $7.99/month when billed annually, and even the basic option contains all you need for effective data removal. Higher-tier plans only change prioritization and scope. There’s no free option, but you can take advantage of its 30-day money-back guarantee to see if the service suits your needs.
Features
Fully-automated opt-out and deletion requests across 420+ data brokers
Recurring removal cycles: 60 days for public, 90 days for private brokers
Supported by Deloitte’s limited assurance assessment, Incogni officially reports that it has processed 245+ million removal requests from 2022 to mid-2025, indicating sustained operations rather than one-time cleanups. As data brokers can reacquire information and their databases refresh regularly, the recurring cycle is vital if you want to protect your online footprint in the long run.
Transparency and Reputation
Apart from a limited assurance report by Deloitte, the service also holds Editors’ Choice Awards from PCMag and PCWorld, which praise its automation system and wide coverage.
On Trustpilot, Incogni has generally positive feedback, with an average rating of 4.4 based on over 2,000 reviews. Users often note actual reductions in spam and visible listings.
User Experience
Once you set up your account, you need to verify your identity. After that, Incogni will handle most data removal activity in the background without involving you directly. The clear, straightforward dashboard will show you all the brokers Incogni has contacted, confirmed removals, responses, and next scheduled cycles. You can peek into it whenever you like, but you don’t have to engage to make the process effective.
| Advantages | Disadvantages |
|---|---|
| High automation | No screenshots |
| Broad coverage | No free trial |
| Deloitte Limited Assurance Report | Basic reporting |
| 30-day money-back guarantee | Phone support only on Unlimited plans |
| Industry recognition | |
| Recurring cycles and resubmitted requests | |
| Clear interface, straightforward user experience | |
Aura: Best All-in-One Identity and Privacy Suite
Overview and Pricing
Aura differs from the other providers on this list, as it combines data removal with broader digital protection features, including credit alerts, antivirus, VPN, device security, and identity theft monitoring.
Aura’s prices begin at $9.99/month when billed annually. What’s more, you get a 14-day free trial and a 60-day money-back guarantee for risk-free testing.
Features
Automated data removal across 200+ data brokers (mainly private)
Identity theft monitoring
Dark web monitoring
Credit score and breach alerts
Antivirus/anti-malware protection
VPN
Family and multi-device plans
Effectiveness
When it comes to data removal itself, this Aura functionality is automated. The platform first scans broker and people-search sites, submits deletion requests whenever it finds your information, and re-checks for reappearances. However, since data removal isn't Aura's main focus, its coverage is quite narrow compared to dedicated solutions. Aura's value is strongest when the whole toolkit is used together.
Transparency and Reputation
Aura is widely covered in the identity protection space, with overall positive sentiment. You can find Aura reviews on PCMag, Forbes, and NerdWallet. On Trustpilot, it holds an average rating of 4.2 based on almost 1,000 reviews. Users appreciate its all-in-one service, but the broker removal results themselves don't match those of services focused exclusively on that problem.
User Experience
Aura's interface brings together everything the provider offers, showing alerts, scans, security posture, removal status, and more. This holistic view appeals to people who want central management of their online presence, but for many users it can be overwhelming.
| Advantages | Disadvantages |
|---|---|
| Privacy + security bundle | Narrower coverage |
| Insurance | Manual approval steps |
| 60-day money-back guarantee | Overwhelming user experience |
| 14-day free trial | No third-party verification |
| Comprehensive alerts | |
DeleteMe: Strong for Proven Public People-Search Listing Deletion
Overview and Pricing
DeleteMe focuses on public people-search sites and background information databases. These are the listings that usually appear in search results when someone Googles your name.
The cheapest DeleteMe plan is $6.97/month when billed annually and covers one person.
Features
Automated scans of people-search sites (up to 850+, depending on the plan)
Expert manual handling
Quarterly detailed reports
Coverage for individuals, couples, and families
Limited custom removal requests (40-60 per year, plan-dependent)
DIY opt-out tutorials
Effectiveness
DeleteMe is quite effective at removing visible information from many major public listings. The company was a pioneer when it entered the industry in 2010 with its part-automated, part-human-assisted approach. The team submits requests, tracks progress, and provides scheduled, detailed reports that even include screenshots.
Transparency and Reputation
DeleteMe has been in the industry since 2010, which speaks to its staying power. It has generally positive user reviews, especially for its detailed reporting system and thorough explanations of what was removed. There have been no third-party assessments of its services, but the provider has a good reputation in the industry, as seen in its PCMag review and praise from Forbes. As for user feedback, it holds a 4.0 rating on Trustpilot, though based on only 180+ reviews.
User Experience
In contrast to Incogni's live, always-on progress monitoring, which you can check but don't have to, DeleteMe is more report-centric. Users receive quarterly PDF summaries that show which sites were contacted, where their information was removed, and what remains pending. Many people appreciate this human approach.
| Advantages | Disadvantages |
|---|---|
| Clear, detailed reporting | Slower cycles |
| Long-standing service | Less automation |
| Human expertise | Narrower broker reach |
| 30-day money-back guarantee | Mainly US coverage |
Optery: Best for Exposure Visibility
Overview and Pricing
Optery’s main field of expertise is discovering where your personal data exists, providing users with insight into exposures before and during removal attempts.
Optery’s offer starts at $3.25/month when billed annually. The company also has a free, self-service version. Apart from that, you get a free scan and a 30-day money-back guarantee.
Features
Exposure dashboard displaying where your personal data exists
Automated removal from up to 630+ brokers with paid plans
Initial free scan across 120+ sites and free self-service plan
Guided removal-request submission process
Custom removal submissions
Manual tracking of opt-outs and their status
Effectiveness
Optery is most effective at identifying where your personal data has been exposed. Then, for its removal, it blends automatic attempts with user-guided actions and manual tracking.
It doesn’t have the same automated recurring cycles as, for example, Incogni, but it may be helpful if you want to truly understand data exposures.
Transparency and Reputation
Optery is often highlighted for its exposure insights and transparency. Users appreciate the “seeing where my data lives” model, but many note that broader coverage comes only with more expensive plans, while manual user input is still needed.
On Trustpilot, Optery has 171 reviews with an average rating of 4.1. It has also been reviewed quite enthusiastically by PCMag, though the reviewers noted that the service doesn't distinguish between removed data and never-found data. TechRadar praised it for its ease of use.
User Experience
Optery is more interactive and gives you more control over the process (which can be either an advantage or a disadvantage, depending on how much time you're willing to invest). Its dashboard clearly shows where your personal data is, and you then decide which removals matter most and what to do next. You also get before-and-after screenshots as visual proof, while reports are AI-enhanced to make them more accurate and detailed.
| Advantages | Disadvantages |
|---|---|
| Free scan | Broader coverage only with more expensive plans |
| Free self-service plan | US-focused |
| 30-day money-back guarantee | Slower with cheaper plans |
| Clear interface & control | No phone support |
OneRep: Best for Public Listing Removal and Families
What It Does
OneRep automates removal requests issued to public people-search sites. Its focus is on high-risk databases like Intelius and Whitepages. The service also ensures quarterly recurring checks to combat resurfacing of your data.
However, there's significant controversy around the company (more on that below).
OneRep's prices start at $8.33/month when billed annually. It also offers a 5-day free trial. What makes it attractive and more affordable are its family plans, which cover up to six members.
Features
Automated scans and removal requests across 310+ data brokers
Quarterly re-scanning
Great family value
Clear and straightforward dashboard tracking
Effectiveness
OneRep is effective when it comes to reducing online visibility on many public sites, including those deemed high-risk. However, this provider doesn't focus on the private commercial brokers that are responsible for a large portion of spam, which makes OneRep's reach much narrower.
Transparency and Reputation
OneRep has a mixed reputation in the privacy protection community.
User reviews vary: some praise successful public listing removals, while others complain about slow relisting or only partial results. Still, it holds a quite impressive average rating on Trustpilot.
However, it's essential to know that, as Krebs on Security reported, in March 2024 Mozilla decided to drop OneRep from its list of recommendations due to the CEO's involvement in running people-search networks. This raised serious questions about conflicts of interest in the industry. While the provider stated that OneRep operates completely independently and never sells user information, the controversy is still often referenced in privacy circles.
User Experience
OneRep's dashboard is pretty simple to manage. It shows progress on targeted sites and all removal requests, though the model isn't fully automated, so it only suits users who don't mind staying involved in the process.
| Advantages | Disadvantages |
|---|---|
| Great family value | Industry controversies |
| 5-day free trial | Heavily conditioned 30-day money-back guarantee |
| Quarterly re-scans | US focus |
| Public listing coverage | Little customization |
| | No third-party verification |
| | Narrower scope |
Final Perspective for 2026
When it comes to choosing a data removal service, the main difference is usually in scope and depth. Some providers focus on visible people-search listings, while others dig deeper to find your personal information in harder-to-find databases. They also vary in the recurring cycles they offer (or not).
Managing your overall online visibility is vital, but if you really want to reduce the amount of your information circulating on the web, you need to focus on less visible broker networks. Or rather, choose a provider built around large-scale broker coverage. Only then will you be able to enjoy more sustained results.
In 2026, Incogni stands out among its competition, combining a wide broker reach, continuous removal cycles, and a streamlined, low-maintenance experience, not to mention an independent assessment. While the other providers shouldn't be dismissed altogether, Incogni's focused, automated approach offers the most comprehensive solution.
FAQ
Why can’t I just remove my data from brokers myself?
Manual removal means identifying hundreds of brokers, submitting individual opt-out requests, repeatedly verifying your identity, and rechecking when your data reappears. For most people, that quickly becomes too time-consuming to manage consistently.
How often does my data reappear after removal?
Data brokers regularly refresh and repurchase data, which means listings can resurface even after deletion. That’s why recurring removal cycles are critical for long-term results.
What’s the difference between public and private data brokers?
Public brokers (like people-search sites) display your information in search results, while private brokers trade data behind the scenes with marketers, insurers, and other businesses. Private databases often contribute more to spam and profiling, even if you don’t see them.
Do all services provide proof that removals were completed?
No. Some providers offer screenshots or quarterly reports, while others rely on dashboards or summary updates. The level of transparency varies significantly by service.
Is a bundled identity protection service enough for data removal?
All-in-one tools can help, but their broker coverage is often narrower than services dedicated specifically to data removal. If reducing online exposure is your main goal, specialized coverage may deliver stronger results.
Satellites on Fire, founded in 2020 as a school project by three Argentine teenagers, has closed a seed round led by Dalus Capital. Its software-only platform integrates satellite data from multiple agencies and detects fires faster than NASA’s FIRMS system by avoiding the gaps between satellite passes.
Argentine climate-tech startup Satellites on Fire has closed a $2.7 million seed round led by Dalus Capital, with participation from Draper Associates, Draper Cygnus, VitaminC, Savia Ventures, Avesta Fund, Reciprocal, Zenani Capital, Innventure, Air Capital, Gain VC, Antom VC, and Embarca Tech.
The company builds an AI-powered wildfire detection platform that integrates satellite imagery, tower cameras, fire propagation modelling, and real-time alerts, and says its system detects fires on average 35 minutes ahead of NASA’s FIRMS service.
The company was founded in 2020 by Franco Rodriguez Viau, Ulises López Pacholczak, and Joaquín Chamo, then secondary school students at ORT Buenos Aires, after family friends of Rodriguez Viau lost their homes to wildfires in Córdoba.
What began as a school project was rebuilt from scratch after the founders interviewed more than 80 firefighters and emergency responders and concluded their first version was not operationally useful. Rodriguez Viau is now 22 and serves as CEO.
MIT Technology Review’s Spanish edition named him among its 35 Innovators Under 35 for Latin America in 2025.
The platform’s edge over existing systems lies in satellite coverage density. NASA’s FIRMS service draws on a smaller number of satellites with revisit intervals that can leave multi-hour gaps over Latin American territories.
Satellites on Fire aggregates imagery from more than eight satellites across NASA, NOAA, and the European Space Agency, updated as frequently as every five minutes, and applies its own AI models to detect heat signatures and generate spread simulations.
The result, the company says, is detection that consistently precedes NASA alerts by around 35 minutes, which it describes as the critical window for effective early containment. Newsweek reported in November 2025 a documented case in Argentina where the system detected a fire at 1:40 a.m., seven hours ahead of NASA’s alert.
The commercial model is software-as-a-service, with pricing ranging from $0.02 to $10 per hectare annually depending on service tier. The platform currently monitors territory across 21 countries on four continents, with more than 55,000 users and a training dataset built from over 20,000 field-validated fire reports, which the company describes as the largest such database in Latin America.
In 2025, the system was involved in the response to more than 600 wildfires, according to the company. Clients include forestry companies, agricultural enterprises, energy utilities, carbon credit projects, insurers, and government agencies. Aon has integrated the platform into all of its forestry insurance policies across Latin America for risk calculation and premium pricing.
The new capital will fund expansion into the United States market, where the company is already running pilots and has a partnership with Watch Duty, the non-profit wildfire tracking platform.
It will also be used to optimise AI models, launch a parametric wildfire insurance product in partnership with Aon, and build an intelligence dashboard for client protection planning.
Rodriguez Viau has previously said the company intends to eventually move into suppression technology using drones. The US is the primary new target: wildfires are estimated to cost the country hundreds of billions of dollars annually, and the 2025 Los Angeles fires sharpened political and commercial attention on detection gaps.
John Mills, CEO of Watch Duty and an advisor to Satellites on Fire, said the platform’s results with existing satellite data had ‘genuinely astounded’ his team. Diego Serebrisky, co-founder and managing partner at lead investor Dalus Capital, framed the round as evidence that Latin American founders are producing globally competitive AI solutions in climate.
The company previously received $250,000 from Tim Draper and Adam Draper after appearing on Meet the Drapers Season 9, and has also received recognition from the UN and support from MIT and Cornell University at earlier stages.
Pre-Series A founders and anyone who knows a startup worth funding, this is your reminder. Nominations for Startup Battlefield 200 are open, and the strongest contenders are already stepping forward. If your startup was nominated, don’t stop there. Submit your application today.
This is not just another pitch opportunity. You are stepping onto the main stage in front of 10,000+ attendees, top-tier VCs, and the global TechCrunch audience at TechCrunch Disrupt 2026. You are competing, getting live feedback from top VCs, and proving your company belongs.
If you have been thinking about applying or nominating a startup, waiting is the fastest way to miss out. Founders who move early gain the edge with more time to prepare, more visibility, and a stronger shot at standing out to the TechCrunch editorial team. Make your nomination and finish the submission by applying.
Which startups should apply?
We’re looking for early-stage startups building ambitious, innovative, and potentially category-defining products. We accept applications globally, across all industries. Most selected companies are pre-Series A, with some Series A considered on a case-by-case basis. A functional minimum viable product (MVP) and a clear product demo are required. Above all, we back strong founders and ideas with real impact.
Each year, thousands apply. 200 are selected to participate. 20 reach the final round to pitch live on the Disrupt Stage. Only one champion wins. Learn more and apply here.
What selected startups get
Global exposure across TechCrunch’s audience
Free exhibit table for all 3 days
4 all-access Disrupt passes
Featured startup profile in the event app
Press list access and lead generation opportunities
Exclusive founder masterclasses
A chance to pitch live on the Disrupt Stage
Direct feedback from top VCs
A shot at $100,000 in equity-free funding
Apply for Startup Battlefield 200 today
Applications close May 27, but the founders who win do not wait. They move early and take their shot before the competition catches up.
TechCrunch event
San Francisco, CA | October 13-15, 2026
If you are building something that could define a category, or know a founder who is, this is your moment. Nominate your startup or one that belongs in the arena. If nominated, submit your application. Don’t sit on the sidelines and miss your shot.
These days our appetite for data storage is greater than ever: video files are bigger, photo resolutions are higher, and project files easily zip past a few hundred MB. At the same time, our options for data storage are becoming more and more limited. For the longest time we could count on a newer, roomier, faster, and cheaper form of storage coming along, but those days would seem to be over.
We can look back and laugh at low-capacity USB Flash drives of the early 2000s, yet the first storage drive to hit 1 TB capacity did so in 2007, with the Hitachi Deskstar 7K1000, only for that level of capacity in typical PCs to not really be exceeded nineteen years later.
We also had Blu-ray discs (BD) promising to cram the equivalent of dozens of DVDs onto a single disc, with two- and even four-layer BDs storing up to 128 GB. Yet today optical media, the sole remaining cheap storage option, is dying a slow death. NAND Flash storage has only increased in price, and the options for those of us with large cold-storage requirements seem to be shrinking every day.
So what is the economical solution here? Invest in LTO tapes using commercial left-overs, or give up and sign up for Cloud Storage™ for the low-low price of a monthly recurring fee?
It’s Not Hoarding, I Swear
Although there are many people today who use just a lightweight laptop with something like 256 GB of storage in it without any complaints, the problem would seem to lie mostly with those who are really into having local and offline data. This can include things like multimedia content, but also project files and resources, which especially in the case of video editing and game development can quickly balloon into pretty serious size requirements.
Over the decades, data storage has seen a near-constant flurry of innovations and new technologies, always with the knowledge that within a decade there would be massively larger forms of storage, or at least big price drops, to look forward to. This is how my first late-90s PC had not just a zippy Celeron 400 CPU, but also a massive 4 GB HDD.
Compared to the 30 MB HDD in the 386-based system I had before, this was massive, but with multimedia content flooding in courtesy of the filesharing revolution, I quickly had to pop in a 10 GB HDD. By the time I upgraded to a new PC, that was considered small, and I found myself well above 20 GB before soon joining the 1 TB and later the 5+ TB club. By 2012 HDDs had reached multi-terabyte capacities, so this was basically becoming unavoidable.
Meanwhile a lot of files were offloaded or backed up onto optical media, both CDs and DVDs. Although ZIP disks also briefly made an appearance in my PCs, optical discs were simply far cheaper and more universally usable.
The Problem
Sony was the only manufacturer of 128 GB writable Blu-ray discs. Other manufacturers topped out at 100 GB.
Even before the current ‘AI’ datacenter-induced tripling of Flash storage costs that’s also affecting USB Flash drives and HDDs, optical media had been slowly phased out for a while. Even without checking sales numbers, you don’t have to be a genius to consider it a bad sign that manufacturers like Pioneer are exiting the optical storage market and big names like Sony ceased the production of recordable Blu-ray discs along with MiniDisc and MiniDV formats.
If you have recently shopped around for internal 5.25″ optical disc drives (ODDs) in particular, you may have noticed that these are becoming increasingly rare and expensive. Even stand-alone BD player options are becoming more limited, with that distinct ‘final gasp’ vibe that comes with a dying format, as with VHS and kin in the past.
Part of the problem can probably be attributed to the move away by content distributors – including multimedia, games and software – from physical media to online distribution methods. This takes the form of streaming services, Software-as-a-Service (SaaS), and online game stores. At this point you do not even need a BD player in your home game console, never mind PC, to install games. Neither do you need a BD player connected to your smart TV as you can just join the new brave world of terminal-based subscriptions.
So with demand for optical media massively reduced by this shift, what’s left for those of us who just want to back up our data in peace and without shelling out too much hard-earned cash?
The Future
With the prospect of cheap DVD and BD blanks becoming a thing of the past, or unusable due to a lack of new optical drives to use them with, what options remain? We can look at metrics like cost per GB to see what might conceivably make sense.
The most recent 50-disc spindle of DVD+Rs that I purchased came in at just under €15 for 235 GB, so that’s about 6 cents/GB, and I could have gotten it much cheaper by going for larger spindles and shopping around some. For comparison, SSD storage was more than triple that even before the recent price surge, and HDDs are coming in around that same price tag as well.
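That back-of-the-envelope math is easy to reproduce. A minimal sketch in Python, using the article's DVD figure; the SSD and HDD prices are assumed, illustrative values (not quotes from the article) chosen to mirror its rough "triple the DVD cost" and "around the same as DVD" claims:

```python
# Cost per GB for different storage media.
# The DVD entry uses the article's figure (EUR 15 for a 50-disc
# spindle of 4.7 GB DVD+Rs). The SSD and HDD prices below are
# assumed, illustrative values only.

def cost_per_gb(price_eur: float, capacity_gb: float) -> float:
    """Return cost in euro cents per gigabyte."""
    return price_eur / capacity_gb * 100  # convert EUR/GB to cents/GB

media = {
    "DVD+R spindle (50 x 4.7 GB, EUR 15)": (15.0, 50 * 4.7),
    "2 TB SATA SSD (assumed EUR 400)":     (400.0, 2000.0),
    "4 TB HDD (assumed EUR 240)":          (240.0, 4000.0),
}

for name, (price, capacity) in media.items():
    print(f"{name}: {cost_per_gb(price, capacity):.1f} cents/GB")
```

The DVD spindle works out to roughly 6.4 cents/GB, matching the article's "about 6 cents/GB" estimate.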
A quick look at LTO tapes and drives for sale shows that while tapes for the older LTO-8 standard from 2017 are pretty reasonable, the drives cost an absolute fortune, so you’d have to be pretty lucky to score one without having to pawn off a kidney.
Add to this that LTO tapes are only really guaranteed for a lifespan of 15-30 years and are incomparably slower due to being a linear format. This makes tape storage only really suitable for the coldest of cold storage, and not for keeping some videos around, or for game development resources that you would like to pop in and quickly query without dying from old age while a tape seeks to the appropriate position.
The Solution?
Even assuming that the current insane surge in pricing for RAM, NAND Flash, and even HDD storage is just a temporary blip, and that by the time 2027 rolls around the RAMpocalypse will just be a bad dream to meme about, the basic economics of cost per storage would still not have changed in any measurable way.
The advantage of optical media, especially DVDs, is that they’re a very simple technology, relatively speaking. While there is some impressive technology in the optical pick-up component of an ODD, over the decades they have become highly affordable commodity devices. Meanwhile the discs are very cheap to produce, being at their core just some plastic with a coating on which the bits are written, while being very durable if kept away from physical harm.
It’s also essentially guaranteed that a DVD+R or BD-R will not have its data altered, something which cannot be guaranteed with a USB Flash drive. Filesystem corruption and electrical issues may damage or even destroy the Flash drive.
Although it's easy to say that one should just ‘stop hoarding data’ or subscribe to a cloud storage solution (with its potentially unbounded cost per GB, high latency, and risk of data loss from a datacenter issue), there are many arguments to be made in favor of keeping local, offline copies, and for doing so on highly durable media. We just cannot be sure that optical media will remain an option in the future.
What is your take on this conundrum? How do you manage your storage needs in this modern era, and what are your plans for the future? Please feel free to sound off in the comments.
Ridge AI co-founders Jeffrey Heer and Ellie Fields. (Ridge AI Photo)
Ellie Fields and Jeffrey Heer know data visualization from the inside: Fields spent more than 12 years as a product and marketing leader at Tableau, and Heer is the University of Washington professor whose open-source tools are widely used for web-based visualization.
But even as they and their colleagues pushed the field forward, they couldn’t escape a similar conclusion: presenting and analyzing data on the web is basically still broken.
Their solution: Ridge AI, a Seattle-based startup that uses AI and browser-based technology to help software companies build and deploy interactive dashboards and data agents in hours instead of days or months, embedding them directly in their products for use by their customers.
The company calls its core product a “ridge” — a dashboard and a data agent that share a common data set, letting users get visual context from the dashboard and ask follow-up questions through the agent.
A sample ridge showing Washington state EV registration data alongside an AI data agent for follow-up questions. (Ridge AI Image)
Funding: Ridge AI is emerging from stealth Monday with $2.6 million in pre-seed funding led by Madrona. The Seattle venture capital firm’s investment was spearheaded by Managing Director Tim Porter and Venture Partner Mark Nelson, the former CEO of Tableau.
Joining in the funding is a roster of angel investors that reads like a who’s who of analytics, AI and data: Chris Stolte, Tableau co-founder and former CTO; Carlos Guestrin, co-founder of Turi and director of Stanford’s AI Lab; Adrien Treuille, founder of Streamlit; Elissa Fink, former Tableau CMO; and Jeff Hammerbacher, Cloudera founder, among others.
Target market: Although their technology could be applied broadly, Ridge AI is focusing specifically at the outset on serving software as a service (SaaS) companies, giving them a way to present rich, interactive analytics to the people and businesses that use their products.
In an interview, Fields said the need is especially acute when a SaaS company is trying to renew a customer’s contract. The product might be delivering real results, but if the people making the buying decision can’t see that in the data, the deal can be at risk.
“The CFO is going to be asking, is anyone even using this?” Fields said, calling it one of the use cases where Ridge AI’s technology could be of significant value to SaaS firms.
The pressure to prove this value has intensified amid the “SaaS-pocalypse,” as it’s known — as companies consolidate their software spending and the rise of custom AI-coded apps makes many of them question whether existing tools are worth keeping.
What they’re solving: Madrona’s Nelson said he experienced the larger problem during his time as CTO of Concur, where the company built an analytics product on top of IBM Cognos, giving customers the ability to glean insights into employee travel and spending.
It was important to the business, he said, but it was a pain to maintain, and it wasn’t in Concur’s core skillset. The problem persists for many SaaS companies to this day.
SaaS companies have historically had to choose between heavyweight business intelligence platforms like Tableau and Power BI, specialty embedded analytics tools, or building their own. Fields said none of those options was purpose-built for the problem Ridge is solving.
Founders: Ridge AI was co-founded by Fields, who serves as CEO, and Heer, chief scientist, who will continue as a UW professor in addition to working on the company.
Also on the team: Andy Caley, a founding engineer who previously worked at Tableau, and Fritz Lekschas, a founding research engineer with a Ph.D. from Harvard and more than 20 publications in data visualization.
From left, Madrona’s Tim Porter, Ridge AI CEO Ellie Fields, and Madrona’s Mark Nelson. (Madrona Photo)
Fields and Heer were introduced by Madrona’s Nelson and Porter. Nelson had known Fields since she worked for him at Tableau and he had separately kept in touch with Heer through his UW work. Porter, meanwhile, had gone to Stanford Business School with Fields.
“I can’t think of two people I like more, and would bet on more, than Jeff and Ellie,” Nelson said, describing the pairing as an example of what’s possible in Seattle’s tight-knit tech community.
Heer previously co-founded Trifacta, a data transformation company acquired by Alteryx in 2022. He and his academic collaborators have produced some of the most widely used open-source tools in data visualization, including Vega and Vega-Lite, D3.js, and the Mosaic framework that serves as Ridge AI’s technical foundation.
Fields joined Tableau as its first product marketer and rose to senior vice president of product development over more than 12 years, spanning the company’s IPO and its acquisition by Salesforce. She went on to serve as chief product and engineering officer at SalesLoft, where she experienced firsthand the problem Ridge is now trying to solve.
Technology: Ridge runs in the user’s web browser rather than on a remote server, using Heer’s open-source Mosaic framework and an in-browser database called DuckDB. That architecture delivers near-instant interactivity and means the software company that embeds it doesn’t pay for cloud computing costs with every dashboard interaction.
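Ridge’s browser-side architecture runs DuckDB compiled to WebAssembly via the Mosaic framework. As a rough illustration of the in-process idea (the query engine lives inside the application, so each dashboard interaction is a local call rather than a server round-trip), here is a sketch using Python’s standard-library sqlite3, another in-process database. The table and numbers are invented, loosely echoing the EV-registration example above; none of this is Ridge’s actual API:

```python
import sqlite3

# In-process database: the engine runs inside the application, so a
# dashboard-style query is a cheap local function call, not a network
# request billed against someone's cloud account.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ev_registrations (county TEXT, count INTEGER)")
conn.executemany(
    "INSERT INTO ev_registrations VALUES (?, ?)",
    [("King", 1200), ("Pierce", 300), ("Snohomish", 450)],  # made-up figures
)

# A dashboard interaction becomes a local aggregate query.
total = conn.execute("SELECT SUM(count) FROM ev_registrations").fetchone()[0]
print(total)  # 1950
```

The same pattern, with DuckDB-WASM instead of sqlite3 and the data shipped to the browser, is what lets the embedding company avoid per-interaction compute costs.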
On the creation side, AI agents handle the visualization design, so product managers can describe what they want in business terms rather than learning a specialized tool.
What’s next: Fields said Ridge AI plans to focus on its SaaS wedge for at least a couple of years before expanding, noting that the market has historically been under-served.
The company has been working with a small number of pilot customers, and is now inviting additional companies into a closed beta, accepting applications at ridgedata.ai.
Indoor wireless is hitting limits as more devices crowd the same spectrum. Streaming, video calls, and smart home gear are pushing networks harder while power use rises. A new class of laser chips offers a different path by moving data onto light.
Researchers built a chip-scale optical link that delivers ultra-fast indoor connections with lower energy use. Instead of broadcasting signals widely, it sends data through controlled infrared beams, opening more usable capacity while avoiding interference in dense spaces.
At the core is a chip with 25 microscopic lasers, each carrying its own stream. Working in parallel, they push throughput far beyond a single source. In testing, the setup reached more than 360 gigabits per second across a short indoor link.
The gains are not just speed. Power use drops significantly, offering a more efficient way to handle rising demand.
Laser array proves the speed
Performance comes from a 5 by 5 array of vertical-cavity surface-emitting lasers, each acting as its own high-speed channel.
In tests over two meters, individual lasers delivered about 13 to 19 gigabits per second. With 21 active channels, total throughput reached 362.7 gigabits per second, among the fastest chip-scale optical results so far.
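The arithmetic behind the headline figure is straightforward. A quick sketch, using the average per-channel rate implied by the totals (an idealization; the measured per-channel rates actually varied between roughly 13 and 19 Gbps):

```python
# Reported totals: 21 active VCSEL channels summing to 362.7 Gbps.
TOTAL_GBPS = 362.7
N_CHANNELS = 21

avg_per_channel = TOTAL_GBPS / N_CHANNELS          # about 17.27 Gbps
per_channel_gbps = [avg_per_channel] * N_CHANNELS  # idealized equal channels

total = sum(per_channel_gbps)
print(f"aggregate: {total:.1f} Gbps, per channel: {avg_per_channel:.2f} Gbps")
```

The implied average sits comfortably inside the reported 13 to 19 Gbps per-laser range, which is a useful sanity check on the aggregate number.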
The limit came from the receiver hardware, not the transmitter, suggesting higher speeds are possible with better components.
A custom optical setup also shapes each beam into a defined square, limiting overlap so multiple links can run side by side without interference.
Why light changes the equation
Radio networks struggle in crowded spaces where signals interfere and capacity gets stretched. Light avoids those limits by offering more bandwidth and precise control over where signals go.
Instead of blanketing a room, the system creates a grid of targeted beams with minimal spillover. Measurements show uniform coverage across the target area, helping maintain stable performance for multiple devices.
The setup runs at about 1.4 nanojoules per bit, roughly half that of comparable Wi-Fi systems. The tradeoff is range, as the current setup works over short distances and depends on line of sight.
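As a rough sketch of what that energy figure means in practice. The 2.8 nJ/bit Wi-Fi baseline is inferred from “roughly half,” and real transfers would add protocol and conversion overhead on top of this idealized math:

```python
J_PER_BIT_OPTICAL = 1.4e-9  # joules per bit, the reported figure
J_PER_BIT_WIFI = 2.8e-9     # assumed baseline ("roughly half" implies ~2x)

def transfer_energy_joules(gigabytes: float, j_per_bit: float) -> float:
    """Idealized energy to move `gigabytes` (decimal GB) at a given J/bit."""
    bits = gigabytes * 8e9  # 1 GB = 8e9 bits
    return bits * j_per_bit

optical = transfer_energy_joules(10, J_PER_BIT_OPTICAL)  # 112 J for 10 GB
wifi = transfer_energy_joules(10, J_PER_BIT_WIFI)        # 224 J for 10 GB
print(f"10 GB: optical {optical:.0f} J vs Wi-Fi {wifi:.0f} J")
```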
Where this goes next
This approach is meant to complement existing networks by offloading heavy traffic in high-demand indoor spaces.
The hardware fits on a sub-millimeter chip built with standard processes, making integration into fixtures or access points plausible, though no commercial timeline is given.
As demand rises, combining radio and light-based links could become standard, with laser systems handling the heaviest traffic.
In 2026, stolen credentials are a top-tier security priority. They are also a paradox: even though they are considered a significant risk, enterprises still opt for checkbox solutions and generic tools to mitigate the problem.
According to a recent survey commissioned by Lunar, a dark-web monitoring platform powered by Webz.io, 85% of organizations rank stolen credentials as a high or very high risk, with 62% saying they are in their top-three security priorities.
At the same time, I’ve spoken with dozens of organizations using Lunar’s community platform, who have told me things like, “we have MFA everywhere, so we’re covered”, and “our EDR and zero-trust stack already protects our employees.”
They fail to realize that EDR and zero-trust measures offer no protection when an employee logs into a critical SaaS service from an unmanaged home device.
The consequences of failing to detect stolen credentials in time can be catastrophic. According to IBM’s Cost of a Data Breach Report, a breach involving compromised credentials costs between $4.81 million and $4.88 million.
Considering that Lunar observed 4.17 billion compromised credentials in 2025 alone, the potential global cost of these attacks is staggering. All of this means that simple breach monitoring is no longer enough.
An enterprise mindset shift is needed to create a programmatic defense strategy that tackles the ever-evolving threat of infostealers.
Checkbox Monitoring and The Dangers of Using Generic Solutions
When speaking with organizations, I always ask how they mitigated the infostealer threat before onboarding Lunar. The answers I get follow the same pattern: “Exposed credentials are a serious problem, and we dedicated resources to solutions meant to mitigate the threat.”
What they didn’t realize is that those solutions were lacking and mainly consisted of:
A focus on data breaches instead of infostealers
ULPs (URL:login:password combolists) and non-forensic infostealer data
High latency and stale data sources
No automation, integrations, or investigation capabilities
Our research lays out just how serious the problem is. Only 32% of enterprises that we surveyed use dedicated credential monitoring solutions, while 17% have no tooling at all.
Meanwhile, more than 60% of organizations check for exposed credentials monthly, rarely, or not at all.
We’ve seen firsthand how these solutions perform. When new organizations onboard Lunar, many are shocked to realize that while their previous tools told them that a breach had happened, they never got the tools to properly investigate how it happened.
The forensic details, including the accounts that were compromised, the devices infected, the SaaS apps that could be impacted, not to mention the session cookies that were stolen, were simply not there.
While the checkbox approach is better than no security at all, it rarely provides the forensic detail that enterprises need to successfully mitigate the infostealer threat. So, what’s holding them back from scaling their operations?
The Infostealer Threat is Much Bigger Than Enterprises Think
This is where the infostealer paradox enters into our conversations. While everyone knows about the dangers of exposed credentials, they either fail to prioritize budgets or simply don’t know what kinds of solutions successfully mitigate the problem.
Furthermore, they don’t always understand just how prevalent credential theft actually is, which environments infostealers target, or what information they can access.
Analyzing the 4.17 billion compromised-credential records we collected in 2025, drawn from infostealer logs, stealer-derived combolists, marketplaces, and Telegram channels, we found that infostealers like LummaC2, Rhadamanthys, Vidar, Acreed, and others consistently slipped past enterprise monitoring, even in environments that considered themselves mature.
And while many new Lunar users thought that macOS was safer than Windows, they were shocked to hear about families like Atomic macOS Stealer (AMOS), Odyssey, MacSync, MioLab, and Atlas.
There is also an awareness problem regarding the data infostealers exfiltrate, which goes far beyond simple username/password pairs. Modern infostealers are now sold as full-fledged products, complete with subscription tiers, dashboards, and documentation tuned to harvesting cookies, session tokens, and SaaS access at scale, and organizations are now rushing to catch up and protect their networks.
For threat actors, session cookies don’t just provide access. They effectively open the front door, letting them skip login pages entirely: no password prompt, no MFA challenge, and often no obvious trace in standard authentication logs.
That is the piece of the puzzle that many organizations are only now internalizing.
What Does a Typical Infostealer Attack Look Like?
When we talk about what an infostealer attack looks like, and why checkbox security is ineffective, we often break it down into the following process:
Target is infected: The victim’s device is compromised by an infostealer delivered through vectors such as zero-day exploits, ClickFix campaigns, rogue browser extensions, unverified or pirated software, game mods, or malicious open-source projects.
Credentials are exfiltrated: The infostealer scrapes the browser for logins and cookies, including those for third-party portals, and sends them back to the malware operator.
Credentials are bundled and sold: The stolen credentials are bundled into logs and sold on underground markets and private channels.
Attackers access the enterprise network: The attacker who purchases the logs accesses the target network, including third-party portals, using a valid session token.
This entire chain of events can be completed in hours. Meanwhile, many of the organizations we speak with run credential checks once a month or rely on outdated data.
By the time anything shows up in their legacy monitoring tools, attackers have had plenty of time to explore and exfiltrate whatever data they want.
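Some back-of-envelope arithmetic makes that gap concrete. Assuming an exposure appears at a random time between monthly checks, and taking the article’s “hours” as roughly a day (both assumptions mine, for illustration):

```python
ATTACK_CHAIN_HOURS = 24   # assume the exfiltrate-to-access chain takes a day
CHECK_INTERVAL_DAYS = 30  # monthly credential checks

# An exposure appearing at a random time waits, on average, half the
# check interval before the next scheduled check can catch it.
avg_detection_lag_hours = (CHECK_INTERVAL_DAYS / 2) * 24  # 360 hours

head_start = avg_detection_lag_hours - ATTACK_CHAIN_HOURS
print(f"average attacker head start: {head_start:.0f} hours")  # 336 hours, ~2 weeks
```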
Developing a Mature Breach Monitoring Program
A mature breach monitoring program, like Lunar, provides continuous monitoring, automations, and integrations.
Organizations we work with that make the switch to a mature breach monitoring program have the tools they need to collect information from channels like stealer logs, Telegram groups, and marketplaces. Instead of relying on ad-hoc checks, they focus on three practical capabilities:
Continuous monitoring and normalization of key sources (breaches, stealer logs, combolists, marketplaces, and relevant channels), so security teams have a clear and deduplicated view of breach exposures.
Targeted automation that reduces false positives and noise, ensuring that analysts spend time on identities and sessions that actually matter.
Integrations into existing security and identity stacks (SIEM, SOAR, IDP) that execute playbooks end-to-end, resetting credentials, invalidating sessions, and blocking accounts as soon as exposures are confirmed.
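The playbook step above can be sketched roughly as follows. This is a generic illustration, not Lunar’s actual integration API; `run_exposure_playbook`, `reset_credential`, `invalidate_sessions`, `block_account`, and `log_event` are all hypothetical names:

```python
def run_exposure_playbook(exposure: dict, idp, siem) -> list:
    """Hypothetical end-to-end response for a confirmed credential exposure.

    `idp` and `siem` are stand-ins for identity-provider and SIEM clients;
    none of these calls correspond to a real vendor API.
    """
    actions = []
    if exposure.get("confirmed"):
        idp.reset_credential(exposure["account"])     # force a password reset
        actions.append("credential_reset")
        idp.invalidate_sessions(exposure["account"])  # kill stolen session cookies
        actions.append("sessions_invalidated")
        if exposure.get("severity") == "critical":
            idp.block_account(exposure["account"])    # lock the account until review
            actions.append("account_blocked")
        siem.log_event("exposure_remediated", exposure["account"], actions)
    return actions
```

The key design point is that the playbook runs end-to-end on confirmation, rather than queuing an alert for an analyst to act on hours or days later.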
Among Lunar users, we’ve seen a clear mindset shift once they get this right. They treat the infostealer threat as its own domain, complete with ownership, metrics, and playbooks, instead of managing their breach monitoring using unrelated tools.
This all goes back to Lunar’s core mission, which is to provide a free breach monitoring solution to any organization, regardless of budget, that delivers enterprise-grade coverage of compromised credentials, infostealers, and session cookies.
Our philosophy is to openly provide enriched compromised credential intelligence, enabling organizations to regain true visibility and resilience.
Redefining Breach Monitoring in 2026
Even seasoned and knowledgeable security teams can fall into the breach monitoring paradox, where they know the threat but behave as if monthly checks, MFA, and EDR are enough. But in 2026, infostealers move at a speed and scale that checkbox monitoring solutions were never designed to handle.
Treating breach monitoring as a must-have program, instead of a one-off product, provides your enterprise with the visibility needed to view compromised credentials wherever they appear, the context to understand what those exposures mean, and the playbooks to automatically react when an attack is detected.
To see how Lunar can help you find your organization’s compromised credentials, sign up for free access.
(Photo credit: This-Profession-1680)
Collectors frequently pause when they see one in a thrift store or online listing. A 1982 RCA Colortrak 2000 stands there with its 25-inch CRT behind a full tinted glass panel that swings open like a cabinet door, looking almost like a piece of furniture at first glance. The panel protected the tube from dust and, by reducing reflections, made darker scenes appear much more dramatic in well-lit rooms.
RCA positioned the Colortrak 2000 as the top of its line in the early 1980s. The regular Colortrak sets were fairly reliable at the time, but the 2000 series went a step further by including a specialized comb filter, which separated color signals more cleanly than conventional circuitry, producing crisp edges and minimal “bleeding” between colors on broadcasts and recordings. They also included a light sensor near the screen that automatically dimmed the display to match the ambient light. That was futuristic stuff for its time, when most TVs offered nothing but manual knobs to mess with.
People enjoyed the cabinet as much as the hardware itself. Acacia veneer and other quality hardwoods received a warm golden finish, polished to a high gloss that blended with living room decor. The speakers sat at the bottom of the cabinet on a glossy chrome base, in a dual-dimension audio arrangement with two nine-inch oval drivers and a built-in amplifier. RCA even touted the set as stereo-ready before stereo broadcasting became widespread, and you could connect other sources via audio jacks and tune bass and treble independently, which was unusual in ordinary sets at the time.
The controls stayed concealed until you needed them: a smoked acrylic door slid to one side to reveal the buttons for power, channel selection, and picture adjustments such as sharpness and hue, with no large knobs jutting out anywhere. The original remote, later branded the Digital Command Center, worked with the set and could also control other components in a full RCA system. A minor but neat touch: pressing the remote switched on the TV without a separate power button, a small sign of how RCA conceived of these sets as part of a complete home entertainment package.
The early models, like this 1982 example, had only RF connectors. Cable-ready tuning supported 127 channels, and a super ACU filter ensured color consistency among stations. Later Colortrak 2000 models added composite inputs and even S-video jacks to the back, accessed by tuning to a specific non-broadcast channel. This kept the sets useful long after they were new, particularly for plugging in game consoles or VCRs without a pile of adapters. There were even a few uncommon variants with BNC connectors, a nod to the professional video equipment most people never saw.
Sam Altman’s 13-page policy blueprint, ‘Industrial Policy for the Intelligence Age,’ proposes auto-triggering safety nets, containment playbooks for rogue AI, and direct citizen dividends from AI-driven growth. He told Axios it is a starting point, not a prescription.
OpenAI has published a 13-page policy document calling for sweeping economic reforms to prepare for what it describes as approaching superintelligence, including taxes on automated labour, a national public wealth fund seeded partly by AI companies, and pilots of a 32-hour working week.
The document, titled ‘Industrial Policy for the Intelligence Age: Ideas to keep people first,’ was released as Congress prepares to debate AI legislation. CEO Sam Altman told Axios in an exclusive interview that the scale of change coming from AI is comparable to the Progressive Era and the New Deal, and that the two most immediate dangers are cyberattacks and biological weapons capable of being enabled by advanced AI.
The most radical proposal in the document is the public wealth fund. OpenAI suggests the government create a nationally managed fund, seeded in part by contributions from AI companies themselves, that would invest in AI firms and other businesses adopting the technology and distribute returns directly to American citizens.
The model is comparable to Alaska’s Permanent Fund, which pays annual dividends to state residents from oil revenues.
On labour, the document floats taxes on automated labour and a shift in the tax base from payroll towards capital gains and corporate income, an acknowledgement that AI could hollow out the wage-and-payroll revenue that currently funds Social Security.
The 32-hour workweek proposal is framed as an ‘efficiency dividend’ from AI-driven productivity gains.
The document includes a section on what it calls ‘containment playbooks’ for scenarios in which dangerous AI systems become autonomous and capable of replicating themselves. OpenAI acknowledges scenarios where such systems ‘cannot be easily recalled,’ and proposes government co-ordination as the response.
The blueprint also envisions automatic safety net triggers: when AI-driven displacement metrics hit preset thresholds, benefits including unemployment payments and wage insurance would increase automatically, then phase out when conditions stabilise.
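The trigger mechanics lend themselves to a simple sketch. The document does not specify numbers; the thresholds and multiplier below are invented purely for illustration:

```python
def benefits_multiplier(displacement_rate: float,
                        trigger: float = 0.08,
                        phase_out: float = 0.05) -> float:
    """Hypothetical auto-trigger: scale benefits up once an AI-displacement
    metric crosses a threshold, and ramp back down as conditions stabilise.
    All numbers here are illustrative assumptions, not OpenAI's proposal."""
    if displacement_rate >= trigger:
        return 1.5   # full boost while displacement is above the trigger
    if displacement_rate <= phase_out:
        return 1.0   # baseline benefits once conditions stabilise
    # Linear ramp between the phase-out floor and the trigger threshold.
    return 1.0 + 0.5 * (displacement_rate - phase_out) / (trigger - phase_out)
```

The appeal of such a rule is that it removes legislative lag: benefits respond to measured conditions rather than waiting on a new bill.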
Altman told Axios that a major cyberattack enabled by near-future AI models is ‘totally possible’ within the next year, and that AI models being used to create novel pathogens is ‘no longer theoretical.’
Altman was candid with Axios about the dual nature of the document. OpenAI is the company racing to build the very technology it is warning about, and positioning itself as the responsible actor proposing solutions is plainly also a strategy to shape regulation before regulation shapes it. Anthropic has occupied a similar lane.
The policy paper arrives at a moment when OpenAI is preparing for an IPO, has closed a $110 billion private funding round, and is simultaneously under scrutiny over its conversion from non-profit.
Whether the altruism is genuine or strategic, Altman told Axios: ‘Some will be good. Some will be bad. But we do feel a sense of urgency. And we want to see the debate of these issues really start to happen with seriousness.’
from the rights-are-for-people-who-never-need-to-invoke-them-I-guess dept
The Supreme Court’s latest recap of its relative inactivity (Trump administration “emergency” appeals aside) has delivered yet more evidence of this court’s indifference to rights violations committed by the government. Other cases involving alleged rights violations that should have — at the very least — been handed over to a jury for further consideration were tacitly blessed by the top court in the land by its refusal to grant certiorari.
This one — involving the retaliatory arrest of an independent journalist by cops who didn’t like her reporting — is yet another miscarriage of justice by a Supreme Court whose majority simply won’t take cases that might force it to hold the government accountable for its actions.
This case has bounced up and down the judicial ladder for more than a half-decade. Laredo, Texas native/independent journalist Priscilla Villarreal has been live streaming and reporting via Facebook under the name “Lagordiloca” for several years. Laredo PD officers don’t like her because she asks them questions they don’t like answering and films them when they’re performing traffic stops and arrests.
After Villarreal published information about a Border Patrol officer who had committed suicide, the Laredo PD worked with local prosecutors to have her arrested. All Villarreal had done was ask a PD employee to confirm information she’d already obtained. The PD responded by opening an internal investigation to oust the employee that had responded to Villarreal’s queries. Then it decided the only way for justice to be done was to arrest the person who had merely received confirmation (via a law enforcement employee) she already had in her possession.
Prosecutors claimed Villarreal’s acquisition and publication of this information violated a state law forbidding people from profiting from “misuse of official information.” To support this claim, the prosecutors claimed Facebook clicks were a form of “profit.” To date, no other citizen has ever been prosecuted under this law that was clearly written to prevent government employees from profiting from information only government employees might have access to.
The local judge tossed the bullshit charges immediately after they were presented to her in court. Somehow, the district court managed to look past the obvious First Amendment violations to give the officers immunity. The Fifth Circuit’s first pass reversed this, with Judge Ho making it clear there’s no way any reasonable officer would have thought arresting a journalist simply for asking questions didn’t violate the Constitution.
This is not just an obvious constitutional infringement—it’s hard to imagine a more textbook violation of the First Amendment.
Then things got weird. A couple of judges in the minority thought this shouldn’t stand and started making noise. The Fifth Circuit agreed to an en banc hearing and reissued this opinion with a new dissent written by Chief Judge Priscilla Richman, along with some additional commentary by Judge Ho about how far removed from sanity Richman’s dissent was.
Two years later, it handed down its second take. And the majority somehow came to the conclusion that it’s okay to engage in retaliatory arrests as long as you can find any criminal statute at all to support the arrest. According to Judge Jones, Villarreal should have either limited herself to official channels or challenged the law itself in court, rather than ask a government employee to verify information Villarreal already possessed.
This was appealed. Eight months later, the Supreme Court sent it back down to the Fifth Circuit for yet another pass, instructing it to apply the Trevino standard. That standard is fairly simple: if a law is rarely, if ever, enforced but somehow shows up conveniently to do the cops’ dirty work when they want to retaliate against a person they don’t like, there’s a good chance this selective application is an established violation of rights. In this case, prosecutors had never used this law to charge anyone, ever.
The Fifth Circuit’s third pass — again written by Judge Edith Jones — said the Trevino factor just didn’t matter. If the law was on the books (even if it had never been enforced), it was justification enough for the arrest. And even if that arrest violated the Constitution, the officers should still be given qualified immunity because how could they have known that arresting the only person ever charged with this crime in its 23 years of existence might somehow be unconstitutionally retaliatory?
There’s a dissent written by Justice Sotomayor that’s even lengthier than my preamble. It’s worth reading, though, and it starts with this admonishment of the majority’s refusal to right this obvious wrong:
It should be obvious that this arrest violated the First Amendment. Yet the Fifth Circuit held that the officials were entitled to qualified immunity, and now Villarreal is left without a remedy. The Court today makes a grave error by declining to hear this case.
The nation’s top court has decided the Laredo PD and local prosecutors can walk away cleanly from a series of extremely obvious rights violations. And in doing so, it emboldens them (and others) to engage in future retaliatory arrests of journalists they don’t like.
The Supreme Court majority is apparently willing to pretend rights don’t exist when it’s convenient to do so, just like the officers whose actions it tacitly blesses with this particular inaction. Sotomayor drills down on this, rubbing the majority’s nose in its deliberate dismissal of constitutional rights:
[T]he Fifth Circuit found that the officials reasonably believed that they had probable cause to arrest Villarreal for violating §39.06(c). Id., at 385–390. Not so. Just like an individual cannot be convicted of a crime for engaging in First Amendment activity, it is axiomatic that a probable cause determination cannot be based on such protected activity either.
[…]
It necessarily follows that when an arrest is based on protected First Amendment activity, that activity cannot constitute probable cause and support adverse police action. All reasonable officers know this.
[…]
Here, it is hard to conceive of a more obvious constitutional violation than arresting a journalist who, in searching for corroboration, simply asks a government source for information. That is the essence of many journalists’ jobs. The arrest does not somehow become reasonable, and constitutional, merely because an unconstitutional application of a statute authorizes it.
All we have is the dissent. All Villarreal has is the knowledge that Laredo PD officers and local prosecutors will be digging through the state statutes to find something else to charge her with the next time her reporting pisses them off. The Supreme Court issued a short, clear instruction to the Fifth Circuit, telling it to apply a specific legal standard. Instead, the Fifth Circuit — led by the consistently awful Judge Edith Jones — sidestepped this instruction on its way to granting the officers qualified immunity. And that deliberate refusal to engage with the Supreme Court’s specific instructions has now been ignored by the same court that strongly hinted the Fifth Circuit got this wrong. It’s a shrug that lets the general public know exactly where it stands: at the bottom of the national organizational chart, with no layer of protection between it and government officials who seek to do it harm.
Baseus has launched the PicoGo Air in China, and it’s making a strong case for being the slimmest companion your phone could ask for. At just 6.9mm thin, this magnetic power bank is designed for people who want the extra juice without the bulk.
The PicoGo Air packs a 5,000mAh rated capacity, which is enough to give your phone a solid top-up when you need it the most. It supports 15W magnetic wireless charging and 22.5W wired fast charging, which is good enough for a slim power bank like this one.
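Some back-of-envelope charge math, assuming a 3.7 V nominal cell voltage (a common Li-ion figure the spec sheet doesn’t confirm) and ignoring conversion losses, which are substantial for wireless charging in practice:

```python
CAPACITY_MAH = 5000
NOMINAL_V = 3.7  # assumed typical Li-ion nominal voltage

energy_wh = CAPACITY_MAH / 1000 * NOMINAL_V  # about 18.5 Wh stored

# Idealized (lossless) time to deliver that energy at each rated power:
hours_wireless = energy_wh / 15    # about 1.2 h at 15 W wireless
hours_wired = energy_wh / 22.5     # about 0.8 h at 22.5 W wired
print(f"{energy_wh:.1f} Wh; wireless ~{hours_wireless:.1f} h, wired ~{hours_wired:.1f} h")
```

Real-world numbers would be noticeably worse, since wireless transfer efficiency often sits around 60 to 80 percent.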
What makes it more than a slim power bank?
Slimness is not the only selling point of the Baseus power bank. According to Notebookcheck, one standout feature is PowerSense NFC. Tap your phone to the power bank, and the companion app will show you a detailed battery status. It’s a small touch, but it’s the kind of thing that makes you feel like you know how much charge you have left, instead of guessing from a blinking LED.
One challenge thin power banks face is heat management. Baseus is aiming to solve this problem with its Glacier Heat Dissipation Structure, paired with a VC heat sink and an aluminum alloy body. Together, they should help keep temperatures under control during charging. There’s also an adaptive temperature control feature to ensure your charging stays stable and safe.
Does it come with anything in the box?
Baseus bundles a short cable with the PicoGo Air that supports up to 60W fast charging, so you can use it with other power adapters. The power bank itself comes in three metallic finishes with an Apple-inspired curved design and strong built-in magnets.
The Baseus PicoGo Air is currently available in China for CNY 299, roughly $44. International availability has not been confirmed yet, but Baseus has a solid track record of bringing its PicoGo lineup to global markets, so a US release in the coming months would not be surprising.