Historically, moving and pointing a camera while filming was the job of a highly skilled individual. However, there are machines that can do that, enabling all kinds of fancy movement that is difficult or impossible for a human to recreate. A great example is this pan-tilt build from [immofoto3d].
The build uses a hefty cradle to mount DSLR-size cameras or similar. It’s controlled in the tilt axis by a chunky NEMA 17 stepper motor hooked up to a belt drive for smooth, accurate movement. Similarly, another stepper motor handles the pan axis, with an option for upgrade if you have a heavier camera rig that needs more torque to spin easily. Named Gantry Bot, it’s an open-source design with source files available, so you can make any necessary tweaks on your own. You will have to bring your own control mechanism, though—telling the stepper motors what to do and how fast to do it is up to you.
It’s a heavy-duty build, this one, and you’ll really want a decent metal-capable CNC to get it done, along with a 3D printer for all the plastic pieces. With that said, we’ve featured some other similar builds that might be more accessible if you don’t have a hardcore machine shop in the basement. If you’ve got your own impressive motion rig in the works, be sure to notify the tipsline!
Virtual Private Networks (VPNs) promise to hide your online activities from prying eyes, but still need to gather some information to work properly.
Understanding exactly what data a VPN collects – and why – can help you decide whether a VPN service truly protects your privacy or simply adds another unwanted layer of surveillance.
From activity logs to the different policy types, we’ll walk you through the typical categories of logs a VPN provider might keep. We’ll explain what a “no-logs” VPN really means, highlight when a VPN’s data collection becomes too risky, and provide you with some practical tips for picking a trustworthy VPN provider.
The most trustworthy VPNs will only log what’s absolutely necessary, but what does that include? (Image credit: Getty Images)
What your VPN needs to collect
A VPN’s primary job is to create an encrypted tunnel between your device and a remote server before forwarding your traffic to the internet.
To do this, most VPN providers keep a handful of basic records. These logs are usually short-lived. They’re also typically aggregated and stripped of personally identifying details. Red flags appear when a provider retains identifying logs.
Connection logs
Connection logs capture the technical handshake that takes place each time you start a VPN session.
Typical entries include your device’s original IP address (the IP address assigned by your ISP), the address of the VPN server you connect to, timestamps marking when the session started and ended, and bandwidth usage.
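To make that concrete, here's what a single connection-log entry might contain. The field names and values below are invented for illustration and aren't taken from any real provider's schema:

```python
from datetime import datetime, timezone

# Hypothetical connection-log entry; field names are illustrative,
# not taken from any real VPN provider's schema.
connection_log = {
    "client_ip": "203.0.113.42",         # IP assigned by your ISP (identifying!)
    "server_ip": "198.51.100.7",         # VPN server you connected to
    "session_start": datetime(2026, 1, 15, 9, 30, tzinfo=timezone.utc),
    "session_end": datetime(2026, 1, 15, 11, 5, tzinfo=timezone.utc),
    "bytes_transferred": 1_250_000_000,  # bandwidth usage for the session
}

# The privacy-sensitive field is client_ip; a privacy-focused provider
# would discard it quickly or never write it in the first place.
session_minutes = (connection_log["session_end"]
                   - connection_log["session_start"]).total_seconds() / 60
print(session_minutes)  # 95.0
```

Note that everything here except `client_ip` is operationally useful without being personally identifying, which is why retention of the original IP is the field to scrutinize.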
These logs allow the VPN provider to monitor server load and troubleshoot connectivity problems. Connection logs also allow the VPN to manage the maximum number of simultaneous connections per account.
Since connection logs only record that a connection was made and not what you did while connected, they pose relatively little risk to privacy. That said, retaining the original IP address does link you to the session — but a truly privacy-focused VPN will either quickly discard it or never store it at all.
Activity logs
When a VPN advertises itself as a no-logs service, it’s promising that it doesn’t keep any records of what you do while you’re connected.
These activity or traffic logs are the most serious privacy concern. Activity logs can contain the websites you visited and the DNS lookups that translate domain names into IP addresses. They can even include which apps or online services you used.
If a VPN provider stores any of the above activity logs, it can reconstruct a detailed picture of your online life, defeating the purpose of using a private VPN. A true no-logs VPN should explicitly state that it never records activity logs.
Server-level logs
At the server level, providers may keep minimal data, such as the amount of traffic passing through a particular node or generic error messages.
Having this information helps a VPN provider fine-tune performance and balance loads across the network. It can also help identify hardware failures should they arise.
These logs lack any user-specific identifiers, meaning they’re considered the least intrusive form of data collection.
Aggregated logs
Aggregated logs are big-picture statistics that a VPN collects from many users at once.
Nothing collected points back to you personally. Instead, the VPN records broad statistics such as the number of connections to a given server, the total bandwidth consumed, or generic timestamps. Properly aggregated data never includes your real IP address, the websites you visit, or any account ID that could identify you.
Even VPNs that claim to be “no-logs” need a small amount of information to keep their service running smoothly. Aggregated logs help them know when to add more servers or when there’s an outage or otherwise unusual activity.
The key thing to watch out for here is whether the VPN collects any identifying logs before aggregating data. Provided there’s no raw identifiable data, aggregation is harmless.
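The distinction above can be sketched in a few lines: per-session records are reduced to totals, and no identifier survives the reduction. The field names here are hypothetical:

```python
# Sketch of aggregation as described above: per-session records are
# collapsed into totals, and no user identifiers are retained.
# Field names are hypothetical.
raw_sessions = [
    {"client_ip": "203.0.113.42", "bytes": 500},
    {"client_ip": "198.51.100.9", "bytes": 300},
    {"client_ip": "203.0.113.42", "bytes": 200},
]

aggregated = {
    "total_sessions": len(raw_sessions),
    "total_bytes": sum(s["bytes"] for s in raw_sessions),
    # No IPs, account IDs, or per-user detail survive the aggregation.
}
print(aggregated)  # {'total_sessions': 3, 'total_bytes': 1000}
```

The privacy question is what happens to `raw_sessions` afterwards: if the raw, identifiable records are deleted immediately, aggregation is benign; if they're retained, the "aggregated" label is cosmetic.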
Account and payment logs
A VPN has another set of logs that sit outside the VPN tunnel entirely: account and payment logs.
These typically include the email address you signed up with, the payment method you used, when you created the account, and any customer support tickets you may have opened.
Though these logs don’t reveal what you do online, they can tie that activity to a real identity.
If a VPN keeps detailed account or payment information, it creates a link between you and any network logs it might have. If you’re a particularly privacy-conscious user, you might want to consider providers offering anonymous payment and signup, such as Mullvad.
What a no-logs VPN really means
When a VPN advertises itself as a “no-logs” service, the concept seems simple enough: it doesn’t keep any records of what you do online.
In practice, however, most “no-logs” VPNs still store a small amount of data – just enough to keep the network running smoothly. That data is usually non-identifying, such as generic connection timestamps and total bandwidth used, and never includes things like your real IP address or the websites you visit.
While a no-logs VPN may retain these minimal, anonymized logs for operational reasons, a zero-logs VPN keeps no records at all, including non-identifying data.
So when you see a VPN with a “no-logs” label, treat it as a promise that the VPN limits its data collection to the bare essentials and doesn’t store anything that could directly link activity back to you. If you’re after more complete protection, however, look to zero-log VPNs.
When data collection goes too far
Collecting detailed activity logs undermines the whole point of a VPN: shielding your online activities from snoopers.
When a VPN provider records browsing history, DNS queries, or precise timestamps, it can piece together what you accessed, when, and from where. This can be especially dangerous for users living under restrictive regimes where this information may be used against them.
Even in freer societies, detailed logs are vulnerable to data breaches or may otherwise be sold to third parties or requested by authorities.
Free VPNs are the most common culprits of excessive data collection. Lacking subscription revenue, they often make money by selling user data to third parties. For users who rely on a VPN to browse and communicate privately or bypass internet censorship, any retention of original IP addresses or activity logs dramatically increases risk.
If a malicious actor were to obtain these activity or usage logs, they could correlate them with other data sources to identify you. Some of the risks include legal repercussions as well as harassment.
How to choose a trustworthy VPN
Choosing a VPN that respects your privacy starts with looking beyond marketing slogans and focusing on the provider’s real practices. A trustworthy service will prioritize keeping your online activity hidden while offering much-needed security features.
Stick with trusted, vetted names: Look for VPN providers with a solid track record and transparent ownership. The best secure VPNs are less likely to disappear overnight, leaving your data exposed.
Avoid dodgy free VPNs: Free VPNs often fund themselves by logging and selling user data, identifying information included. If a VPN is free, assume it’s monetizing you in some way and consider a paid alternative.
Check out the VPN’s privacy policy and audit history: Read the VPN’s privacy policy carefully for explicit statements about data retention. To be safe, prioritize VPN services that have undergone independent audits and publicly share the results.
Check out the add-on features/extras available: The best VPNs strengthen security through extras like a kill switch or Double VPN servers. When these add-ons are well implemented, they can provide an extra layer of security without compromising privacy.
This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Parsity and delivered to your inbox for free!
Engineers Aren’t Bad at Communication. They’re Just Speaking to the Wrong Audience.
There’s a persistent myth that engineers are bad communicators. In my experience, that’s not true.
Engineers are often excellent communicators—inside their domain. We’re precise. We’re logical. We structure arguments clearly. We define terms. We reason from constraints.
The breakdown happens when the audience changes.
We’re used to speaking in highly technical language, surrounded by people who share our vocabulary. In that environment, shorthand and jargon are efficient. But outside that bubble, when talking to executives, product managers, marketing teams, or customers, that same precision can be confusing.
The problem isn’t that we can’t communicate. It’s that we forget to translate.
If you’ve ever explained a critical issue or error to a non-technical stakeholder, you’ve probably experienced this: You give a technically accurate explanation. They leave either more confused than before, or more alarmed than necessary.
Suddenly you’re spending more time clarifying your explanation than fixing the issue.
Under pressure, we default to what we know best—technical detail. But detail without context creates cognitive overload. The listener can’t tell what matters, what’s normal, and what’s dangerous.
That’s when the “engineers can’t communicate” narrative shows up.
In reality, we just skipped the translation step.
The Writing Shortcut
One of the simplest ways to improve your written communication today: Run your explanation through an AI model and ask, “Would this make sense to a non-technical audience? Where would someone get confused?”
You can also say:
“Rewrite this for an executive audience.”
“What analogy would help explain this?”
“Simplify this without losing accuracy.”
Large language models are particularly good at identifying jargon and offering alternative framings. They’re essentially translation assistants.
Analogies are especially powerful. If you’re explaining system latency, compare it to traffic congestion. If you’re describing technical debt, compare it to skipping maintenance on a house. If you’re explaining distributed systems, try using supply chain examples.
The goal isn’t to “dumb it down.” It’s to map the unfamiliar onto something familiar.
Before sending an email or report, ask yourself:
Does this audience need to understand the mechanism, or just the impact?
Does this explanation help them make a decision?
Have I defined terms they might not know?
Translation When Speaking
When speaking—especially in meetings or presentations—most engineers have one predictable habit: We speak too fast.
Nerves speed us up. Speed causes filler words. Filler words dilute authority.
To prevent that, follow a simple rule: Speak 10 to 15 percent slower than feels natural.
Slowing down cuts down the number of times you say “um” and “uh”, gives you time to think, makes you sound more confident, and gives the listener time to process.
Another rule: Say only what the audience needs to move forward.
Explain just enough for the person to make a decision. If you overload someone with implementation details when they only need tradeoffs, you’ve made their job harder.
The Real Skill
The key skill in communication is audience awareness.
The same engineer who can clearly explain a concurrency bug to a peer can absolutely explain system risk to an executive. The difference is framing, vocabulary, and context. Not intelligence.
In the age of AI, where code generation is increasingly commoditized, the ability to translate complexity into clarity is becoming a defining advantage.
Engineers aren’t bad communicators. We just have to remember that outside our bubble, translation is part of the job.
—Brian
Robert Goddard launched the first liquid-fueled rocket 100 years ago, but his legacy still has relevant lessons for today’s engineers. Although Goddard’s headstrong confidence in his ideas helped bring about the breakthrough, it later became an obstacle in what systems engineer Guru Madhavan calls “the alpha trap.” Madhavan writes: “We love to celebrate the lone genius, yet we depend on teams to bring the flame of genius to the people.”
For Communications of the ACM, two Microsoft engineers propose a model for software engineering in the age of AI: Making the growth of early-in-career developers an explicit organizational goal. Without hiring early-career workers, the profession’s talent pipeline will eventually dry up. So, they argue, companies must hire them and develop talent, even if that comes with a short-term dip in productivity.
Looking for a job? Last year, IEEE Industry Engagement hosted its first virtual career fair to connect recruiters and young professionals. Several more career fairs are now planned, including two upcoming regional events and a global career fair in June. At these fairs, you can participate in interactive sessions, chat with recruiters, and experience video interviews.
Many people base huge swaths of their lives on foundational philosophical texts, yet few have read them in their entirety. The one that springs to the forefront of many of our minds is The Art of Computer Programming by Donald Knuth. Full of many clever and outright revolutionary algorithms and new ways of thinking about how computers work, [Attoparsec] has been attempting to read this tome from cover to cover, and has found some interesting tidbits. One of those is the various algorithms around Gray Codes, and he built this device as a visual aid.
Gray code, otherwise known as reflected binary, is a way of ordering an arbitrarily large set of binary values so that only one bit changes between any two adjacent values. The most common place these are utilized is in things like rotary encoders, where they provide better assurance that the position of a shaft is in a known location. To demonstrate this in a more visual way, [Attoparsec] hooked up an industrial signal light, normally used for communicating the status of machinery in a factory, and then programmed it to display the various codes. A standard binary counter is used as a reference, and it can also display standard Gray code as well as a number of other algorithms used for solving similar problems.
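The defining property is easy to verify in code: the standard conversion from plain binary to reflected binary is a single XOR with a shifted copy of the number. This is a generic sketch of the technique, not [Attoparsec]'s actual firmware:

```python
def to_gray(n: int) -> int:
    """Convert a binary number to its reflected-binary (Gray code) value."""
    return n ^ (n >> 1)

# Adjacent values differ in exactly one bit -- the property that makes
# Gray code useful for rotary encoders: a misread during a transition
# is off by at most one position.
for i in range(100):
    diff = to_gray(i) ^ to_gray(i + 1)
    assert bin(diff).count("1") == 1

print([to_gray(i) for i in range(8)])  # [0, 1, 3, 2, 6, 7, 5, 4]
```

Compare with plain binary, where going from 3 (`011`) to 4 (`100`) flips three bits at once; if an encoder reads mid-transition, the result could be any of eight values.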
[Attoparsec] built this as an interactive display for the Open Sauce festival in San Francisco. To that end it needed to be fairly rugged, so he built it out of old industrial equipment, which is also a fitting theme for the light itself. There’s also a speed controller and an emergency stop button which also add to the motif. For a deeper dive on Gray Codes and their uses, take a look at this feature from a few years back.
Pepper, a New York-based technology platform for independent food distributors, has acquired Alima, a Y Combinator-backed startup that built ordering and procurement software for small food distributors in Latin America. The deal, announced on Tuesday with no disclosed financial terms, brings Alima’s two cofounders into Pepper’s leadership team and extends the company’s push into AI-driven product content and data infrastructure for an industry that still runs largely on phone calls, faxes, and personal relationships.
Jorge Vizcayno, Alima’s chief executive, will lead Pepper’s product content platform and data infrastructure, which uses AI to match and enrich product catalogues at scale. Blanca Espinosa, Alima’s chief marketing officer and cofounder, will head customer implementation, applying AI tooling to the onboarding process that has historically been one of the most friction-heavy parts of selling software to food distributors.
Two companies, one thesis
The acquisition is small in isolation but revealing in what it says about where vertical software for food distribution is heading. Pepper and Alima were built on the same premise: that independent food distributors, who collectively account for more than two-thirds of food distribution in North America and handle over $1.4 trillion in annual sales, are woefully underserved by technology.
Alima, founded in 2021, tackled the problem from the Latin American side, where the gap is even wider. More than 85 per cent of B2B food suppliers and distributors in the region lack digital sales capabilities, according to the company’s own estimates. Alima built an ordering platform for small and mid-sized distributors, focusing initially on fresh produce procurement in Mexico. The company went through Y Combinator’s Winter 2022 batch and raised $1.5 million in seed funding from Soma Capital, YC, The Dorm Room Fund, and angel investors.
Pepper, meanwhile, has grown into a broader platform covering ordering, sales and marketing, accounts receivable, and embedded payments for US-based food distributors. The company has raised $99 million across three rounds, most recently a $50 million Series C in February led by Lead Edge Capital, with participation from ICONIQ, Index Ventures, Greylock, Harmony Partners, and Interplay. It now serves more than 500 distributors representing approximately $30 billion in annual gross merchandise volume.
The AI angle
The strategic logic of the deal centres on product content, the sprawling, fragmented catalogues that food distributors must manage across thousands of SKUs from hundreds of suppliers. In food distribution, product data is notoriously messy: item descriptions vary between suppliers, packaging formats differ by region, and pricing changes frequently. Pepper has been building AI systems to match and enrich this data automatically, and Vizcayno’s experience building similar infrastructure for Latin American distributors makes the acquisition a talent and technology play as much as a market expansion one.
Espinosa’s role is equally telling. Customer implementation, the process of getting a distributor onto a new technology platform, is where many vertical SaaS companies lose deals. Distributors often have limited technical staff, legacy systems that resist integration, and operations that cannot afford downtime during a migration. Pepper is betting that AI-assisted onboarding can compress what has traditionally been a months-long process, and Espinosa’s background in customer acquisition at Alima positions her to lead that effort.
This is Pepper’s second acquisition in seven months. In August 2025, it acquired Kimelo, a distribution toolset that included a restaurant supply ordering app. The pace suggests Pepper is consolidating a fragmented market of small vertical tools into a single platform, a playbook familiar from other industries but still relatively early in food distribution.
A $1.4 trillion market, still on paper
The broader context is that food distribution technology remains in its early innings despite its enormous addressable market. Independent distributors are the backbone of the food supply chain, connecting farms and manufacturers to the restaurants, grocery stores, and institutions that feed people. Yet the industry’s technology adoption lags far behind comparable sectors like logistics, retail, and financial services.
Pepper’s investor list, which includes Index Ventures and Greylock, signals that serious venture capital is flowing into the space. The $50 million Series C in February valued the company at an undisclosed figure but positioned it as the category leader in a market where no dominant platform has yet emerged. The Alima acquisition adds Latin American domain expertise and a bilingual founding team to a company that will likely need to expand beyond the US to justify its funding trajectory.
For Alima’s founders, the framing is pragmatic. Vizcayno described the acquisition as the most honest continuation of Alima’s journey. Whether that honesty reflects strategic alignment or the practical reality that a $1.5 million seed-stage startup in a difficult Latin American market found a faster path to impact inside a better-funded platform is, ultimately, the same thing said two different ways.
A startup called Friending has launched a social platform built around a premise that sounds almost quaint in 2026: helping people make friends by meeting in person. The app, based in Raleigh, North Carolina, connects users by shared interests and geographic proximity, then deliberately limits chat functionality to push them toward face-to-face meetings rather than prolonged online conversations. Every user is verified through a third-party identity service, and the platform can confirm when two users’ phones are physically near each other, a feature designed to validate that meetings actually happen.
The timing is deliberate. In 2023, US Surgeon General Vivek Murthy issued an 82-page advisory declaring loneliness and social isolation a public health epidemic, finding that lacking social connection carries health risks comparable to smoking up to 15 cigarettes per day. Social isolation increases the risk of premature death by 29 per cent, heart disease by 29 per cent, and stroke by 32 per cent. Among older adults, chronic loneliness raises the risk of dementia by approximately 50 per cent. Half of American adults reported experiencing loneliness even before the pandemic.
Friending is far from the first app to try to address this. Bumble BFF launched in 2016 and saw a 16 per cent increase in time spent on its parent platform after adding the feature. Peanut, which connects mothers, has raised $17 million. Yubo, aimed at young adults, has raised $65.7 million. The friendship app category as a whole has attracted more than $84 million in venture capital. Yet none of these platforms has achieved the scale or cultural penetration of dating apps, which suggests either that the market is harder to crack or that the product designs have not yet found the right formula.
What Friending does differently
Friending’s distinguishing feature is its insistence on brevity in online interaction. Where most social platforms optimise for engagement time, measuring success by how long users stay on their screens, Friending treats extended chat as a failure state. The app is designed so that the valuable action is not the conversation but the meeting that follows it. The proximity verification feature, which registers when two users’ phones are physically close, serves as both a safety mechanism and a behavioural nudge: it confirms the meeting happened and reinforces the platform’s core proposition.
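Friending hasn't disclosed how its proximity verification actually works (it could rely on Bluetooth, ultra-wideband, GPS, or something else entirely), but as a rough sketch, a GPS-based version might reduce to a great-circle distance test between two reported coordinates:

```python
from math import radians, sin, cos, asin, sqrt

def within_meters(lat1, lon1, lat2, lon2, threshold_m=50.0):
    """Haversine great-circle distance check between two coordinates.

    A hypothetical sketch of how proximity verification *could* work;
    Friending has not disclosed its actual mechanism.
    """
    earth_radius_m = 6_371_000
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    distance_m = 2 * earth_radius_m * asin(sqrt(a))
    return distance_m <= threshold_m

# Two phones at roughly the same cafe (~11 m apart): verified.
print(within_meters(35.7796, -78.6382, 35.7797, -78.6382))  # True
# Raleigh vs. New York: not a meeting.
print(within_meters(35.78, -78.64, 40.71, -74.00))  # False
```

In practice a GPS-only check would be spoofable, which is presumably why such systems tend to combine location with short-range radio signals that are harder to fake at a distance.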
The identity verification layer is worth noting in a market where catfishing and fake profiles have eroded trust across social platforms. Friending uses a third-party verification system, though the company has not disclosed which provider it uses or what level of identity confirmation is required.
Gabor Kadas, the company’s founder, has described the app as a response to a paradox he experienced personally: moving between countries and accumulating thousands of online connections while feeling increasingly isolated. The company is currently raising venture capital to fund development and expansion, though it has not disclosed the size of the round or any committed investors.
The harder question
The challenge for any friendship app is not getting people to download it but getting them to use it more than once. Dating apps benefit from a powerful, specific motivation: the desire for romantic connection is urgent enough to overcome the friction of meeting strangers. Friendship is different. The need is real but diffuse, and the social cost of admitting you need an app to make friends remains higher than the cost of admitting you need one to find a date.
There is also the question of whether limiting online interaction actually helps. Research from the New York Academy of Sciences suggests that the relationship between social media and loneliness depends on the type of platform and the nature of the engagement. Active participation, such as responding to posts and sending messages, is associated with reduced loneliness. Passive use, such as scrolling without interacting, is not. By restricting chat, Friending may be removing one of the mechanisms through which users build the comfort and trust necessary to meet a stranger in person.
None of this means the idea lacks merit. The Surgeon General’s advisory was not a passing observation; it was a formal declaration that the country’s social fabric is fraying in ways that produce measurable harm. If Friending can convert even a fraction of the lonely half of America into regular users, it will have found something the larger platforms have not. The question is whether an app that asks people to put down their phones is fighting the problem or fighting human nature at the same time.
Samsung is expanding its already crowded TV lineup for 2026 with a new range of Mini LED 4K UHD models, alongside an updated Neo QLED series that pushes further into premium territory. The strategy is familiar but effective: take the core advantages of Mini LED backlighting (better contrast control, higher brightness, and more precise local dimming) and pair them with a deeper layer of AI-driven processing and smart platform refinements.
There’s a lot to unpack across both categories, so we’re keeping this focused. This article breaks down Samsung’s 2026 Mini LED 4K lineup, where the company is clearly trying to hit the sweet spot between performance and price; the Neo QLED models, which lean more heavily into flagship features and higher-end positioning, are covered separately.
What Are Samsung Mini LED TVs?
Samsung’s Mini LED TVs are still LCD-based displays, but they use a more advanced form of full-array LED backlighting. The difference comes down to scale: the LEDs are significantly smaller, which allows for far more precise local dimming and better control of light across the screen—especially when rendering bright objects against darker backgrounds.
When paired with HDR formats like HDR10+, this improved backlight control translates into higher peak brightness, better contrast, and expanded color volume. In practical terms, that means a more dynamic and accurate picture without abandoning the proven strengths of LCD technology.
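As a toy illustration of why smaller LEDs help, you can model the screen as a grid of backlight zones, each driven by the brightest pixel it covers; shrinking the zones confines the light to where it's needed. This is a deliberate simplification of real dimming algorithms, which also smooth zone transitions and manage blooming:

```python
import numpy as np

# Toy model of local dimming: the frame is split into backlight zones,
# and each zone's LED level is set by the brightest pixel it covers.
# More (smaller) zones = finer light control, the core Mini LED gain.
frame = np.zeros((8, 8))   # luminance, 0.0 (black) .. 1.0 (peak white)
frame[1, 6] = 1.0          # one bright object on a dark background

def zone_levels(frame, zone):
    h, w = frame.shape
    return frame.reshape(h // zone, zone, w // zone, zone).max(axis=(1, 3))

coarse = zone_levels(frame, 4)  # 2x2 zones: conventional full-array LED
fine = zone_levels(frame, 2)    # 4x4 zones: Mini LED-style dimming

print(coarse.tolist())   # [[0.0, 1.0], [0.0, 0.0]] -- a quarter of the screen lights up
print(fine.tolist()[0])  # [0.0, 0.0, 0.0, 1.0]     -- only one small zone lights up
```

With the coarse grid, the single bright pixel forces a quarter of the screen's backlight on, washing out nearby dark areas (the "blooming" halo); with smaller zones, the lit region shrinks toward the object itself.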
Samsung 2026 Mini LED TV Lineup
For 2026, Samsung is introducing two Mini LED TV series so far: the M80H and M70H. Both models feature 4K UHD resolution, the Tizen smart TV platform, gaming-focused features, and Samsung’s Vision AI Companion for enhanced picture and usability.
The M80H series is available in screen sizes from 55 to 85 inches, while the M70H series spans a broader range from 43 to 85 inches. Between the two, there’s enough flexibility to match just about any viewing distance or room size without forcing a compromise on features.
Key Features
Both series are built to deliver a strong 4K UHD viewing experience, with AI-driven processing handling upscaling and scene optimization. The M80H uses Samsung’s NQ4 AI Gen2 Processor, while the M70H relies on the Mini LED Processor 4K, with both designed to enhance clarity and detail on a scene-by-scene basis.
Samsung’s Real Depth Enhancer is included on both models, improving foreground definition and helping key on-screen elements stand out more clearly. The M80H adds AI Customization Mode, which learns your preferred picture settings by genre during setup and then automatically adjusts image quality based on what you’re watching.
For audio, the M80H also includes Active Voice Amplifier, which boosts dialogue and important sound effects to improve clarity—especially useful when background noise tries to steal the scene.
Also, with Q Symphony, the M80H and M70H can be combined with compatible Samsung soundbars and Wi-Fi speakers to operate as a single, coordinated sound system rather than isolated components.
There’s a fair amount of feature overlap between the M80H and M70H models, but key differences remain. We’ve included a detailed comparison chart below to make it easier to see where they separate.
Gaming (both models): Gaming Hub cloud gaming (Xbox, NVIDIA GeForce Now, Luna, Blacknut, Antstream, Boosteroid), ALLM (Auto Low Latency Mode), HGIG, AI Auto Game Mode, Gaming Motion Plus, Super Ultra Wide Game View, Game Bar, Mini Map Zoom, AMD FreeSync Premium Pro, Hue Sync.

| Feature | M80H | M70H |
| --- | --- | --- |
| Mobile to TV, Sound Mirroring, Wireless TV On | Yes (TV initiates mirroring) | Yes |
| Multi-View | Up to 2 videos | – |
| Buds Auto Switch | Yes | – |
| Works with Apple AirPlay | Yes | Yes |
| Works with Google Cast | Yes | Yes |
| Daily+ | Yes | – |
| Now Brief | Yes (voice/user detection) | – |
| Workout Tracker | Yes | – |
| Audio | 2-channel speakers, 20 W; Object Tracking Sound (OTS) Lite; Q-Symphony; Active Voice Amplifier (AVA) Pro; Adaptive Sound Plus | 2-channel speakers, 20 W; Object Tracking Sound (OTS) Lite; Q-Symphony |
| Karaoke Mic | Yes | Yes |
| Multi-Control | Yes | – |
| Storage Share | Yes | – |
| Security | Knox Vault: N/A; Knox Security: Yes | Knox Vault: N/A; Knox Security: Yes |
| Remote Control | Bluetooth Simple Remote TM2280A (batteries included) | IR Simple Remote TM2240A (batteries included) |
The Bottom Line
Samsung’s 2026 Mini LED lineup sits in a very calculated middle ground. You’re getting the core benefit that actually matters: Mini LED backlighting for better contrast, brightness control, and more consistent HDR performance, without paying Neo QLED prices. Add in Tizen, Vision AI, and solid gaming support, and these don’t feel stripped down in daily use. For a lot of buyers, this is where the real value is.
What’s missing is just as important. No Quantum Dot layer means color accuracy and color volume won’t match Samsung’s Neo QLED models, and you’re not getting the full processing and refinement stack reserved for the higher tier. These are for buyers who want a meaningful step up from basic LED TVs without drifting into premium pricing. If you’re chasing reference-level performance, keep walking. If you want a well-equipped 4K Mini LED TV that covers the essentials and then some, this is the safer—and smarter—place to land.
Reddit is stepping up its fight against bots, and now your account could be asked to prove it is human if the platform detects fishy behavior.
Reddit CEO Steve Huffman says these checks will be rare, but they are meant to protect what makes Reddit work in the first place – real people talking to real people.
As AI-generated content spreads, Reddit admits it is getting harder to tell who is behind a post. So instead of broad crackdowns, it is focusing on suspicious behavior and adding clearer signals across the platform.
How Reddit plans to separate humans from bots
If Reddit detects signs of automation or unusual behavior, it may trigger a human verification check. This could involve simple actions, like passkeys or Face ID, that confirm a human is present.
In some cases, third-party biometric systems like Sam Altman’s World ID may be used. The platform may also use government-issued IDs in regions where laws require them. However, Reddit says that your identity will stay separate from your account.
The company is also standardizing labels for automated accounts. Approved bots will carry an [APP] tag, making it obvious you are interacting with software. Developers will need to register their tools to get this label, which adds a layer of transparency.
Since Reddit says this is not a sitewide verification system, most users might never be asked to prove anything. Even when such checks take place, the focus will be on confirming a human exists, not identifying who that person is.
At the same time, the platform will continue removing harmful bots at scale, already taking down around 100,000 accounts daily. It is also improving reporting tools so users can flag suspicious activity more easily.
Reddit is not banning AI-written posts outright, but it is drawing a firm line. For now, the platform cares less about how content is written and more about who is behind it.
A new info-stealing malware called Torg Grabber is stealing sensitive data from 850 browser extensions, more than 700 of them for cryptocurrency wallets.
Initial access is obtained through the ClickFix technique by hijacking the clipboard and tricking the user into executing a malicious PowerShell command.
According to researchers at cybersecurity company Gen Digital, Torg Grabber is actively developed, with 334 unique samples compiled in three months (between December 2025 and February 2026) and new command-and-control (C2) servers registered every week.
Apart from cryptocurrency wallets, Torg Grabber steals data from 103 password managers and two-factor authentication tools, and 19 note-taking apps.
Rapid evolution
In a technical report this week, Gen Digital researchers say that Torg Grabber’s initial builds exfiltrated data first over a Telegram-based channel and then over a custom, encrypted TCP protocol.
On December 18, 2025, the two mechanisms were abandoned in favor of an HTTPS connection routed through Cloudflare infrastructure. The method supports chunked data uploads and payload delivery.
Torg Grabber’s development timeline (Source: Gen Digital)
The malware features several anti-analysis mechanisms, multi-layered obfuscation, and uses direct syscalls and reflective loading for evasion, running the final payload entirely in memory.
Alongside the stealer, the researchers also discovered a standalone tool called Underground, used for extracting browser data.
It injects a DLL reflectively into the browser to access Chrome’s COM Elevation Service and extract the master encryption key, a method also recently seen in VoidStealer.
Extensive data theft capabilities
Gen Digital found that Torg Grabber targets 25 Chromium-based browsers and 8 Firefox variants, trying to steal credentials, cookies, and autofill data.
Of the 850 browser extensions it targets, 728 are for cryptocurrency wallets, covering “essentially every crypto wallet ever conceived by human optimism.”
“The marquee names are all there – MetaMask, Phantom, TrustWallet, Coinbase, Binance, Exodus, TronLink, Ronin, OKX, Keplr, Rabby, Sui, Solflare,” the researchers say.
“But the list doesn’t stop at the big names. It keeps going, deep into the long tail, past projects with install counts you could fit in a phone booth.”
Apart from wallets, the malware also targets a large list of 103 extensions for passwords, tokens, and authenticators: LastPass, 1Password, Bitwarden, KeePass, NordPass, Dashlane, ProtonPass, Enpass, Psono, Pleasant Password Server, heylogin, 2FAAuth, GAuth, TOTP Authenticator, and Akamai MFA.
Torg Grabber also targets information from Discord, Telegram, Steam, VPN apps, FTP apps, email clients, password managers, and desktop cryptocurrency wallet apps.
The malware can also profile the host, create a hardware fingerprint, document installed software (including 24 antivirus tools), take screenshots of the user’s desktop, and steal files from the Desktop/Documents folders.
Also notable is its capability to execute shellcode on the compromised device, delivered in ChaCha-encrypted zlib-compressed form from the C2.
Gen Digital cautions that Torg Grabber continues to develop rapidly, registering new C2 domains weekly, and that its operator base is expanding, with 40 tags documented by the time of analysis.
The first of six autonomous public buses has reached Singapore and will be tested on routes 400 in Marina Bay and 191 in one-north from the second half of 2026 as part of a three-year pilot programme.
In a Facebook video on Mar 25, the Singapore Land Transport Authority (LTA) said that the six buses will be rigorously tested to ensure that they meet all safety and operating requirements before they hit the roads.
When ready, the self-driving public buses will operate alongside existing manned buses, allowing LTA to maintain routes with lower ridership or introduce new services that are currently difficult to introduce due to manpower constraints.
In its Facebook video, LTA offered a glimpse of the driverless bus interior. (Screengrab from LTA)
The video from LTA revealed a 16-seater bus with features that resemble those of existing public buses. It also showed a space designated for a wheelchair.
Cameras and sensors are seen mounted on the front, rear and top of the autonomous bus, providing operators with a 360-degree view of the surroundings.
LTA noted that further preparation is needed before testing begins.
The tests will include LTA’s closed-circuit assessment, consisting of basic manoeuvres and safe passenger boarding and alighting at all designated stops.
Service 400 connects Marina Bay and Shenton Way, stopping at Marina Bay Cruise Centre, Gardens by the Bay, Shenton Way and Downtown MRT stations.
Service 191, meanwhile, loops through one-north, with stops at Buona Vista bus terminal, one-north MRT, and Buona Vista MRT.
Following this deployment, LTA may procure up to 14 additional autonomous buses and expand the pilot to more public bus services.
LTA first teased the launch of driverless public buses last October, when it awarded a contract for the pilot deployment of autonomous buses to a consortium of MKX Technologies Pte Ltd, Zhidao Network Technology (Beijing) Co. Ltd and BYD (Singapore) Pte Ltd, for a contract sum of around S$8.14 million.
The consortium will also work with the Singapore Bus Academy to train existing bus captains to take on new roles as safety operators, so that they are equipped to operate the autonomous buses competently and confidently.
New submitter haroldbasset writes: Canada’s Immigration Department rejected an applicant because the duties of her current job did not match the Canadian work experience she had claimed, but the Department’s AI assistant had invented that work experience. She has been working in Canada as a health scientist — she has a Ph.D. in the immunology of aging — but the AI genius instead described her as “wiring and assembling control circuits, building control and robot panels, programming and troubleshooting.” “It’s believed to be the first time that the department explicitly referred to the use of generative AI to support application processing in immigration refusals,” reports the Toronto Star. “The disclaimer also noted that all generated content was verified by an officer and that generative AI was not used to make or recommend a decision.”
The applicant’s lawyer was shocked, wondering “how any human being could make this decision.” “Somehow, it hallucinated my client’s job description,” he said. “I would love to see what the officer saw. Something seriously went wrong here.”
The applicant’s refusal came just as Canada’s Immigration Department released its first AI strategy, which frames artificial intelligence as a way to improve efficiency, service delivery, and program integrity. The department says it has long used digital tools like analytics and automation to flag fraud risks and triage applications, and is now also experimenting with generative AI for tasks such as research, summarizing, and analysis. In this case, however, the department insisted the decision was made by a human officer and that generative AI was not involved in the final decision.