
Tech

NYT Connections hints and answers for Sunday, May 10 (game #1064)

Looking for a different day?

A new NYT Connections puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Saturday’s puzzle instead then click here: NYT Connections hints and answers for Saturday, May 9 (game #1063).

Good morning! Let’s play Connections, the NYT’s clever word game that challenges you to group answers in various categories. It can be tough, so read on if you need Connections hints.


Tech

Running Your Own 3G Network


CDMA2000 was one of the protocols defined for 3G networks and is now years out of date and being phased out worldwide. Nevertheless, there are still vast numbers of phones that will happily connect to it, creating an opportunity for hackers seeking to run their own cellular networks. [Chrismoos] recently made this endeavour significantly easier by releasing 1xBTS, a Rust implementation of the lower three layers of a CDMA2000 network.

The lowest layer of the stack is an SDR for the actual radio communications. It’s been tested with the USRP B200 and B210, the LimeSDR Mini 2, and the BladeRF Micro 2.0. The code might work with certain other SDRs using the SoapySDR abstraction layer. The SDR is controlled by the base station (BTS) software, which, in turn, is controlled by the base station controller (BSC) over an Abis link. The BSC manages channels and mobile device associations, and exchanges frames with the mobile switching center (MSC), which handles message switching.

The stack includes standard 3G verification; before a handset can authenticate to the network, its details must be added to the home location register (HLR). Once authenticated, the handset can access all standard services: inbound and outbound voice calls via a SIP gateway, inbound and outbound SMS, and data packet transfers. A web dashboard provides a convenient management platform that includes packet tracing.
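The register-before-authenticate flow can be pictured as a lookup table keyed by subscriber identity. The sketch below is purely conceptual; the class and method names are invented for illustration and are not part of 1xBTS.

```python
# Conceptual sketch of a home location register (HLR): a handset must be
# provisioned before it can authenticate. Names and the IMSI/key scheme here
# are illustrative only, not the 1xBTS API.

class HLR:
    def __init__(self):
        self._subscribers = {}  # IMSI -> shared secret

    def provision(self, imsi, key):
        """Add a handset's details to the register."""
        self._subscribers[imsi] = key

    def authenticate(self, imsi, key):
        """A handset may join only if its IMSI and key match the register."""
        return self._subscribers.get(imsi) == key

hlr = HLR()
hlr.provision("310150123456789", "secret-k1")
print(hlr.authenticate("310150123456789", "secret-k1"))  # True: provisioned
print(hlr.authenticate("310150999999999", "secret-k1"))  # False: unknown IMSI
```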


It should be noted that using this carelessly is legally hazardous; radio transmissions are strictly regulated in most countries, particularly in the cellular bands. If you’d still like to run your own cell network, we’ve also seen a few other efforts, such as this 4G implementation, this 1G recreation, and a GSM network made for a hacker camp.


Tech

Fireteam Elite 2, Arriving This Summer

If the co-op alien-blasting action of 2021’s Aliens: Fireteam Elite was your cup of tea, we have good news: A sequel is on the way. The second installment will expand the cap to four players while adding new classes and weapons.

Aliens: Fireteam Elite 2 doesn’t appear to reinvent the wheel from its arcade-like predecessor. “Xenomorphs stalk the corridors, ambush from the shadows, and swarm in overwhelming numbers,” the shooter’s announcement reads. If that isn’t (also) a description of the first game, I don’t know what is.


Of course, there are upgrades, and not only its sharper-looking graphics. You can play with a larger squad: Four players can take aim at alien scum, up from three in the original. Pathogens and Weyland-Yutani combat synthetics will pose new obstacles. There’s even a new build-your-own Specialist class that should add more versatility. Developer Cold Iron Studios also promises deeper squad mechanics and a wider selection of weapons to use across classes.

The title will take you through “immersive new environments across the Aliens universe” as we approach the 40th anniversary(!!) of James Cameron’s 1986 blockbuster this July. That upcoming milestone may also have contributed to the recent release of the first Alien: Isolation sequel teaser.

Aliens: Fireteam Elite 2 is scheduled to arrive “this summer.” It will be available on PS5, Xbox Series X/S, and PC (Steam and Epic).


Tech

Auto Enthusiast Carves Functional Two-Stroke Engine from Solid Metal

Homemade DIY Billet Aluminum Two-Stroke Engine
Camden Bowen has spent years chasing the perfect two-stroke engine built entirely from scratch. His earlier versions came from a 3D printer, then from parts picked up at the hardware store. Each one taught him something new about what works and what falls short once an engine actually fires. For his latest project he set a higher bar and machined the whole thing from billet aluminum on a basic mill and lathe.



He started with a clean design for a single-cylinder two-stroke engine. The crankcase was divided into two bolted sections, so everything fit together without the frustration of squeezing parts into a tight spot. That decision alone likely saved hours in the final assembly. Every key component began life as a length of aluminum bar or plate. Bowen carefully turned and machined them down with great patience, shaping the cylinder, crankcase, and mounting points until they all fit together like precision puzzle pieces.

The rotating assembly, however, required a great deal of attention. He machined a crank pin and crank webs from steel, then riveted the plates onto a solid shaft to form the crankshaft. The connecting rod was made from a half-inch-thick steel bar. He bored the holes with a 95mm offset and made the rod slightly longer than typical to reduce side loads on the piston walls. Rather than set up a complex rotary fixture on the mill table, Bowen held the rod in a vise and shaped the ends by hand. The result was clean enough to make you forget he had to improvise.

He then turned the flywheel from a 4-inch steel disk. One end of the crankshaft carried a five-degree taper that matched a corresponding bore in the flywheel. A keyway located the ignition cam, and pinned joints kept everything aligned. Manually threading the shaft ends was a nightmare; the dies would bite unevenly, leaving a few threads badly damaged. Bowen pressed the parts together, added a couple of tack welds where full welds would have blocked access later, and called it good.

Camden made the piston by turning an aluminum blank to size and cutting two ring grooves at the top. The wrist pin holes were reamed for a tight 12mm press fit. The cylinder proved a challenge: he first tried to machine it from a square cast block on the lathe but snapped an end mill, destroying the piece. So he switched to round stock, drilled the bore straight through the center, and pressed in a cast iron sleeve, which gave the steel rings a harder surface to ride on without the need for further lubrication.

After all of the machining was completed, assembly felt straightforward. Bowen dropped the crankshaft into the split crankcase, placed the connecting rod onto the pin, and lowered the piston into the sleeved cylinder. A few bolts held the parts together. He then installed a 12-volt coil pack and built a rudimentary points setup on a rotatable bracket so the timing could be adjusted by hand; no electronics or sensors required, just a basic spark at the proper time.

Pressure testing came first: with the cylinder tightened down, he cranked the piston until the gauge read roughly 150 pounds per square inch, indicating that the rings and seals were holding properly. He then mixed a batch of gasoline with a little extra oil to be safe, filled the tank, and hand-spun the flywheel to prime the crankcase.

On the first pull, the engine fired right away, starting smoothly and quickly settling into a strong, loud idle. The indoor run was cut short by carbon monoxide, so Bowen moved the whole setup outside to let it run longer. Later review of the footage revealed that the flywheel wobbled slightly, most likely because of one of those hand-cut threads, but the engine kept running without missing a beat. Everything else checked out fine.


Tech

So you’ve heard these AI terms and nodded along; let’s fix that


Artificial intelligence is changing the world, and simultaneously inventing a whole new language to describe how it’s doing it. Spend five minutes reading about AI and you’ll run into LLMs, RAG, RLHF, and a dozen other terms that can make even very smart people in the tech world feel insecure. This glossary is our attempt to fix that. We update it regularly as the field evolves, so consider it a living document, much like the AI systems it describes.


Artificial general intelligence, or AGI, is a nebulous term. But it generally refers to AI that’s more capable than the average human at many, if not most, tasks. OpenAI CEO Sam Altman once described AGI as the “equivalent of a median human that you could hire as a co-worker.” Meanwhile, OpenAI’s charter defines AGI as “highly autonomous systems that outperform humans at most economically valuable work.” Google DeepMind’s understanding differs slightly from these two definitions; the lab views AGI as “AI that’s at least as capable as humans at most cognitive tasks.” Confused? Not to worry — so are experts at the forefront of AI research.

An AI agent refers to a tool that uses AI technologies to perform a series of tasks on your behalf — beyond what a more basic AI chatbot could do — such as filing expenses, booking tickets or a table at a restaurant, or even writing and maintaining code. However, as we’ve explained before, there are lots of moving pieces in this emergent space, so “AI agent” might mean different things to different people. Infrastructure is also still being built out to deliver on its envisaged capabilities. But the basic concept implies an autonomous system that may draw on multiple AI systems to carry out multistep tasks.

Think of API endpoints as “buttons” on the back of a piece of software that other programs can press to make it do things. Developers use these interfaces to build integrations — for example, allowing one application to pull data from another, or enabling an AI agent to control third-party services directly without a human manually operating each interface. Most smart home devices and connected platforms have these hidden buttons available, even if ordinary users never see or interact with them. As AI agents grow more capable, they are increasingly able to find and use these endpoints on their own, opening up powerful — and sometimes unexpected — possibilities for automation.
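The "buttons" metaphor can be made concrete with a toy registry of named actions that another program can discover and press. Everything here is hypothetical; real endpoints are HTTP routes or RPC calls rather than an in-process dictionary.

```python
# Toy illustration of API endpoints as "buttons": a registry of named actions
# that another program (e.g., an AI agent) can discover and invoke.
# All endpoint names and behaviors here are invented for illustration.

ENDPOINTS = {
    "lights/on":  lambda room: f"lights on in {room}",
    "lights/off": lambda room: f"lights off in {room}",
}

def call_endpoint(name, *args):
    """Press one of the hidden buttons by name."""
    if name not in ENDPOINTS:
        raise KeyError(f"no such endpoint: {name}")
    return ENDPOINTS[name](*args)

# An agent can list the available "buttons" and press one:
print(sorted(ENDPOINTS))                      # discover what's available
print(call_endpoint("lights/on", "kitchen"))  # invoke an action
```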


Given a simple question, a human brain can answer without even thinking too much about it — things like “which animal is taller, a giraffe or a cat?” But in many cases, you often need a pen and paper to come up with the right answer because there are intermediary steps. For instance, if a farmer has chickens and cows, and together they have 40 heads and 120 legs, you might need to write down a simple equation to come up with the answer (20 chickens and 20 cows).
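The farmer puzzle above can be solved by writing out the intermediate steps explicitly, which is exactly the kind of decomposition chain-of-thought prompting encourages:

```python
# Worked version of the farmer puzzle: chickens (2 legs) and cows (4 legs),
# 40 heads and 120 legs in total.
heads, legs = 40, 120

# Step 1: if all 40 animals were chickens, there would be 2 * 40 = 80 legs.
# Step 2: each cow contributes 2 extra legs, so the leg surplus counts the cows.
cows = (legs - 2 * heads) // 2
chickens = heads - cows

print(chickens, cows)  # 20 20
```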

In an AI context, chain-of-thought reasoning for large language models means breaking down a problem into smaller, intermediate steps to improve the quality of the end result. It usually takes longer to get an answer, but the answer is more likely to be correct, especially in a logic or coding context. Reasoning models are developed from traditional large language models and optimized for chain-of-thought thinking thanks to reinforcement learning.

(See: Large language model)


This is a more specific concept than an “AI agent,” which means a program that can take actions on its own, step by step, to complete a goal. A coding agent is a specialized version applied to software development. Rather than simply suggesting code for a human to review and paste in, a coding agent can write, test, and debug code autonomously, handling the kind of iterative, trial-and-error work that typically consumes a developer’s day. These agents can operate across entire codebases, spotting bugs, running tests, and pushing fixes with minimal human oversight. Think of it like hiring a very fast intern who never sleeps and never loses focus — though, as with any intern, a human still needs to review the work.


Although somewhat of a multivalent term, compute generally refers to the vital computational power that allows AI models to operate. This type of processing fuels the AI industry, giving it the ability to train and deploy its powerful models. The term is often a shorthand for the kinds of hardware that provide the computational power — things like GPUs, CPUs, TPUs, and other forms of infrastructure that form the bedrock of the modern AI industry.

A subset of self-improving machine learning in which AI algorithms are designed with a multi-layered, artificial neural network (ANN) structure. This allows them to make more complex correlations compared to simpler machine learning-based systems, such as linear models or decision trees. The structure of deep learning algorithms draws inspiration from the interconnected pathways of neurons in the human brain.

Deep learning AI models are able to identify important characteristics in data themselves, rather than requiring human engineers to define these features. The structure also supports algorithms that can learn from errors and, through a process of repetition and adjustment, improve their own outputs. However, deep learning systems require a lot of data points to yield good results (millions or more). They also typically take longer to train compared to simpler machine learning algorithms — so development costs tend to be higher.

(See: Neural network)
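The "multi-layered" structure can be sketched in a few lines: each layer takes a weighted sum of its inputs and applies a nonlinearity, and layers feed into one another. The weights below are hand-picked for illustration; real networks learn them from data.

```python
# Minimal two-layer neural network forward pass in plain Python.
# Weights and biases are invented for illustration, not learned.

def relu(x):
    """A common nonlinearity: pass positives through, zero out negatives."""
    return max(0.0, x)

def dense(inputs, weights, biases):
    """One fully connected layer: weighted sum of inputs plus bias, per neuron."""
    return [relu(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [1.0, 2.0]
hidden = dense(x, weights=[[0.5, -0.2], [0.3, 0.8]], biases=[0.1, 0.0])  # layer 1
output = dense(hidden, weights=[[1.0, -1.0]], biases=[0.0])              # layer 2
print(output)
```

Stacking more layers like this is what lets deep networks capture correlations a single linear model cannot.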


Diffusion is the tech at the heart of many art-, music-, and text-generating AI models. Inspired by physics, diffusion systems slowly “destroy” the structure of data — for example, photos, songs, and so on — by adding noise until there’s nothing left. In physics, diffusion is spontaneous and irreversible — sugar diffused in coffee can’t be restored to cube form. But diffusion systems in AI aim to learn a sort of “reverse diffusion” process to restore the destroyed data, gaining the ability to recover the data from noise.
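The "slowly destroy" part can be seen in the noise schedule itself. Below is a minimal sketch using a linear schedule (a common illustrative choice, not any specific model's): the fraction of the original signal that survives shrinks toward zero as noise accumulates step by step.

```python
# Sketch of a diffusion forward process schedule: as noise is added step by
# step, the share of the original data that remains (sqrt of the cumulative
# "alpha") decays toward zero. Schedule values are illustrative.
import math

T = 1000
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]  # noise per step

signal = []        # how much of the original data survives at each step
alpha_bar = 1.0
for beta in betas:
    alpha_bar *= (1.0 - beta)
    signal.append(math.sqrt(alpha_bar))

print(round(signal[0], 5), round(signal[-1], 5))  # starts near 1, ends near 0
```

A trained diffusion model learns to run this process in reverse, recovering data from the near-pure noise at the final step.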

Distillation is a technique used to extract knowledge from a large AI model using a ‘teacher-student’ setup. Developers send requests to a teacher model and record the outputs. Answers are sometimes compared with a dataset to see how accurate they are. These outputs are then used to train a student model that approximates the teacher’s behavior.

Distillation can be used to create a smaller, more efficient model based on a larger model with a minimal distillation loss. This is likely how OpenAI developed GPT-4 Turbo, a faster version of GPT-4.

While AI companies use distillation internally, some may also have used it to catch up with competitors’ frontier models. Distillation from a competitor usually violates the terms of service of AI APIs and chat assistants.
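The teacher-student idea can be shown with a deliberately tiny stand-in: here the "teacher" is just a fixed function, and the "student" is a straight line fitted only to the teacher's recorded outputs. Both models are invented for illustration.

```python
# Toy distillation: a "student" is trained solely on a "teacher's" outputs.
# The teacher stands in for a large model; the student is a least-squares line.

def teacher(x):
    return 2.0 * x + 1.0  # placeholder for a large model's behavior

# 1. Query the teacher and record its outputs.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [teacher(x) for x in xs]

# 2. Fit the student to approximate the teacher (closed-form least squares).
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

print(slope, intercept)  # the student recovers the teacher's behavior: 2.0 1.0
```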


This refers to the further training of an AI model to optimize performance for a more specific task or area than was previously a focal point of its training — typically by feeding in new, specialized (i.e., task-oriented) data. 

Many AI startups are taking large language models as a starting point to build a commercial product but are vying to amp up utility for a target sector or task by supplementing earlier training cycles with fine-tuning based on their own domain-specific knowledge and expertise.

(See: Large language model [LLM])

A GAN, or Generative Adversarial Network, is a type of machine learning framework that underpins some important developments in generative AI when it comes to producing realistic data — including (but not only) deepfake tools. GANs involve the use of a pair of neural networks, one of which draws on its training data to generate an output that is passed to the other model to evaluate.


The two models are essentially programmed to try to outdo each other. The generator is trying to get its output past the discriminator, while the discriminator is working to spot artificially generated data. This structured contest can optimize AI outputs to be more realistic without the need for additional human intervention. GANs work best for narrower applications, such as producing realistic photos or videos, rather than general-purpose AI.

Hallucination is the AI industry’s preferred term for AI models making stuff up – literally generating information that is incorrect. Obviously, it’s a huge problem for AI quality. 

Hallucinations produce GenAI outputs that can be misleading and could even lead to real-life risks — with potentially dangerous consequences (think of a health query that returns harmful medical advice).

The problem of AIs fabricating information is thought to arise as a consequence of gaps in training data. Hallucinations are contributing to a push toward increasingly specialized and/or vertical AI models — i.e. domain-specific AIs that require narrower expertise – as a way to reduce the likelihood of knowledge gaps and shrink disinformation risks.


Inference is the process of running an AI model: setting a trained model loose to make predictions or draw conclusions from new inputs. To be clear, inference can’t happen without training; a model must first learn patterns in a set of data before it can effectively extrapolate beyond that training data.

Many types of hardware can perform inference, ranging from smartphone processors to beefy GPUs to custom-designed AI accelerators. But not all of them can run models equally well. Very large models would take ages to make predictions on, say, a laptop versus a cloud server with high-end AI chips.

[See: Training]

Large language models, or LLMs, are the AI models used by popular AI assistants, such as ChatGPT, Claude, Google’s Gemini, Meta’s Llama, Microsoft Copilot, or Mistral’s Le Chat. When you chat with an AI assistant, you interact with a large language model that processes your request directly or with the help of different available tools, such as web browsing or code interpreters.


LLMs are deep neural networks made of billions of numerical parameters (or weights, see below) that learn the relationships between words and phrases and create a representation of language, a sort of multidimensional map of words.

These models are created from encoding the patterns they find in billions of books, articles, and transcripts. When you prompt an LLM, the model generates the most likely pattern that fits the prompt.

(See: Neural network)
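A drastically simplified stand-in makes "generates the most likely pattern" concrete: count which word follows which in a tiny corpus, then pick the most frequent continuation. Real LLMs learn billions of weights instead of a count table, but the spirit is the same.

```python
# A toy "language model": learn word-to-next-word statistics from a corpus,
# then generate the most likely continuation. Purely illustrative.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ran".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1          # count each observed continuation

def next_word(word):
    """Return the most likely continuation seen in training."""
    return following[word].most_common(1)[0][0]

print(next_word("the"))  # 'cat' -- seen twice after 'the', vs 'mat' once
```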

Memory cache refers to an important process that boosts inference (which is the process by which AI works to generate a response to a user’s query). In essence, caching is an optimization technique, designed to make inference more efficient. AI is obviously driven by high-octane mathematical calculations and every time those calculations are made, they use up more power. Caching is designed to cut down on the number of calculations a model might have to run by saving particular calculations for future user queries and operations. There are different kinds of memory caching, although one of the more well-known is KV (or key value) caching. KV caching works in transformer-based models, and increases efficiency, driving faster results by reducing the amount of time (and algorithmic labor) it takes to generate answers to user questions.   


(See: Inference)  
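The saving-calculations idea behind KV caching can be sketched with a counter. The per-token computation below is a placeholder for a transformer's key/value projections; the cache is keyed by position and assumes the prefix is unchanged, which is the situation KV caching exploits during generation.

```python
# Toy KV cache: per-token "key/value" results are saved so that extending a
# sequence does not redo work for the prefix. compute_kv is a stand-in for a
# transformer's per-token projections; all names are illustrative.

compute_calls = 0

def compute_kv(token):
    global compute_calls
    compute_calls += 1                       # count the "expensive" work
    return (hash(token), hash(token) * 2)    # placeholder key and value

cache = {}

def kv_for(tokens):
    """Return key/value pairs, computing only positions not already cached."""
    out = []
    for i, tok in enumerate(tokens):
        if i not in cache:
            cache[i] = compute_kv(tok)
        out.append(cache[i])
    return out

kv_for(["the", "cat", "sat"])          # 3 computations
kv_for(["the", "cat", "sat", "on"])    # only 1 more: the prefix is reused
print(compute_calls)  # 4
```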

A neural network refers to the multi-layered algorithmic structure that underpins deep learning — and, more broadly, the whole boom in generative AI tools following the emergence of large language models. 

Although the idea of taking inspiration from the densely interconnected pathways of the human brain as a design structure for data processing algorithms dates all the way back to the 1940s, it was the much more recent rise of graphics processing units (GPUs) — via the video game industry — that really unlocked the power of this theory. These chips proved well suited to training algorithms with many more layers than was possible in earlier epochs — enabling neural network-based AI systems to achieve far better performance across many domains, including voice recognition, autonomous navigation, and drug discovery.

(See: Large language model [LLM])


Open source refers to software — or, increasingly, AI models — where the underlying code is made publicly available for anyone to use, inspect, or modify. In the AI world, Meta’s Llama family of models is a prominent example; Linux is the famous historical parallel in operating systems. Open source approaches allow researchers, developers, and companies around the world to build on top of one another’s work, accelerating progress and enabling independent safety audits that closed systems cannot easily provide. Closed source means the code is private — you can use the product but not see how it works, as is the case with OpenAI’s GPT models — a distinction that has become one of the defining debates in the AI industry.

Parallelization means doing many things at the same time instead of one after another — like having 10 employees working on different parts of a project at the same time instead of one employee doing everything sequentially. In AI, parallelization is fundamental to both training and inference: modern GPUs are specifically designed to perform thousands of calculations in parallel, which is a big reason why they became the hardware backbone of the industry. As AI systems grow more complex and models grow larger, the ability to parallelize work across many chips and many machines has become one of the most important factors in determining how quickly and cost-effectively models can be built and deployed. Research into better parallelization strategies is now a field of study in its own right.
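The ten-employees analogy maps directly onto code: split the job into chunks and hand them to a pool of workers. This is a minimal sketch using Python's standard thread pool; the squared-sum is a stand-in for any expensive per-chunk computation.

```python
# Parallelization sketch: split a job into chunks and process them with a pool
# of workers, like ten employees each taking part of a project.
from concurrent.futures import ThreadPoolExecutor

def work(chunk):
    return sum(x * x for x in chunk)  # stand-in for an expensive computation

data = list(range(1000))
chunks = [data[i:i + 100] for i in range(0, len(data), 100)]  # 10 chunks

with ThreadPoolExecutor(max_workers=10) as pool:
    parallel_total = sum(pool.map(work, chunks))

# Same answer as doing it sequentially -- just spread across workers.
print(parallel_total)
```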

RAMageddon is the fun new term for a not-so-fun trend that is sweeping the tech industry: an ever-increasing shortage of random access memory, or RAM chips, which power pretty much all the tech products we use in our daily lives. As the AI industry has blossomed, the biggest tech companies and AI labs — all vying to have the most powerful and efficient AI — are buying so much RAM to power their data centers that there’s not much left for the rest of us. And that supply bottleneck means that what’s left is getting more and more expensive.

That includes industries like gaming (where major companies have had to raise prices on consoles because it’s harder to find memory chips for their devices), consumer electronics (where memory shortage could cause the biggest dip in smartphone shipments in more than a decade), and general enterprise computing (because those companies can’t get enough RAM for their own data centers). The surge in prices is only expected to stop after the dreaded shortage ends but, unfortunately, there’s not really much of a sign that’s going to happen anytime soon.  


Reinforcement learning is a way of training AI where a system learns by trying things and receiving rewards for correct answers — like training your beloved pet with treats, except the “pet” in this scenario is a neural network and the “treat” is a mathematical signal indicating success. Unlike supervised learning, where a model is trained on a fixed dataset of labeled examples, reinforcement learning lets a model explore its environment, take actions, and continuously update its behavior based on the feedback it receives. This approach has proven especially powerful for training AI to play games, control robots, and, more recently, sharpen the reasoning ability of large language models. Techniques like reinforcement learning from human feedback, or RLHF, are now central to how leading AI labs fine-tune their models to be more helpful, accurate, and safe.
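The try-things-and-get-treats loop can be shown in miniature. Below, an agent samples two actions, receives a reward for each, and nudges its value estimates toward what it observed; rewards are fixed constants here so the example is deterministic, whereas real environments are noisy.

```python
# Reinforcement learning in miniature: the agent's value estimates move toward
# the rewards it receives, and it ends up preferring the better action.
# The environment and payoffs are invented for illustration.

rewards = {"left": 0.0, "right": 1.0}   # the environment's hidden payoffs
value = {"left": 0.0, "right": 0.0}     # the agent's learned estimates
alpha = 0.5                             # learning rate

# Exploration phase: try every action several times, updating estimates.
for _ in range(5):
    for action in ("left", "right"):
        reward = rewards[action]                        # the "treat"
        value[action] += alpha * (reward - value[action])  # move toward reward

best = max(value, key=value.get)
print(best, round(value["right"], 3))  # right 0.969
```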

When it comes to human-machine communication, there are some obvious challenges — people communicate using human language, while AI programs execute tasks through complex algorithmic processes informed by data. Tokens bridge that gap: they are the basic building blocks of human-AI communication, representing discrete segments of data that have been processed or produced by an LLM. They are created through a process called tokenization, which breaks down raw text into bite-sized units a language model can digest, similar to how a compiler translates human language into binary code a computer can understand. In enterprise settings, tokens also determine cost — most AI companies charge for LLM usage on a per-token basis, meaning the more a business uses, the more it pays.
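Tokenization and per-token billing can be sketched together. Real tokenizers split text into subword units rather than whole words, and real prices vary by provider; the whitespace splitter and the price below are stand-ins.

```python
# Toy tokenization plus per-token billing. Whitespace splitting stands in for
# a real subword tokenizer; the price is hypothetical.

def tokenize(text):
    return text.lower().split()

PRICE_PER_1K_TOKENS = 0.50  # hypothetical dollars per 1,000 tokens

prompt = "Summarize the quarterly report for the sales team"
tokens = tokenize(prompt)
cost = len(tokens) / 1000 * PRICE_PER_1K_TOKENS

print(len(tokens), f"${cost:.4f}")  # 8 $0.0040
```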

To recap, tokens are the small chunks of text — often parts of words rather than whole ones — that AI language models break language into before processing it; they are roughly analogous to “words” for the purposes of understanding AI workloads. Throughput refers to how much can be processed in a given period of time, so token throughput is essentially a measure of how much AI work a system can handle at once. High token throughput is a key goal for AI infrastructure teams, since it determines how many users a model can serve simultaneously and how quickly each of them receives a response. AI researcher Andrej Karpathy has described feeling anxious when his AI subscriptions sit idle — echoing the feeling he had as a grad student when expensive computer hardware wasn’t being fully utilized — a sentiment that captures why maximizing token throughput has become something of an obsession in the field.

Developing machine learning AIs involves a process known as training. In simple terms, it means feeding data into the model so it can learn from patterns and generate useful outputs. Essentially, it’s the process of the system responding to characteristics in the data in a way that lets it adapt its outputs toward a sought-for goal — whether that’s identifying images of cats or producing a haiku on demand.


Training can be expensive because it requires lots of inputs, and the volumes required have been trending upwards — which is why hybrid approaches, such as fine-tuning a rules-based AI with targeted data, can help manage costs without starting entirely from scratch.

[See: Inference]
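Training in miniature looks like this: repeatedly nudge a model's parameter against its error until outputs match the data. The single-weight model and dataset below (encoding the rule y = 3x) are invented for illustration.

```python
# Gradient descent on a one-weight model: repetition and adjustment until the
# model's output matches the data. Dataset encodes y = 3x.

data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
w = 0.0          # initial guess
lr = 0.02        # learning rate

for _ in range(200):                 # repetition...
    for x, y in data:
        error = w * x - y            # how wrong the current output is
        w -= lr * error * x          # ...and adjustment against the error

print(round(w, 4))  # close to 3.0 -- the model has "learned" the pattern
```

Once training is done, inference is just reusing the learned weight: `w * 5.0` predicts the output for a new input, with no further learning.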

A technique where a previously trained AI model is used as the starting point for developing a new model for a different but typically related task – allowing knowledge gained in previous training cycles to be reapplied. 

Transfer learning can drive efficiency savings by shortcutting model development. It can also be useful when data for the task that the model is being developed for is somewhat limited. But it’s important to note that the approach has limitations. Models that rely on transfer learning to gain generalized capabilities will likely require training on additional data in order to perform well in their domain of focus.


(See: Fine tuning)

Weights are core to AI training, as they determine how much importance (or weight) is given to different features (or input variables) in the data used for training the system — thereby shaping the AI model’s output. 

Put another way, weights are numerical parameters that define what’s most salient in a dataset for the given training task. They achieve their function by applying multiplication to inputs. Model training typically begins with weights that are randomly assigned, but as the process unfolds, the weights adjust as the model seeks to arrive at an output that more closely matches the target.

For example, an AI model for predicting housing prices that’s trained on historical real estate data for a target location could include weights for features such as the number of bedrooms and bathrooms, whether a property is detached or semi-detached, whether it has parking, a garage, and so on. 


Ultimately, the weights the model attaches to each of these inputs reflect how much they influence the value of a property, based on the given dataset.
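The housing example reduces to arithmetic: each weight multiplies its feature, and the weighted sum (plus a base value) is the prediction. The weight values below are invented for illustration, not learned from real data.

```python
# The housing example as arithmetic: a linear model's weights multiply each
# input feature, and the weighted sum plus a base value is the predicted price.
# All numbers are illustrative.

features = {"bedrooms": 3, "bathrooms": 2, "has_parking": 1}
weights  = {"bedrooms": 25000, "bathrooms": 15000, "has_parking": 10000}
base_price = 100000

price = base_price + sum(weights[f] * v for f, v in features.items())
print(price)  # 215000 -- each feature's influence is set by its weight
```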

Validation loss is a number that tells you how well an AI model is learning during training — and lower is better. Researchers track it closely as a kind of real-time report card, using it to decide when to stop training, when to adjust hyperparameters, or whether to investigate a potential problem. One of the key concerns it helps flag is overfitting, a condition in which a model memorizes its training data rather than truly learning patterns it can generalize to new situations. Think of it as the difference between a student who genuinely understands the material and one who simply memorized last year’s exam — validation loss helps reveal which one your model is becoming.
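The report-card role of validation loss is easiest to see with two curves side by side. The loss values below are synthetic, shaped to show the classic overfitting pattern: training loss keeps falling while validation loss bottoms out and rises.

```python
# Validation loss as an overfitting alarm. The curves are synthetic: training
# loss keeps improving, but validation loss turns upward partway through.

train_loss = [1.0, 0.7, 0.5, 0.35, 0.25, 0.18, 0.12, 0.08]
val_loss   = [1.1, 0.8, 0.6, 0.50, 0.45, 0.47, 0.52, 0.60]

best_epoch = min(range(len(val_loss)), key=val_loss.__getitem__)
overfitting = val_loss[-1] > val_loss[best_epoch]

print(best_epoch, overfitting)  # 4 True -- stop training around epoch 4
```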

This article is updated regularly with new information.



Tech

Beatbot’s Anniversary Campaign Turns Pool Cleaning Into a Smarter Buying Decision


Pool ownership has always come with a predictable trade-off. You get the luxury of a private outdoor space, but you also inherit a maintenance cycle that rarely stays contained. Even with robotic cleaners, the promise of automation has often been partial. Floors get cleaned, but surface debris still needs attention. Walls are covered, but corners and shallow zones are inconsistent. The effort reduces, but it doesn’t disappear.

Beatbot is trying to bridge that gap with a wide, clearly segmented lineup built around cordless pool cleaning, complete coverage, and less manual work across different types of pools and users. More importantly, the company is reinforcing that approach through its Anniversary Campaign, running from May 9 to May 25, where a mix of seasonal offers, best deals, and smart upgrades makes these systems more accessible without diluting their positioning. During this period, the entire product range will be available at attractive discounts, ranging from 6% to as high as 40%.

That pricing shifts the focus from cost to choice. Whether you are stepping in through an entry-level upgrade or moving toward a system that delivers reliable cleaning with minimal follow-up, the lineup becomes easier to navigate based on need rather than price alone. Each product carries a defined role and outcome, which reduces the need to compare features in isolation.

With that in place, the lineup begins to make more sense when looked at closely.

Sora 70: reducing pool care to a single, complete cycle

The Sora 70 is where Beatbot makes its strongest case for complete coverage as a practical, everyday benefit rather than a premium add-on. Built around 4-in-1 cleaning within a cordless pool cleaning system, it covers floor, walls, waterline, and surface in one continuous cycle, removing the need to treat surface debris as a separate task.

That shift matters in real usage. Leaves and floating particles are handled alongside structural cleaning, which reduces interruptions between cycles and cuts down on manual follow-up. With 6,800 GPH suction power, the system is designed to deliver reliable cleaning across both fine particles and heavier debris in a single run, avoiding the need for repeat cleaning.

The 10,000mAh battery supports extended runtimes of up to seven hours for surface cleaning or five hours for floor cleaning, making it suitable for larger residential pools. A 6L debris basket further reduces how often it needs to be emptied, keeping the experience closer to hands-free and reinforcing the idea of less manual work across the entire cleaning cycle.

Features like smart surface parking and shallow-platform accessibility ensure that the process does not break at the final step, making retrieval easier while covering areas like tanning ledges that are often overlooked.

Within the Anniversary Campaign, the Sora 70 is available at $1,149, down from $1,499, reflecting a 23% discount. For a system that brings together surface cleaning, full coverage, and extended runtime in a single device, that pricing shifts it from a considered upgrade to one of the more compelling buys in the current window.

Seen in that context, it stands out as a reliable cleaning solution and a practical choice for users looking to move toward full coverage without adding complexity to their routine.

Sora 30: the smart upgrade for stronger everyday cleaning

The Sora 30 is built for users who want more dependable, day-to-day cleaning without stepping into full-system automation. Its 3-in-1 cleaning across floor, walls, and waterline within a cordless pool cleaning system focuses on the areas that define routine maintenance, making it a more capable alternative to entry-level options.

Its strength lies in consistency. Dual roller brushes improve wall climbing and traction, while the filtration system captures both larger debris and finer particles in a single cycle. With up to five hours of runtime, it is capable of completing a full cleaning session for most residential pools without interruption, helping reduce manual effort across everyday cleaning.

Smart surface parking simplifies retrieval, and its ability to navigate shallow zones adds flexibility that is often missing in this segment. As part of the Anniversary pricing at $699, down from $999, reflecting a 30% discount, it sits in a space that feels like a smart upgrade rather than a compromise.

It delivers reliable cleaning with broader coverage while remaining a practical choice for everyday use, making it one of the more well-balanced options in the current campaign window.

Sora 10: an entry-level upgrade that replaces manual effort

The Sora 10 focuses on making cordless pool cleaning accessible without reducing it to a basic experience. It delivers 3-zone cleaning across floor, walls, and waterline, offering a hands-free alternative to manual maintenance for first-time users and helping reduce day-to-day manual work.

Its 7,800mAh battery and compact design make it easy to deploy, particularly in smaller pools, while its ability to navigate shallow areas as low as 12 inches ensures that sections like tanning ledges are not left unattended. The 5L debris basket and dual roller brushes support consistent cleaning performance that goes beyond surface-level results, making it reliable enough for regular use rather than occasional cleaning.

Priced at $499 during the Anniversary Campaign, down from $699 with a 29% discount, the Sora 10 makes it easier to move away from manual cleaning and into a more hands-free routine without overthinking the upgrade.

It stands out as a practical choice for first-time buyers, offering a straightforward, worthwhile upgrade that replaces manual cleaning without adding complexity.

AquaSense X: extending pool care into a premium, low-intervention system

While the Sora series drives adoption and everyday usability, the AquaSense X is designed to establish what advanced pool cleaning can look like when automation is pushed further into a truly low-intervention experience.

It combines intelligent navigation with HybridSense AI mapping to scan and optimise cleaning paths across the entire pool, delivering high-performance all-zone coverage with greater precision. What sets it apart further is its AstroRinse station, which automatically cleans the filter and removes collected debris after each cycle, reducing one of the last remaining manual steps in robotic pool care.

This is supported by a large-capacity debris system that allows for extended use without frequent emptying, along with advanced filtration and water clarification that target fine particles often left behind by standard cleaning cycles. The result is not just a cleaner pool, but one that maintains clarity and consistency over time.

Features like smart surface parking further simplify retrieval, ensuring that the system remains easy to handle despite its advanced capabilities. Taken together, the AquaSense X moves beyond conventional robotic cleaning and into a more complete, intelligent pool care system built around advanced automation and premium performance.

At $3,999 during the Anniversary Campaign, down from $4,250, the AquaSense X brings flagship-level innovation and advanced automation within closer reach, making it one of the most compelling premium upgrades for users looking to redefine how their pool is maintained.

iSkim: reducing the most repetitive part of pool maintenance

Surface debris is one of the few aspects of pool maintenance that remains constant, accumulating throughout the day regardless of cleaning cycles. The iSkim addresses this by operating as a dedicated robotic skimmer that works continuously rather than periodically, turning what is usually a reactive task into a more automated, low-maintenance process.

Its 9L debris basket, combined with a wide skimming inlet, allows it to capture everything from fine particles to larger debris while significantly reducing how often it needs to be emptied. Powered by a 10,000mAh battery with up to 28 hours of runtime, along with a 24W solar panel enabling 24/7 cleaning, it is designed to maintain surface clarity throughout the day rather than just during cleaning cycles.

This changes how surface cleaning fits into the overall routine. Instead of stepping in repeatedly to manage debris, the system works in the background to keep the pool consistently ready, making it a more hassle-free pool care solution over time. Durable construction backed by a 3-year warranty further reinforces its long-term value.

At $419 during the Anniversary Campaign, down from $499, reflecting a 16% discount, the iSkim combines less emptying, 24/7 solar cleaning, and low-maintenance performance into a smart pool upgrade that reduces ongoing effort rather than just periodic cleaning.

From product choice to complete coverage

What stands out across the lineup is how clearly each part of the range is defined. The Sora series works as the core of the system, offering a progression from an entry-level upgrade to broader, more complete coverage with less manual work. The AquaSense X builds on that with advanced automation and intelligent performance, delivering a more premium, low-intervention pool care experience. The iSkim extends this further by handling continuous surface cleaning, reducing the need for manual skimming and keeping the pool consistently ready.

Taken together, this creates a more complete pool care setup rather than a set of standalone products. Whether it is everyday cleaning, full coverage, or 24/7 surface maintenance, each layer addresses a different part of the same problem, making the overall experience more seamless and low-maintenance.

With Anniversary pricing available from May 9 to May 25, this becomes an opportunity to move into smarter pool care at the right level. From a practical, cordless pool cleaning upgrade to a more advanced, hands-free system, the range makes it easier to choose based on what your pool actually needs, and act on it while the offers are still in place.


Tech

Challenging The Way We Pedal


The bicycle is an invention that has not changed in its fundamentals since the first recognisably modern machines appeared in the closing years of the 19th century. Its frame uses a structure of two triangles, its wheels are equal in size, and it’s propelled by a pedal crank and (in most cases) a chain. Bicycles have improved vastly in materials and performance, but if you were to wheel a 2026 tourer into an 1886 bike shop, the Victorian proprietor would recognise it. Only a very brave engineer would try to fundamentally change such a formula, but here’s [Not programming] with a crankless bicycle.

The idea is to replace the crank’s circular motion with a linear one, thus providing a more constant propulsion. The build was inspired by another that used a sinusoidal track in a rotating cylinder to achieve the necessary conversion. This design takes a different tack, using an arrangement of gears and freewheels he describes as a mechanical rectifier to convert the back-and-forth motion of pedaling into rotation. The pedals themselves are stirrups mounted at each end of a V-belt.
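The "mechanical rectifier" name is apt: like a full-wave electrical rectifier flipping the negative half of a waveform, the freewheel arrangement turns both directions of pedal travel into forward rotation. A rough numerical analogy (illustrative only, not a model of the actual gear train):

```python
import math

def rectified_output(pedal_velocity):
    """Full-wave 'rectification': both directions of the pedals'
    back-and-forth travel drive the wheel forward."""
    return abs(pedal_velocity)

# Reciprocating pedal motion: velocity reverses sign every half cycle,
# but the rectified output never goes negative.
samples = [math.sin(2 * math.pi * t / 20) for t in range(40)]
wheel = [rectified_output(v) for v in samples]
assert min(samples) < 0          # pedals reverse direction
assert all(w >= 0 for w in wheel)  # wheel only ever turns forward
```

A conventional crank instead delivers torque unevenly around the circle, which is part of the motivation for linear-drive designs like this one.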

This build is an exercise in pushing the limits of 3D print strength, as prototype after prototype shears under load. He does finally get the thing to work, though, and we admire his persistence. Oddly, this isn’t the first 3D-printed bicycle geartrain we’ve seen.


Tech

I Reached Out To The White House Counterterrorism Czar For Comment. He Lashed Out On X.


from the seems-very-stable dept

This story was originally published by ProPublica. Republished under a CC BY-NC-ND 3.0 license.

Counterterrorism czar Sebastian Gorka is one of the most controversial figures in the Trump administration, a gate crasher in the buttoned-up world of national security. 

In a field where quiet professionalism is revered, Gorka is loud and mercurial. With a booming, British-accented voice, he describes U.S. operations turning suspected terrorists into “red mist” and stacking bodies “like cordwood.” He wears a lanyard inscribed with “WWFY & WWKY,” referencing a line from President Donald Trump: “We will find you and we will kill you.”

It is a testament to the frenzy of Trump’s first year back in office that even the colorful Gorka had faded into the background as the nation reeled from a mass deportation campaign and sweeping cuts to federal agencies. That changed this February with the launch of the U.S.-Israeli war on Iran, which heightened the risk of retaliatory attacks on American citizens and interests around the world. Overnight, there was renewed interest in who leads White House counterterrorism efforts.

My editors and I decided it was time to break out the Gorka files. For six months, I had monitored Gorka’s public remarks for clues about the status of his long-promised national counterterrorism strategy and updates on deadly U.S. strikes in Africa and the Middle East. It had started as old-fashioned beat reporting; I cover counterterrorism, and he’s the senior director for counterterrorism at the National Security Council.

The trove of details I collected from months of Gorka’s public statements, along with interviews with more than two dozen current and former security officials, were woven into a ProPublica investigation published in April. It’s an in-depth look at Gorka and his role in the hollowed-out national security apparatus after a year of leadership turmoil and personnel loss as Trump shifted resources toward his immigration agenda.

ProPublica reached out to Gorka for comment in multiple ways. He never responded, instead lashing out at me via posts on X before the story published. He told his 1.8 million followers that I was anti-American and accused me of writing a “putrid piece of hackery.”

There went my hopes for a good-faith exchange. After discussion with my editors, ProPublica decided to note the insults in the story. It was another revealing layer to the combustible leader Trump had installed in a sensitive national security role. A former senior official noted the eruption was “Gorka being Gorka.”

Increasingly, journalists are pushing back against attacks on our credibility by “showing the work,” guiding readers through the reporting process to dispel myths and foster transparency. In that spirit, I wanted to take this opportunity to show how basic beat reporting — fact-checking the assertions of a powerful figure — led to a broader story about the state of the U.S. counterterrorism mission at a critical moment.

I’ve covered the post-9/11 counterterrorism apparatus for more than two decades, so Gorka was a familiar presence, an academic known mainly for a well-documented hostility toward Islam, which he has portrayed as inherently violent. Gorka has dismissed criticism of this portrayal as “absurd,” saying his focus is “the war inside Islam” between radicals and Western-aligned Muslim leaders. He also served as an adviser under the first Trump administration but was ousted after just seven months amid White House infighting. 

At the time, dozens of lawmakers had demanded his resignation, and investigative outlets detailed links — which Gorka denies — to the Hungarian far right. After the bruising exit, Gorka waited patiently as the Republican Party swung harder right in the Biden era and eventually returned Trump to office.

Gorka was appointed White House counterterrorism czar — he called it his dream job — in a new era without the “adults in the room,” as some officials referred to the more moderate advisers around Trump in the first term. Privately, national security personnel expressed alarm that intelligence about threats was in the hands of an official who reportedly struggled to get security clearance in the first Trump administration.

To me, Gorka was a weather vane for the administration’s national security thinking: Would his “war on terror” mindset clash with the more isolationist “America First” camp that wanted no more forever wars? How would a vast security apparatus built for the Islamist militant threat reorient toward a new focus on far-left “antifa” militants and Latin American drug cartels newly designated as terrorist organizations?

I was especially interested in the status of a national counterterrorism strategy Gorka had been promising since taking office; such documents typically lay out an administration’s approach to fighting the most urgent threats. Though Gorka had described his plan as “imminent” and “on the cusp” of release, months ticked by without any sign of it.

To glean clues about the strategy, I made it my mission to watch every news appearance, read every interview and listen to every podcast featuring Gorka since December 2024, the month before he entered the White House. It took some digging — he rails against the mainstream news media and prefers to appear (largely unchallenged) on niche pro-Trump news outlets and at conservative think tanks.

I developed a nightly ritual. After dinner with my family, I’d hole up to listen to Gorka, hunting for the scraps of news buried in his over-the-top vocabulary and graphic storytelling. Alongside my note categories for “Trump Anecdotes” and “Militant Death Tolls” was one for “Big Words.” For example, the president calls Joe Biden “sleepy”; Gorka prefers “somnambulant.”

Weeks into the reporting, in February 2026, I realized Gorka’s speech had burrowed into my brain when I watched a silly video and thought, in his voice, “Preposterous!” It was time for a break.

I reread my notes from hours of listening sessions. I interviewed counterterrorism analysts and national security watchdog groups about Gorka and his remit. Veteran national security personnel added context and analysis. Just as my editors and I were discussing how to turn the findings into a story, the Iran war began and the spotlight on Gorka grew brighter.

Much of the material on air strikes and the dismantling of guardrails was first incorporated into a story I reported about the Pentagon moving away from more robust civilian protections, a reversal highlighted by a deadly U.S. attack on a girls’ school in Iran. Other reporting ended up in the story about Gorka’s phoenixlike return to the White House and what it says about the Trump counterterrorism doctrine.

Gorka didn’t respond to requests for comment beyond the hostile posts on X. When I asked the White House for comment, spokesperson Anna Kelly praised Gorka’s “incredible job” but sidestepped questions about his approach. “Anyone attempting to smear him and the President’s national security team is only revealing that they haven’t been paying attention for the past year,” Kelly wrote, “as anyone with eyes can see that our homeland is more secure than ever.” 

As of writing, exactly two months into the Iran war, Gorka’s counterterrorism strategy has yet to appear.

Filed Under: counter terrorism, donald trump, journalism, national security, seb gorka


Tech

The 19 Most Exciting Cars at the Beijing Auto Show 2026

Published

on

While major motor shows in Europe and the United States are being forced to downsize or change their format, those in China continue to expand.

With 1,451 vehicles on display, including 181 world premieres, the 2026 Beijing International Automotive Exhibition (also known as Auto China 2026) has become the largest auto show in history, in terms of both exhibition space and the number of vehicles shown.

This fact itself reflects a shift in the center of gravity of the automotive industry, but that’s not all. A much larger structural transformation is actually taking place in China today.

Previously, the focus was on low-priced electric vehicle models, but now price is no longer the primary point of competition. At the show, not only were there many high-end EVs and large SUVs from Chinese manufacturers equipped with advanced driver-assistance technologies and AI functions, but these technologies are also rapidly spreading to the lower price range.

Chinese manufacturers’ cars offer many technologically impressive features. Lidar sensors, which use lasers for advanced driver assistance, are now even being incorporated into EVs costing less than 100,000 yuan (approximately $14,500). Models featuring “drive-by-wire” technology, which replaces mechanical steering connections and hydraulic brake lines with electrical signals, are appearing prominently. Even Toyota’s local models are using Huawei’s powertrains and smart cockpit OS.

The simplistic dichotomy of “cheap Chinese cars versus high-end European cars” no longer holds weight. While staying competitive in the low-price market, Chinese manufacturers are also gaining leadership in areas such as AI, driver-assistance systems, in-car chips, smart cockpits, and high-performance EVs.

These 19 particularly noteworthy models from the 2026 Beijing Motor Show best embody this evolution.

XPeng GX

Courtesy of Xpeng

There is a fundamental difference between a car designed for autonomous driving and an existing car that has had autonomous driving technology added to it. XPeng Motors’ GX is the former: its sensors, computing infrastructure, and AI models were designed first, with Level 4 autonomous driving in mind, and then built into a new SUV bound for the commercial market.

Equipped with up to four proprietary AI chips, it boasts a total computing power of 3,000 tera operations per second (TOPS), approximately 12 times the 254 TOPS that a single Nvidia Orin is capable of. The vehicle’s latest AI model can recognize spoken commands as well as the images captured by the car’s cameras, and it can understand and adapt to current driving conditions.

Volkswagen has adopted XPeng’s AI chip and driver-assistance technology in its EVs, meaning XPeng is no longer just an EV manufacturer. It’s becoming a platform provider supplying the brains behind autonomous driving to Europe’s largest automaker. The price is 399,800 yuan (approximately $58,000).


Tech

Google's Fitbit Air is a Whoop-like screen-less fitness tracker built around AI

Published

on


The Fitbit Air is a compact module that fits into a range of bands, with its sensors pressed against the skin and no display on top. That means no notifications, no tapping, and no swiping – just passive tracking. With nothing to interact with, the device is easier to wear…

Tech

With Denuvo Completely Defeated, 2K Turns To Annoying Online Check In Requirement


from the bad-to-worse dept

Ah, Denuvo. It’s been several years since we checked in on this once vaunted DRM tool that billed itself as undefeatable. The end of PC gaming piracy was said to be at hand, at least for any title using Denuvo. Then, predictably, the cracking community saw the target the company had put on its own tool and got to work. They were first able to crack games using Denuvo in months, which turned into weeks, which turned into days, which eventually turned into it being cracked essentially on a game’s launch day.

So, how’s it been going for Denuvo since? Well, it’s essentially been rendered completely useless at this point.

As recently reported by Tom’s Hardware, on April 27, a large Reddit thread tracking which games using Denuvo DRM still needed to be cracked or bypassed officially hit zero. (This list tracks games that don’t require an online server connection, not MMORPGs and other games that do.) What that means, effectively, is that according to Denuvo modders and hackers, the DRM tech is no longer able to stop pirates from downloading and installing games for free. This milestone for hackers is largely thanks to the MKDev collective and modder DenuvOwO. It was these people who created the hypervisor-based bypass (HVB) that installs a kernel-level driver to bypass Denuvo’s DRM checks.

Technically, Denuvo is still in the game, but it isn’t functioning as it should, and pirates can play without paying. And there is already some evidence that bypassing Denuvo has led to performance improvements in titles like Resident Evil Requiem, which might push some people to use the bypass even if they bought the game legally. We saw this in a previous Resident Evil game when hackers bypassed Denuvo in 2021.

This is always the life cycle of DRM in video games. Whatever audacious claims a DRM company might want to make early on with its product, the technology is eventually defeated to one degree or another and all that is left are the byproducts of the DRM that serve to do nothing other than annoy legitimate customers of a video game. If the technology is so intrusively bad that even legit buyers of a game want to crack it out of their games, and the pirates are completely unencumbered by it as well, then it’s a wonder why anyone would bother including it in their games to begin with.

DRM is pretty much always bad. The desire to protect a game from pirates is understandable, but ultimately pointless. There is almost never enough benefit in terms of generating more sales by trying to fight piracy to be worth pissing off your actual paying customers. And tactics such as what publisher 2K has decided to do in the wake of Denuvo’s complete failure aren’t any better.

2K Games has apparently begun adding 14-day online check-ins to some of its PC games, reportedly including NBA 2K25, NBA 2K26, and Marvel’s Midnight Suns. These games now use a “fixed offline authorization token” that expires after two weeks. Once that happens, the game will not be playable until you connect to the internet and let the game ping Denuvo to get a new token. Pirat Nation and hackers are claiming this new countdown isn’t properly disclosed on the games’ Steam Store page or in each title’s respective EULA.
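Mechanically, a fixed offline authorization token like the one described is simple: stamp an expiry at issue time and refuse to launch once it has passed. A hypothetical sketch (the 14-day window comes from the report; all names and logic here are illustrative, not 2K's or Denuvo's actual implementation):

```python
from datetime import datetime, timedelta, timezone

TOKEN_LIFETIME = timedelta(days=14)  # the reported check-in window

def issue_token(now):
    """Server side: a fresh token returned on a successful online check-in."""
    return {"issued": now, "expires": now + TOKEN_LIFETIME}

def can_launch_offline(token, now):
    """Client side: the game refuses to start once the token lapses."""
    return now < token["expires"]

now = datetime(2026, 3, 1, tzinfo=timezone.utc)
token = issue_token(now)
print(can_launch_offline(token, now + timedelta(days=13)))  # True
print(can_launch_offline(token, now + timedelta(days=15)))  # False: must reconnect
```

Note that nothing in such a scheme depends on what the player actually does with the game; it only punishes whoever stays offline too long, which is exactly the complaint.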

I’ll just add that pushing this new requirement out via an update to existing purchases is also a problem. Customers bought these games with the understanding of how they would work or not when offline. 2K suddenly changing the product in a meaningful way after it had already been purchased is a flatly anti-consumer move.

And I have no doubt that this online check requirement will be defeated by the same folks who defeated Denuvo. This arms race continues, but it shouldn’t. Why not focus on making great games and connecting with your paying customers to give them reasons to actually pay instead?

Filed Under: check ins, denuvo, drm, video games

Companies: 2k, denuvo


Copyright © 2025