Tech

Maker Creates Robot That Looks Just Like a Spool of Filament for 3D Printers

A spool of filament rests calmly on a shelf, looking exactly like the usual orange Prusament roll found in countless 3D printers, yet it hides a little secret. Prusa wanted a one-of-a-kind gift and asked Matt Denton to transform a regular 2 kg spool of filament into a remote-controlled robot dubbed SpoolBot, one you’d be hard-pressed to recognize as a robot as it goes for a little roll under its own power.



Denton started with a genuine Prusament spool and kept its outward appearance intact, which means it still has orange filament wrapped around a center drum in a sloppy but highly realistic pattern. One side of the spool is fixed, while the other detaches with magnets so you can go in and make changes. The black plastic ends are actually the parts that move the spool: they are the drive wheels. Inside, an internal frame holds everything together without making the structure look out of place.

The whole device is powered by two geared DC motors from Pololu, each equipped with an encoder that tells the bot exactly how far it has turned; these drive the wheels at the spool’s edges. The batteries sit low in the frame to act as a counterweight, keeping the whole assembly upright as the spool spins around its center. A DFRobot Romeo Mini ESP32-C3 board handles all of the control, working in tandem with a BNO085 IMU sensor to keep the spool upright and stable, and an RC receiver links it to a simple handheld controller.

Movement uses a technique known as differential drive: to travel in a straight line, both motors run at the same speed, while varying the speeds produces smooth turns. The IMU monitors pitch and tilt, and if the bot becomes unsteady it slows the motors to keep the whole thing from toppling over. Gyro feedback also keeps the spool on track even on uneven surfaces like carpet. Several operating modes let you choose how the bot behaves: it can hold position in one spot, retrace its path back to where it started, cruise along at a set speed, or even perform spins at full or half speed.
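For a sense of how the differential-drive mixing described above works, here is a minimal sketch (the function name, the normalization scheme, and the example values are illustrative assumptions, not Denton's actual firmware):

```python
def mix_differential_drive(throttle: float, steering: float) -> tuple[float, float]:
    """Mix throttle and steering commands into left/right wheel speeds.

    Inputs are in [-1.0, 1.0]. Equal wheel speeds drive straight;
    a speed difference between the wheels produces a turn.
    """
    left = throttle + steering
    right = throttle - steering
    # Normalize so neither wheel is commanded beyond full speed.
    scale = max(1.0, abs(left), abs(right))
    return left / scale, right / scale

# Straight line: both wheels match.
print(mix_differential_drive(0.5, 0.0))  # (0.5, 0.5)
# Gentle turn: one wheel runs faster than the other.
print(mix_differential_drive(0.5, 0.2))  # (0.7, 0.3)
```

A real implementation like SpoolBot's would additionally blend in the IMU's tilt and gyro corrections before the final motor commands.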

The entire assembly required meticulous planning and engineering to get right: the outer shell has to spin freely on the center hub, so getting the bearings right is what makes it travel smoothly. The spool’s internals are 3D-printed in black PETG and orange PLA and held in place by ingenious touches such as heat-set inserts and precision bolts, with the wiring tucked neatly away so it doesn’t interfere with the moving parts. The project took around five weeks from designing it in CAD to printing the pieces and writing the code.

Matt Denton has generously shared all of the build files and code with the world. You can get them on Printables and GitHub, and there is even a video guide showing how to build one yourself, from installing the bearings to wrapping the filament around the central drum and calibrating the controller. Of course, like any decent robot, SpoolBot includes a pair of googly eyes and a small indicator to give it some personality as it rolls around the floor.

How to Watch Netflix’s ‘America’s Next Top Model’ Docuseries

A new three-part Netflix docuseries will cover the chaos and complicated legacy of the hit reality series, America’s Next Top Model. 

ANTM premiered in 2003 and ran for 24 seasons, helping launch the careers of contestants like Eva Marcille, Lio Tipton and Yaya DaCosta. Netflix’s synopsis describes the new doc as the definitive chronicle of the modeling competition, which “became a pop-culture juggernaut defined by explosive drama, public meltdowns and controversies that still fuel viral moments today.” 

Former contestants, judges and producers — including host and creator Tyra Banks — took part in Netflix’s series, Reality Check, which you can stream shortly. 

When to watch Reality Check: Inside America’s Next Top Model on Netflix

Netflix will drop its three-episode doc on the modeling competition series in the early morning hours on Monday, Feb. 16 (3 a.m. ET, to be exact).

Like many other streaming services, Netflix’s cheapest tier is ad-supported, and you can opt for a pricier tier to avoid commercials. You can subscribe to Standard with ads for $8 per month, Standard for $18 per month or Premium for $25 per month.

For ad-free streaming and access to every title Netflix offers, you should opt for the streamer’s Standard or Premium tiers. The Standard with ads tier comes with some limits on what you can watch due to licensing restrictions. Netflix’s website lets you compare the simultaneous streams, downloads and extra member slots you get with each tier.


Solid-State EV Batteries Just Got One Step Closer To American Roads

Just about every modern electric vehicle on American roads is powered by one of three battery types: lithium iron phosphate (the most common, also known as LFP), nickel manganese cobalt (NMC), and nickel cobalt aluminum (NCA). Each is a relatively mature and well-understood chemistry with its own advantages — LFP batteries are cheap and stable, whereas NCA batteries are energy-dense and powerful. But EVs have only really been commonplace on today’s roads for the past two decades or so, a blink of an eye compared with the internal combustion engine’s nearly 140-year history. Technology advances at an ever-increasing pace, and we may be on the precipice of the next evolution — at least on American roads.

Enter the solid-state battery, a pioneering technology that promises to combine the benefits of all the aforementioned chemistries in a single package: high performance, excellent energy density, a potentially long service life, and strong thermal stability. It comes at a steep cost, though — one that Karma Automotive appears willing to pay. In February 2026, Karma Automotive announced plans to ship the first mass-production vehicle powered by solid-state batteries stateside, equipped with Factorial FEST SSBs.

Karma Automotive bills itself as the only American ultra-luxury manufacturer offering a diverse portfolio of vehicles — a specialized firm dedicated to producing EVs priced deep into six-figure territory. The company currently fields six distinct models, but only one will receive the solid-state battery at first: the Kaveya super coupe, scheduled for a 2027 debut. Let’s dive in and explore the car and solid-state batteries, along with what the technology promises to accomplish.

How solid-state batteries work

First things first: what is a solid-state battery, and how does it differ from most other EV battery types? In short, the typical EV battery has two electrodes, the anode and the cathode — the negative and positive terminals, respectively, during discharge. Between them, ions shuttle back and forth through an electrolyte, like a relay runner passing from one side to the other. There are several types of these batteries, the most common being lithium-ion, but nearly all use a liquid or gel-like electrolyte. Solid-state batteries, or SSBs for short, use a solid electrolyte instead, providing a more stable and energy-dense approach to power storage.

There are several variants of SSBs in service; the one Karma Automotive is testing is technically a quasi-solid-state battery. Produced by Factorial Energy, the quasi-SSB design prioritizes thermal stability (quasi-SSBs are inherently far less flammable than standard lithium-ion batteries) and high energy density, which Factorial says translates to double the range. The company’s website cites at least 500 miles of range for next-generation EVs from a pack weighing roughly one third less than a typical 90 kWh battery. Factorial also lists its Solstice SSB as a candidate for future EVs alongside the FEST quasi-SSB.
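As a rough sanity check on the weight claim, assuming a conventional 90 kWh pack at about 170 Wh/kg at the pack level (an assumed round figure for illustration, not a number from the article or Factorial):

```python
# Back-of-the-envelope check on Factorial's "roughly one third less" weight claim.
# The 170 Wh/kg pack-level figure for a conventional lithium-ion pack is an
# assumption for illustration only.
pack_energy_kwh = 90
conventional_wh_per_kg = 170

conventional_mass_kg = pack_energy_kwh * 1000 / conventional_wh_per_kg
ssb_mass_kg = conventional_mass_kg * (1 - 1 / 3)        # "one third less"
ssb_wh_per_kg = pack_energy_kwh * 1000 / ssb_mass_kg

print(f"conventional pack: {conventional_mass_kg:.0f} kg")
print(f"quasi-SSB pack:    {ssb_mass_kg:.0f} kg ({ssb_wh_per_kg:.0f} Wh/kg)")
```

Under those assumptions, "one third lighter" at the same capacity works out to roughly 255 Wh/kg at the pack level, which is consistent with published quasi-SSB density targets.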

With standard battery technology fully matured, the current consensus is that SSBs represent the next technological leap for batteries. Putting them in cars brings a number of benefits: lighter vehicles with longer ranges, greater battery longevity, and more power. However, because the technology is still emerging as far as EVs go, costs remain prohibitive for regular mass-production cars in the United States, and so you can’t yet buy a U.S.-sold EV that uses them.

The Karma Kaveya

As for the car itself, the Karma Kaveya is a sleek, ultra-modern super coupe with a high-end grand-tourer aesthetic. The name “Kaveya” is Sanskrit, meaning “power in motion,” a theme reflected in the promised statistics: Karma claims the coupe is capable of 0-60 mph in under 3 seconds and a top speed in excess of 180 mph, thanks to its 1,000 hp powertrain. All of that is speculative for now, of course — especially given the emergent nature of the battery it houses.

According to the official figures on Karma’s website, the Kaveya’s 120 kWh high-voltage battery feeds a powertrain producing a combined 1,270 lb-ft of available torque, with a 10-80% charge taking about 45 minutes. That contrasts with an earlier estimate from Stellantis, which announced a partnership with Factorial back in April 2025 to put the batteries in Dodge demonstration vehicles; its figures listed an estimated 15-90% charging time of 18 minutes.
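Karma's charging figure implies a fairly modest average charging power, which is easy to work out from the published numbers (the uniform-draw assumption is a simplification, since real charge curves taper):

```python
# Average charging power implied by Karma's published 10-80% / 45-minute figure.
# Assumes energy is added uniformly across the window; real charge curves taper,
# so peak power would be higher than this average.
pack_kwh = 120
energy_added_kwh = pack_kwh * (0.80 - 0.10)   # 84 kWh added
avg_power_kw = energy_added_kwh / (45 / 60)   # over a 45-minute session

print(f"energy added: {energy_added_kwh:.0f} kWh")
print(f"average charging power: {avg_power_kw:.0f} kW")
```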

Wherever the battery’s real-world performance lands, it will likely exceed that of even the most advanced mass-production standard battery pack, albeit at a steep cost. But Karma isn’t in the business of cheap vehicles, so the model suits the company well. With the Kaveya representing the cutting edge of EV technology, Karma looks poised to leave a mark on the ongoing electric arms race no matter what happens.


Nvidia, Groq and the limestone race to real-time AI: Why enterprises win or lose here

From miles away across the desert, the Great Pyramid looks like perfect, smooth geometry — a sleek triangle pointing to the stars. Stand at the base, however, and the illusion of smoothness vanishes. You see massive, jagged blocks of limestone. It is not a slope; it is a staircase.

Remember this the next time you hear futurists talking about exponential growth.

Intel co-founder Gordon Moore famously predicted in 1965 that the transistor count on a microchip would double every year — the original Moore’s Law. Another Intel executive, David House, later revised this to compute power doubling every 18 months. For a while, Intel’s CPUs were the poster child of this law. That is, until the growth in CPU performance flattened out like a block of limestone.
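To see what House's 18-month doubling cadence compounds to, a quick back-of-the-envelope calculation (the 10-year window is an arbitrary choice for illustration):

```python
# Compounding effect of "compute power doubles every 18 months".
months = 10 * 12          # a 10-year window, chosen for illustration
doubling_period = 18      # months per doubling (House's revision)

doublings = months / doubling_period
growth_factor = 2 ** doublings

print(f"{doublings:.1f} doublings -> roughly {growth_factor:.0f}x the compute")
```

Roughly a hundredfold gain per decade, which is why even brief plateaus feel so jarring.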

If you zoom out, though, the next limestone block was already there — the growth in compute merely shifted from CPUs to the world of GPUs. Jensen Huang, Nvidia’s CEO, played a long game and came out a strong winner, building his own stepping stones first with gaming, then computer vision and, most recently, generative AI.

The illusion of smooth growth

Technology growth is full of sprints and plateaus, and gen AI is not immune. The current wave is driven by the transformer architecture. To quote Anthropic CEO and co-founder Dario Amodei: “The exponential continues until it doesn’t. And every year we’ve been like, ‘Well, this can’t possibly be the case that things will continue on the exponential’ — and then every year it has.”

But just as the CPU plateaued and GPUs took the lead, we are seeing signs that LLM growth is shifting paradigms again. For example, late in 2024, DeepSeek surprised the world by training a world-class model on an impossibly small budget, in part by using the mixture-of-experts (MoE) technique.

Do you remember where you recently saw this technique mentioned? Nvidia’s Rubin press release: The technology includes “…the latest generations of Nvidia NVLink interconnect technology… to accelerate agentic AI, advanced reasoning and massive-scale MoE model inference at up to 10x lower cost per token.”

Huang knows that achieving that coveted exponential growth in compute doesn’t come from pure brute force anymore. Sometimes you need to shift the architecture entirely to place the next stepping stone.

The latency crisis: Where Groq fits in

This long introduction brings us to Groq.

The biggest gains in AI reasoning capabilities in 2025 were driven by “inference-time compute” — or, in lay terms, letting the model think for a longer period of time. But time is money. Consumers and businesses do not like waiting.

Groq comes into play here with its lightning-fast inference. If you bring together the architectural efficiency of models like DeepSeek and the sheer throughput of Groq, you get frontier intelligence at your fingertips. By executing inference faster, you can “out-reason” competing models, offering a “smarter” system to customers without the penalty of lag.

From universal chip to inference optimization

For the last decade, the GPU has been the universal hammer for every AI nail. You use H100s to train the model; you use H100s (or trimmed-down versions) to run the model. But as models shift toward “System 2” thinking — where the AI reasons, self-corrects and iterates before answering — the computational workload changes.

Training requires massive parallel brute force. Inference, especially for reasoning models, requires fast sequential processing: the system must generate tokens almost instantly to sustain complex chains of thought without the user waiting minutes for an answer.

Groq’s LPU (Language Processing Unit) architecture removes the memory bandwidth bottleneck that plagues GPUs during small-batch inference, delivering exactly that kind of speed.

The engine for the next wave of growth

For the C-suite, this potential convergence solves the “thinking time” latency crisis. Consider the expectations placed on AI agents: We want them to autonomously book flights, code entire apps and research legal precedent. To do this reliably, a model might need to generate 10,000 internal “thought tokens” to verify its own work before it outputs a single word to the user.

  • On a standard GPU: 10,000 thought tokens might take 20 to 40 seconds. The user gets bored and leaves.

  • On Groq: That same chain of thought happens in less than 2 seconds.
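The gap between those two bullets is simple throughput division; the 10,000-token count is the article's illustrative figure, and the decode rates below are assumed round numbers, not measured benchmarks:

```python
# Time to emit a hidden chain of thought at a given decode throughput.
# Token count is the article's illustrative figure; the tokens-per-second
# rates are assumed round numbers, not benchmarks of any specific chip.
def thinking_time_s(thought_tokens: int, tokens_per_second: float) -> float:
    return thought_tokens / tokens_per_second

gpu_seconds = thinking_time_s(10_000, 350)    # assumed GPU-class decode rate
lpu_seconds = thinking_time_s(10_000, 5_000)  # assumed Groq-class decode rate

print(f"GPU:  {gpu_seconds:.1f} s")
print(f"Groq: {lpu_seconds:.1f} s")
```

At those assumed rates the same reasoning chain drops from roughly half a minute to about two seconds, which is the entire argument in miniature.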

If Nvidia integrates Groq’s technology, they solve the “waiting for the robot to think” problem. They preserve the magic of AI. Just as they moved from rendering pixels (gaming) to rendering intelligence (gen AI), they would now move to rendering reasoning in real time.

Furthermore, this creates a formidable software moat. Groq’s biggest hurdle has always been the software stack; Nvidia’s biggest asset is CUDA. If Nvidia wraps its ecosystem around Groq’s hardware, they effectively dig a moat so wide that competitors cannot cross it. They would offer the universal platform: the best environment to train and the most efficient environment to run (Groq/LPU).

Consider what happens when you couple that raw inference power with a next-generation open source model (like the rumored DeepSeek 4): You get an offering that would rival today’s frontier models on cost, performance and speed. That opens up opportunities for Nvidia, from directly entering the inference business with its own cloud offering to continuing to power a fast-growing roster of customers.

The next step on the pyramid

Returning to our opening metaphor: The “exponential” growth of AI is not a smooth line of raw FLOPs; it is a staircase of bottlenecks being smashed.

  • Block 1: We couldn’t calculate fast enough. Solution: The GPU.

  • Block 2: We couldn’t train deep enough. Solution: Transformer architecture.

  • Block 3: We can’t “think” fast enough. Solution: Groq’s LPU.

Jensen Huang has never been afraid to cannibalize his own product lines to own the future. By validating Groq, Nvidia wouldn’t just be buying a faster chip; they would be bringing next-generation intelligence to the masses.

Andrew Filev, founder and CEO of Zencoder


Hideki Sato, known as the father of Sega hardware, has reportedly died

Hideki Sato, who led the design of Sega’s beloved consoles of the ’80s and ’90s, died on Friday, according to the Japanese gaming site Beep21. He was 77. Sato worked at Sega from 1971 until the early 2000s, but he’s best known for his role in developing the Sega arcade hardware and home consoles that defined many late Gen X and early millennial childhoods, from the SG-1000 through the Genesis, Saturn and Dreamcast.

Sato went on to serve as Sega’s president from 2001 to 2003. In the post announcing his death, Beep21, which interviewed Sato numerous times over the years, wrote (translated from Japanese), “He was truly a great figure who shaped Japanese gaming history and captivated Sega fans all around the world. The excitement and pioneering spirit of that era will remain forever in the hearts and memories of countless fans, for all eternity.” Sato’s passing comes just a few months after that of Sega co-founder David Rosen, who died in December at age 95. 


OpenClaw creator Peter Steinberger joins OpenAI

Peter Steinberger, who created the AI personal assistant now known as OpenClaw, has joined OpenAI.

Previously known as Clawdbot, then Moltbot, OpenClaw achieved viral popularity over the past few weeks with its promise to be the “AI that actually does things,” whether that’s managing your calendar, booking flights, or even joining a social network full of other AI assistants. (The name changed the first time after Anthropic threatened legal action over its similarity to Claude, then changed again because Steinberger liked the new name better.)

In a blog post announcing his decision to join OpenAI, the Austrian developer said that while he might have been able to turn OpenClaw into a huge company, “It’s not really exciting for me.”

“What I want is to change the world, not build a large company[,] and teaming up with OpenAI is the fastest way to bring this to everyone,” Steinberger said.

OpenAI CEO Sam Altman posted on X that in his new role, Steinberger will “drive the next generation of personal agents.” As for OpenClaw, Altman said it will “live in a foundation as an open source project that OpenAI will continue to support.”


Researchers turn Edison's 1879 light bulb into a mini graphene reactor

Graphene is a two-dimensional lattice of carbon atoms arranged in a hexagonal pattern, renowned for its exceptional electrical conductivity, thermal transport, and mechanical strength. Turbostratic graphene is a stacked variant in which the layers are rotated and misaligned, weakening interlayer coupling and making the material easier to process at scale.

Software Development On The Nintendo Famicom In Family BASIC

Back in the 1980s, your options for writing your own code and games were rather more limited than today. This also mostly depended on what home computer you could get your hands on, which was a market that — at least in Japan — Nintendo was very happy to slide into with their ‘Nintendo Family Computer’, or ‘Famicom’ for short. With the available peripherals, including a tape deck and keyboard, you could actually create a fairly decent home computer, as demonstrated by [Throaty Mumbo] in a recent video.

After a lengthy unboxing of the new-in-box components, we move on to the highlight of the show, the HVC-007 Family BASIC package, which includes a cartridge and the keyboard. The latter of these connects to the Famicom’s expansion port. Inside the package, you also find a big Family BASIC manual that includes sprites and code to copy. Of course, everything is in Japanese, so [Throaty] had to wrestle his way through the translations.

The cassette tape is used to save programs, and the BASIC package also includes a tape with the Sample 3 application, which the video uses to demonstrate loading software from tape on the Famicom. Although [Throaty] unfortunately didn’t sit down to type in the code for the sample listings in the manual, the video provides an interesting glimpse at the all-Nintendo family computer that the rest of the world never got to enjoy.


Google Docs can turn long documents into audio summaries in latest Workspace update

The new feature will roll out across Google Workspace over the next two weeks. It will appear under Tools > Audio > Listen to document summary, where users can trigger a small media player to control playback. The summaries, typically under three minutes, draw on information from multiple document tabs…

Longtime NPR host David Greene sues Google over NotebookLM voice

David Greene, the longtime host of NPR’s “Morning Edition,” is suing Google, alleging that the male podcast voice in the company’s NotebookLM tool is based on Greene, according to The Washington Post.

Greene said that after friends, family members, and coworkers began emailing him about the resemblance, he became convinced that the voice was replicating his cadence, intonation, and use of filler words like “uh.”

“My voice is, like, the most important part of who I am,” said Greene, who currently hosts the KCRW show “Left, Right, & Center.”

Among other features, Google’s NotebookLM allows users to generate a podcast with AI hosts. A company spokesperson told the Post that the voice used in this product is unrelated to Greene’s: “The sound of the male voice in NotebookLM’s Audio Overviews is based on a paid professional actor Google hired.”

This isn’t the first dispute over AI voices resembling real people. In one notable example, OpenAI removed a ChatGPT voice after actress Scarlett Johansson complained that it was an imitation of her own.


Today’s NYT Wordle Hints, Answer and Help for Feb. 16 #1703

Looking for the most recent Wordle answer? Click here for today’s Wordle hints, as well as our daily answers and hints for The New York Times Mini Crossword, Connections, Connections: Sports Edition and Strands puzzles.


Today’s Wordle puzzle is a bit tricky, with its double letter and unusual letters. If you need a new starter word, check out our list of which letters show up the most in English words. If you need hints and the answer, read on.

Read more: New Study Reveals Wordle’s Top 10 Toughest Words of 2025

Today’s Wordle hints

Before we show you today’s Wordle answer, we’ll give you some hints. If you don’t want a spoiler, look away now.

Wordle hint No. 1: Repeats

Today’s Wordle answer has one repeated letter.

Wordle hint No. 2: Vowels

Today’s Wordle answer has one vowel, but it’s the repeated letter, so you’ll see it twice.

Wordle hint No. 3: First letter

Today’s Wordle answer begins with R.

Wordle hint No. 4: Last letter

Today’s Wordle answer ends with T.

Wordle hint No. 5: Meaning

Today’s Wordle answer can refer to a place where birds settle or congregate.

TODAY’S WORDLE ANSWER

Today’s Wordle answer is ROOST.

Yesterday’s Wordle answer

Yesterday’s Wordle answer, Feb. 15, No. 1702 was SKULL.

Recent Wordle answers

Feb. 11, No. 1698: VEGAN

Feb. 12, No. 1699: SURGE

Feb. 13, No. 1700: MOOCH

Feb. 14, No. 1701: BLOOM

Copyright © 2025