
Tech

Gemini can now create audio summaries of your Google Docs


Google is introducing Gemini-powered Audio Summaries to Google Docs, letting you listen to a shorter, AI-generated overview of your document on the go. Think of it as turning long-form documents, such as product manuals, meeting minutes, or quarterly business reports, into an audio-only podcast.

Powered by Gemini, Audio Summaries generate a short, natural-language recap of a document, typically a couple of minutes long, that retains its key ideas.

Your Google Doc, now in podcast form

The feature delivers the gist of an entire document in audio form, so you can listen while walking on a treadmill, preparing breakfast, or heading to class or the office.

The Audio Summaries feature is available in Google Docs on the web, under Tools > Audio > Listen to document summary. Once you select it, Gemini scans the file and produces a narrated summary. A companion “Listen to this tab” option reads the entire page aloud for when you don’t want to miss the details.

Google says the voices sound natural, similar to the Gemini AI assistant’s. You can adjust the playback speed to suit your needs, and you get standard controls like play, pause, and timeline scrubbing, as well as different voice styles, so you can personalize the document-turned-podcast.


Availability, productivity boost, and a catch

The Audio Summaries feature is rolling out gradually. It isn’t available to free users, however; it is currently limited to paid tiers, including the Workspace Business (Standard and Plus) and Enterprise (Standard and Plus) plans, as well as certain AI add-ons, including Google AI Ultra and AI Pro.

Audio Summaries could really enhance your productivity, especially if you’re a multitasker or have accessibility needs. Use the feature with care, though: an AI-generated summary can miss important details.



Apple set to unveil new iPhone, MacBooks, iPads at March 4 event



Most expect Apple to introduce a handful of new products at the event, including the iPhone 17e. The entry-level iPhone will likely be a dialed-back variant of the standard iPhone 17, but with enough bells and whistles to drum up consumer interest. The handset is said to be…



Pentagon may sever Anthropic relationship over AI safeguards – Claude maker expresses concerns over ‘hard limits around fully autonomous weapons and mass domestic surveillance’



  • The Pentagon and Anthropic are in a standoff over usage of Claude
  • The AI model was reportedly used to capture Nicolás Maduro
  • Anthropic refuses to let its models be used in “fully autonomous weapons and mass domestic surveillance”

A rift between the Pentagon and several AI companies has emerged over how their models can be used as part of operations.

The Pentagon has asked AI providers Anthropic, OpenAI, Google, and xAI to allow the use of their models for “all lawful purposes”.



How Ricursive Intelligence raised $335M at a $4B valuation in 4 months


The co-founders of startup Ricursive Intelligence seemed destined to be co-founders.

Anna Goldie, CEO, and Azalia Mirhoseini, CTO, are so well-known in the AI community that they were among those AI engineers who “got those weird emails from Zuckerberg making crazy offers to us,” Goldie told TechCrunch, chuckling. (They didn’t take the offers.) The pair worked at Google Brain together and were early employees at Anthropic.

They earned acclaim at Google by creating Alpha Chip, an AI tool that could generate solid chip layouts in hours, a process that normally takes human designers a year or more. The tool helped design three generations of Google’s Tensor Processing Units.

That pedigree explains why, just four months after launching Ricursive, the pair last month announced a $300 million Series A round at a $4 billion valuation led by Lightspeed, only a couple of months after raising a $35 million seed round led by Sequoia.


Ricursive is building AI tools that design chips, not the chips themselves. That makes it fundamentally different from nearly every other AI chip startup: it’s not a wannabe Nvidia competitor. In fact, Nvidia is an investor. The GPU giant, along with AMD, Intel, and every other chipmaker, is a target customer.

“We want to enable any chip, like a custom chip or a more traditional chip, any kind of chip, to be built in an automated and very accelerated way. We’re using AI to do that,” Mirhoseini told TechCrunch. 

Their paths first crossed at Stanford, where Goldie earned her PhD as Mirhoseini taught computer science classes. Since then, their careers have been in lockstep. “We started at Google Brain on the same day. We left Google Brain on the same day. We joined Anthropic on the same day. We left Anthropic on the same day. We rejoined Google on the same day, and then we left Google again on the same day. Then we started this company together on the same day,” Goldie recounted.


During their time at Google, the colleagues were so close they even worked out together, both enjoying circuit training. The pun wasn’t lost on Jeff Dean, the famed Google engineer who was their collaborator. He nicknamed their Alpha Chip project “chip circuit training” — a play on their shared workout routine. Internally, the pair also got a nickname: A&A. 


The Alpha Chip earned them industry notice, but it also attracted controversy. In 2022, one of their colleagues at Google was fired, Wired reported, after he spent years trying to discredit A&A and their chip work, even though that work was used to help produce some of Google’s most important, bet-the-business AI chips.

Their Alpha Chip project at Google Brain proved the concept that would become Ricursive — using AI to dramatically accelerate chip design.

Designing chips is hard

The issue: computer chips have millions to billions of logic-gate components integrated on a silicon wafer. Human designers can spend a year or more placing those components to ensure performance, good power utilization, and other design needs. Determining the placement of such infinitesimally small components with digital precision is, as you might expect, hard.

Alpha Chip “could generate a very high-quality layout in, like, six hours. And the cool thing about this approach was that it actually learns from experience,” Goldie said. 


The premise of their AI chip design work is to use “a reward signal” that rates how good the design is. The agent then takes that rating to “update the parameters of its deep neural network to get better,” Goldie said. After completing thousands of designs, the agent got really good. It also got faster as it learned, the founders say.
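The reward-driven loop Goldie describes can be sketched with a toy example. This is not Alpha Chip's actual method (which trains a deep neural network); it is a minimal hill-climbing stand-in showing how a reward signal steers a placement toward better layouts. The 1-D "wirelength" reward is an assumption for illustration only.

```python
import random

random.seed(0)

def reward(placement):
    # Hypothetical reward signal: negative total "wirelength" between
    # consecutive components laid out on a 1-D line. Shorter wiring
    # yields a higher (less negative) reward.
    return -sum(abs(placement[i] - placement[i + 1])
                for i in range(len(placement) - 1))

# Start from a random placement of 10 components.
placement = list(range(10))
random.shuffle(placement)
best = reward(placement)

for _ in range(2000):
    i, j = random.sample(range(len(placement)), 2)
    placement[i], placement[j] = placement[j], placement[i]   # propose a swap
    r = reward(placement)
    if r > best:
        best = r  # the reward signal says this layout is better: keep it
    else:
        placement[i], placement[j] = placement[j], placement[i]  # revert
```

The real system replaces the swap-and-revert rule with gradient updates to a neural network's parameters, and, per the founders, gets faster as it accumulates experience across designs.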

Ricursive’s platform will take the concept further. The AI chip designer the company is building will “learn across different chips,” Goldie said, so each chip it designs should make it a better designer for the next one.

Ricursive’s platform also makes use of LLMs and will handle everything from component placement through design verification. Any company that makes electronics and needs chips is their target customer.

If their platform proves itself, as it seems likely to do, Ricursive could play a role in the moonshot goal of achieving artificial general intelligence (AGI). Indeed, their ultimate vision is designing AI chips, meaning the AI will essentially design its own computer brains. 


“Chips are the fuel for AI,” Goldie said. “I think by building more powerful chips, that’s the best way to advance that frontier.” 

Mirhoseini adds that the lengthy chip-design process is constraining how quickly AI can advance. “We think we can also enable this fast co-evolution of the models and the chips that basically power them,” she said. So AI can grow smarter faster. 

If the thought of AI designing its own brains at ever increasing speeds brings visions of Skynet and the Terminator to mind, the founders point out that there’s a more positive, immediate and, they think, more likely benefit: hardware efficiency.  

When AI labs can design far more efficient chips (and, eventually, all the underlying hardware), their growth won’t have to consume so much of the world’s resources.


“We could design a computer architecture that’s uniquely suited to that model, and we could achieve almost a 10x improvement in performance per total cost of ownership,” Goldie said. 

While the young startup won’t name its early customers, the founders say they’ve heard from every big chipmaking name you can imagine. Unsurprisingly, they have their pick of first development partners, too.



The Vatican introduces an AI-assisted live translation service


The Vatican is leaning into AI. AI-assisted live translations are being introduced for Holy Mass attendees — the holy masses, if you will. The Papal Basilica of Saint Peter in the Vatican has teamed up with Translated, a language service provider, to offer live translations in 60 languages.

“Saint Peter’s Basilica has, for centuries, welcomed the faithful from every nation and tongue. In making available a tool that helps many to understand the words of the liturgy, we wish to serve the mission that defines the centre of the Catholic Church, universal by its very vocation,” Cardinal Mauro Gambetti, O.F.M. Conv., Archpriest of the Papal Basilica of Saint Peter in the Vatican, said in a statement. “I am very happy with the collaboration with Translated. In this centenary year, we look to the future with prudence and discernment, confident that human ingenuity, when guided by faith, may become an instrument of communion.”

Visitors to the Vatican will have the option to scan a QR code, which gives them access to live audio and text translations of the liturgy. No app is required; it should work right in the browser.

The technology stems from Lara, a translation AI tool Translated launched in 2024. Translated claims that Lara works with the “sensitivity of over 500,000 native-speaking professional translators.”



Spokane startup Blaze Barrier heats up with new funding for quick-deploy wildfire defense system


Members of the Blaze Barrier team, from left: Jacob Schuler, founder and CEO; Jennifer Fanto, chief operating officer; and Cody Schuler, head of production and safety. (Blaze Barrier Photo)

Jacob Schuler is not a firefighter. But in 2021 he heard from a friend who was first on the scene of a barn fire in Stevens County, Wash. The friend described the technique firefighters use to slow or contain a brush fire when there is no access to water.

“Standard operating procedure is to grab shovels and start digging a fire line,” Schuler told GeekWire. “It removes the vegetation, and when the fire gets there it’s supposed to put out the fire because it runs out of fuel.”

That day, the flames were too fast for the diggers, and the blaze raced into a neighboring field, Schuler said. The 30-day Ford-Corkscrew Fire burned 16,000 acres and destroyed 18 homes.

“Hearing that story, that when the water is gone they grab shovels — that was a problem statement for me,” Schuler said, and he set out to find a solution.

Spokane-based Blaze Barrier was born out of Schuler’s desire to give firefighters and homeowners a quick-acting tool to fight wildfires. The technology works by connecting a series of modules that contain monoammonium phosphate, a non-toxic extinguishing powder. When fire reaches the line’s fuses, the modules ignite and knock down the flames while also laying down a fire-suppressing barrier to stop the fire’s progress.


“It’s like a fire line in a box instead of the manual labor of digging the vegetation away,” Schuler said. The line is fast and easy to deploy from its storage box, the powder is biodegradable, and unused lines or modules can be picked up and reused.

Blaze Barrier modules are connected to one another in a 25-foot line and ignite when flames reach the fuses that feed into each module full of fire extinguishing powder. (Blaze Barrier Photo)

Blaze Barrier is appropriate for certain types of wildfires and grass fires. It’s not intended to work against a massive blaze fed by powerful winds, like those that overpower firetrucks or jump between treetops.

“We hear pretty consistently from firefighters that giving them an extra 5-10 minutes or slowing the intensity of a fire is game-changing for them,” Schuler said. “It allows them to get into better position so they’re not being overtaken.”

Blaze Barrier recently closed a $760,000 seed funding round, with Avista Development and Barton Ventures co-leading the round and participation from 12 angel investors. The company previously raised a seed round of $300,000, and a Kickstarter campaign raised about $53,000.

The startup employs six people and is actively hiring for a 9,500-square-foot production facility where it hopes to eventually assemble 1,000 fire lines a day.


A 25-foot Blaze Barrier sells on the company’s website for $295. A patent is pending for the system Schuler created in which the modules are strung together. And the company just got sign-off from the U.S. Department of Transportation to ship via common carrier.

The video below, showing a previous iteration of Blaze Barrier, illustrates how the system is deployed and ignites:



Real LED TVs Are Finally Becoming A Thing


Once upon a time, the cathode ray tube was pretty much the only type of display you’d find in a consumer television. As the analog broadcast world shifted to digital, we saw the rise of plasma displays and LCDs, which offered greater resolution and much slimmer packaging. Then there was the so-called LED TV, confusingly named—for it was merely an LCD display with an LED backlight. The LEDs were just lamps, with the liquid crystal doing all the work of displaying an image.

Today, however, we are seeing the rise of true LED displays. Sadly, decades of muddled marketing messages have polluted the terminology, making it a confusing space for the modern television enthusiast. Here, we’ll explore how these displays work and disambiguate what they’re being called in the marketplace.

The Rise Of Emissive Displays

When it comes to our computer monitors and televisions, most of us have gotten used to the concept of backlit LCD displays. These use a bright white backlight to actually emit light, which the liquid crystal array then filters into all the different colored pixels that make up the image. It’s an effective way to build a display, albeit with a serious limitation on contrast ratio, because the LCD is only so good at blocking out light coming from behind. Over time, these displays have become more sophisticated, with manufacturers ditching cold-cathode tube backlights for LEDs, then innovating with technologies that vary the brightness of parts of the LED backlight to improve contrast somewhat. Some companies even started using arrays of colored LEDs in their backlights for further control, with the technology often referred to as “RGB mini LED” or “micro RGB.” This still involves an LCD panel in front of the backlight, limiting contrast ratios and response times.

The holy grail, though, would be to ditch the liquid crystal entirely, and just have a display fully made of individually addressable LEDs making up the red, green, and blue subpixels. That is finally coming to pass, with manufacturers launching new television lines under the “Micro LED” name. These are true “emissive” displays, where the individual red, blue, and green subpixels are themselves emitting light, not just filtering it from a backlight source behind them.

The challenge behind making pure LED TVs was figuring out how to get the LEDs small enough and to put them in scalable arrays. Credit: Samsung

These displays promise greater contrast than backlit LCDs, because individual pixels can be turned completely off to create blacker blacks. Response times are also fast, because LEDs switch on and off much more quickly than liquid crystals can react. They’re also relatively power efficient, as there’s no need to supply electrons to pixels that are off. Contrast this with LCDs, which are always spending power turning some pixels black in front of a glowing backlight that is also drawing power. Viewing angles of emissive displays are also top-notch. Inorganic LEDs also have long lifetimes, which makes them far more desirable than OLED displays (discussed further below). Their high brightness also makes them ideal for use in bright conditions, particularly where sunlight is concerned.

Given the many boons of this technology, you might question why it’s taken true LED displays this long to hit the market. The ultimate answer comes down to cost and manufacturability. If you’ve ever built your own LED array, you’ve probably noted the engineering challenges in reducing pixel size and increasing resolution. When it comes to producing a 4K display, you’re talking about laying down 8,294,400 individual RGB LEDs, all of which need to work flawlessly and be small enough not to show up as individually visible pixels from typical viewing ranges. Other technologies like LCDs and OLEDs have the benefit that they can be easily produced with lithographic techniques at large sizes, but the technology to produce pure LED displays at this scale is only just coming to fruition.
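The pixel arithmetic is easy to sanity-check, assuming standard 4K UHD dimensions:

```python
# 4K UHD is 3840 x 2160 pixels; a true LED display needs one individually
# addressable RGB LED per pixel, each made of three colored emitters.
width, height = 3840, 2160
pixels = width * height    # individually addressable RGB LED positions
subpixels = pixels * 3     # separate red, green, and blue emitters

print(pixels)      # 8294400, matching the figure quoted above
print(subpixels)   # 24883200 discrete emitters that must all work flawlessly
```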

There are very few Micro LED TVs on the market right now. The price is why. Credit: Best Buy via screenshot

You can purchase an all-LED TV today, if you so desire. Just note that you’ll pay through the nose for it. Few models are on the market, but Best Buy will sell you a 114″ Micro LED set from Samsung for the charming price of $149,999.99. If that’s a bit big for your house, condo, or apartment, you might consider the 89″ model for a more acceptable $109,999.99. Meanwhile, LG has demonstrated a 136″ model of a micro LED TV, but there have been no concrete plans to bring it to market. Expect it to land somewhere firmly in the six-figure range, too.

If you’re not feeling so flush, you can get a lesser “Micro RGB” TV if you like, which combines a fancy RGB matrix backlight with LCD technology as discussed above. Even then, a Samsung R95 television with Micro RGB technology will set you back $29,999.99 at Best Buy, or you can purchase it on a payment plan for $1,250 a month. In fact, with the launch of these comparatively affordable TVs, Samsung has gone somewhat quiet on its Micro LED line since initially crowing about it in 2024. Still, whichever way you go, these fancy TVs don’t come cheap.

But What About OLED?

OLEDs have many benefits as an emissive display technology, but the organic materials used limit brightness and lifespan. Fabrication is, however, far cheaper than for pure inorganic LED displays. Credit: author

It’s true that emissive LED displays have existed in the market for some time, but not using traditional light-emitting diodes. These are the popular “OLED” displays, with the acronym standing for “organic light emitting diode.” Unlike standard LEDs, which use inorganic semiconductor crystals to emit light, OLEDs instead use special organic compounds in a substrate between electrodes, which emit light when electricity is applied. They can readily be fabricated in large arrays to create displays, which are used in everything from tiny smartwatches to full-sized televisions.

You might question why the advent of “proper” LED displays is noteworthy given that OLED technology has been around for some time. The problem is that OLEDs are somewhat limited in their performance versus traditional inorganic LEDs. The main area in which they suffer is longevity, as the organic compounds are susceptible to degradation over time. The brightness of individual pixels in an OLED display tends to drop off very quickly compared to inorganic LEDs. A display can diminish to half of its original brightness in just a few years of moderate to heavy use. In particular, blue OLED subpixels tend to degrade faster than red or green subpixels, forcing manufacturers to take measures to account for this over the lifetime of a display. Peak brightness is also somewhat limited, which can make OLED displays less attractive for use in bright rooms with lots of natural light. Dark spots and burn in are also possible, at rates greater than those seen in contemporary LCD displays.

The limitations of OLED displays have not stopped them from gaining a strong position in the TV marketplace. However, the technology is unlikely to beat true LED displays in outright image quality, brightness, and performance. Cost will still be a factor, and OLEDs (and LCDs) will remain relevant for a long time to come. For now, though, the pure LED display promises to become the prime choice for those seeking a premium viewing experience at any cost.


Featured image: “Micro LED” displays. Credit: Samsung



Infostealer malware found stealing OpenClaw secrets for the first time


With the massive adoption of the OpenClaw agentic AI assistant, information-stealing malware has been spotted stealing files associated with the framework that contain API keys, authentication tokens, and other secrets.

OpenClaw (formerly ClawdBot and MoltBot) is a local-running AI agent framework that maintains a persistent configuration and memory environment on the user’s machine. The tool can access local files, log in to email and communication apps on the host, and interact with online services.

Since its release, OpenClaw has seen widespread adoption worldwide, with people using it to manage everyday tasks and act as an AI assistant.


However, there has been concern that, given its popularity, threat actors may begin targeting the framework’s configuration files, which contain authentication secrets used by the AI agent to access cloud-based services and AI platforms.

Infostealer spotted stealing OpenClaw files

Hudson Rock says it has documented the first in-the-wild instance of infostealers stealing files associated with OpenClaw to extract the secrets stored within them.


“Hudson Rock has now detected a live infection where an infostealer successfully exfiltrated a victim’s OpenClaw configuration environment,” reads the report.

“This finding marks a significant milestone in the evolution of infostealer behavior: the transition from stealing browser credentials to harvesting the ‘souls’ and identities of personal AI agents.”

Hudson Rock had been predicting this development since late last month, calling OpenClaw “the new primary target for infostealers” due to the highly sensitive data the agents handle and their relatively lax security posture.

Alon Gal, co-founder and CTO of Hudson Rock, told BleepingComputer that the malware is believed to be a variant of the Vidar infostealer, and that the data was stolen when the infection took place on February 13, 2026.


Gal said the infostealer does not appear to target OpenClaw specifically, but instead executes a broad file-stealing routine that scans for sensitive files and directories containing keywords like “token” and “private key.”

As the files in the “.openclaw” configuration directory contained these keywords and others, they were stolen by the malware.
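As a defensive illustration, a short script can sweep a directory for files whose contents match the same telltale keywords, showing how broad such a generic routine is. The function name and keyword list below are assumptions for the example, not the malware's actual code; the script is meant for auditing what a keyword sweep would pick up on your own machine.

```python
import os

# Keywords the report says the stealer's generic routine matches on.
KEYWORDS = ("token", "private key")

def find_sensitive_files(root):
    """Return paths under `root` whose text contains any watched keyword,
    i.e. files a generic infostealer sweep would likely grab."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", encoding="utf-8", errors="ignore") as f:
                    text = f.read().lower()
            except OSError:
                continue  # unreadable file: skip it
            if any(keyword in text for keyword in KEYWORDS):
                hits.append(path)
    return hits
```

Running such an audit against an agent's configuration directory makes it obvious why files like openclaw.json and device.json, which embed tokens and key material, fall into the net.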

The OpenClaw files stolen by the malware are:

  • openclaw.json – Exposed the victim’s redacted email, workspace path, and a high-entropy gateway authentication token, which could enable remote connection to a local OpenClaw instance (if exposed) or client impersonation in authenticated requests.
  • device.json – Contained both publicKeyPem and privateKeyPem used for pairing and signing. With the private key, an attacker could sign messages as the victim’s device, potentially bypass “Safe Device” checks, and access encrypted logs or cloud services paired with the device.
  • soul.md and memory files (AGENTS.md, MEMORY.md) – Define the agent’s behavior and store persistent contextual data, including daily activity logs, private messages, and calendar events.
Openclaw.json (left) and soul.md (right). Source: Hudson Rock

Hudson Rock’s AI analysis tool concluded that the stolen data is enough to potentially enable a full compromise of the victim’s digital identity.

The researchers expect information stealers to continue focusing on OpenClaw as the tool becomes increasingly integrated into professional workflows, and to incorporate more targeted mechanisms for AI agents.


Meanwhile, Tenable discovered a maximum-severity flaw in nanobot, an ultra-lightweight personal AI assistant inspired by OpenClaw, that could potentially allow remote attackers to fully hijack WhatsApp sessions via exposed instances.

Nanobot, released two weeks ago, already has 20,000 stars and over 3,000 forks on GitHub. The team behind the project released fixes for the flaw, tracked as CVE-2026-2577, in version 0.13.post7.



Why Hart Tools Are Being Discontinued

Home improvement can be tough for DIYers working on a budget, as even the smallest household jobs can get very expensive, very fast. It’s important to save money, especially on power tools, which is why many people use the Hart brand. But unfortunately, this affordable line of tools is being discontinued by parent company Techtronic Industries Limited (TTI), which is shifting its focus to other core brands.

TTI revealed in its announcement that it plans to keep Hart in its family of brands, though there was no indication of what exactly that means moving forward. TTI also did not confirm that Hart tool profits were down, but did state that demand is up for Milwaukee and Ryobi, two other popular brands owned by the Chinese company. TTI Chief Executive Officer Steven Philip Richman said in the announcement that the company had managed to stay strong during a challenging economic period. “The discontinuation of the HART business further supports our ability to deliver our medium-term internal profitability objectives,” Richman remarked.

Hart tools were sold exclusively at Walmart, and as of this writing, inventory is getting low on several items. Some tools and accessories are also now listed as “out of stock,” and the same may begin happening in stores as well. TTI’s official announcement was made via the Hong Kong Stock Exchange’s Issuer Information Service on December 11, 2025. 


Hart’s history and Walmart’s other tool option

Hart Tools was originally founded as a California-based company in 1983. It started out small, focusing mainly on framing hammers, which eventually led to the creation of other tools, including axes, chisels, and wedges. Eventually, Hart expanded its lineup into a fully realized hand tool and power tool brand. Hart was sold to TTI in 2007, and by 2019 it had become an exclusive brand sold at Walmart.

There’s been no word on whether or not another tool brand will fill the void left by Hart. However, Hart customers could try Hyper-Tough tools, a brand you might not realize is owned by Walmart. Like Hart, Hyper-Tough is made for DIYers with an extensive line that includes a wide variety of hand tools, power tools, and other equipment. It’s a budget-friendly brand with many tools selling at prices that are comparable to Hart Tools.


The Hyper-Tough brand has other benefits as well, including a 20V battery platform that allows batteries to be shared between select tools. Hyper-Tough also offers brushless variants of some tools that deliver more power and better performance. Plus, you can also get replacement parts for some outdoor equipment either in-store or online.





Alan DeKok’s Path From Physics to Network Security


When Alan DeKok began a side project in network security, he didn’t expect to start a 27-year career. In fact, he didn’t initially set out to work in computing at all.

DeKok studied nuclear physics before making the switch to a part of network computing that is foundational but—like nuclear physics—largely invisible to those not directly involved in the field. Eventually, a project he started as a hobby became a full-time job: maintaining one of the primary systems that helps keep the internet secure.

Alan DeKok

Employer: InkBridge Networks

Occupation: CEO

Education: Bachelor’s and master’s degrees in physics, Carleton University

Today, he leads the FreeRADIUS Project, which he cofounded in the late 1990s to develop what is now the most widely used Remote Authentication Dial-In User Service (RADIUS) software. FreeRADIUS is an open-source server that provides back-end authentication for most major internet service providers. It’s used by global financial institutions, Wi-Fi services like Eduroam, and Fortune 50 companies. DeKok is also CEO of InkBridge Networks, which maintains the server and provides support for the companies that use it.

Reflecting on nearly three decades of experience leading FreeRADIUS, DeKok says he became an expert in remote authentication “almost by accident,” and the key to his career has largely been luck. “I really believe that it’s preparing yourself for luck, being open to it, and having the skills to capitalize on it.”

From Farming to Physics

DeKok grew up on a farm outside Ottawa growing strawberries and raspberries. “Sitting on a tractor in the heat is not particularly interesting,” says DeKok, who was more interested in working with 8-bit computers than crops. As a student at Carleton University, in Ottawa, he found his way to physics because he was interested in math but preferred the practicality of science.


While pursuing a master’s degree in physics, also at Carleton, he worked on a water-purification system for the Sudbury Neutrino Observatory, an underground observatory then being built at the bottom of a nickel mine. He would wake up at 4:30 in the morning to drive up to the site, descend 2 kilometers, then enter one of the world’s deepest clean-room facilities to work on the project. The system managed to achieve one atom of impurity per cubic meter of water, “which is pretty insane,” DeKok says.

But after his master’s degree, DeKok decided to take a different route. Although he found nuclear physics interesting, he says he didn’t see it as his life’s work. Meanwhile, the Ph.D. students he knew were “fanatical about physics.” He had kept up his computing skills through his education, which involved plenty of programming, and decided to look for jobs at computing companies. “I was out of physics. That was it.”

Still, physics taught him valuable lessons. For one, “You have to understand the big picture,” DeKok says. “The ability to tell the big-picture story in standards, for example, is extremely important.” This skill helps DeKok explain to standards bodies how a protocol acts as one link in the entire chain of events that needs to occur when a user wants to access the internet.

He also learned that “methods are more important than knowledge.” It’s easy to look up information, but physics taught DeKok how to break down a problem into manageable pieces to come up with a solution. “When I was eventually working in the industry, the techniques that came naturally to me, coming out of physics, didn’t seem to be taught as well to the people I knew in engineering,” he says. “I could catch up very quickly.”


Founding FreeRADIUS

In 1996, DeKok was hired as a software developer at a company called Gandalf, which made equipment for ISDN, a precursor to broadband that enabled digital transmission of data over telephone lines. Gandalf went under about a year later, and he joined CryptoCard, a company providing hardware devices for two-factor authentication.

While at CryptoCard, DeKok began spending more time working with a RADIUS server. When users want to connect to a network, RADIUS acts as a gatekeeper and verifies their identity and password, determines what they can access, and tracks sessions. DeKok moved on to a new company in 1999, but he didn’t want to lose the networking skills he had developed. No other open-source RADIUS servers were being actively developed at the time, and he saw a gap in the market.
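
That gatekeeping exchange begins with an Access-Request packet from the network gear to the RADIUS server. As an illustration of what's on the wire (this is not FreeRADIUS code), here is a minimal Access-Request assembled per RFC 2865 using only the Python standard library; the username, password, and shared secret are placeholders:

```python
import hashlib
import os
import struct

def hide_password(password: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """Obfuscate User-Password per RFC 2865 sec. 5.2: MD5(secret + prev) XOR block."""
    padded = password + b"\x00" * (-len(password) % 16)  # pad to 16-byte multiple
    result, prev = b"", authenticator
    for i in range(0, len(padded), 16):
        digest = hashlib.md5(secret + prev).digest()
        block = bytes(a ^ b for a, b in zip(padded[i:i + 16], digest))
        result += block
        prev = block
    return result

def access_request(username: bytes, password: bytes, secret: bytes,
                   identifier: int = 1) -> bytes:
    """Build a minimal RADIUS Access-Request (Code 1) packet."""
    authenticator = os.urandom(16)  # 16-byte random Request Authenticator
    # Attributes are TLVs: Type (1 byte) | Length (1 byte, incl. header) | Value
    attrs = bytes([1, 2 + len(username)]) + username              # User-Name (1)
    hidden = hide_password(password, secret, authenticator)
    attrs += bytes([2, 2 + len(hidden)]) + hidden                 # User-Password (2)
    length = 20 + len(attrs)        # fixed header is 20 bytes
    return struct.pack("!BBH", 1, identifier, length) + authenticator + attrs

pkt = access_request(b"alice", b"s3cret", b"sharedsecret")
```

A real server would parse this, look up the user, reverse the password hiding with the shared secret, and reply with Access-Accept or Access-Reject; accounting packets then track the session.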

The same year, he started FreeRADIUS in his free time and it “gradually took over my life,” DeKok says. He continued to work on the open-source software as a hobby for several years while bouncing around companies in California and France. “Almost by accident, I became one of the more senior people in the space. Then I doubled down on that and started the business.” He founded NetworkRADIUS (now called InkBridge Networks) in 2008.

By that point, FreeRADIUS was already being used by 100 million people daily. The company now employs experts in Canada, France, and the United Kingdom who work together to support FreeRADIUS. “I’d say at least half of the people in the world get on the internet by being authenticated through my software,” DeKok estimates. He attributes that growth largely to the software being open source. Initially a way to enter the market with little funding, going open source has allowed FreeRADIUS to compete with bigger companies as an industry-leading product.


Although the software is critical for maintaining secure networks, most people aren’t aware of it because it works behind the scenes. DeKok is often met with surprise that it’s still in use. He compares RADIUS to a building foundation: “You need it, but you never think about it until there’s a crack in it.”

27 Years of Fixes

Over the years, DeKok has maintained FreeRADIUS by continually making small fixes. Like using a ratcheting tool to make a change inch by inch, “you shouldn’t underestimate that ratchet effect of tiny little fixes that add up over time,” he says.

He’s seen the project through minor patches and more significant fixes, like when researchers exposed a widespread vulnerability DeKok had been trying to fix since 1998. He also watched a would-be successor to the network protocol, Diameter, rise and fall in popularity in the 2000s and 2010s. (Diameter gained traction in mobile applications but has gradually been phased out in the shift to 5G.) Though Diameter offers improvements, RADIUS is far simpler and already widely implemented, giving it an edge, DeKok explains.

And he remains confident about its future. “People ask me, ‘What’s next for RADIUS?’ I don’t see it dying.” Estimating that billions of dollars of equipment run RADIUS, he says, “It’s never going to go away.”


About his own career, DeKok says he plans to keep working on FreeRADIUS, exploring new markets and products. “I never expected to have a company and a lot of people working for me, my name on all kinds of standards, and customers all over the world. But it worked out that way.”

This article appears in the March 2026 print issue as “Alan DeKok.”

How Volunteers Saved A Victorian-Era Pumping Station From Demolition

D-engine of the Claymills Pumping Station. (Credit: John M)

Although infrastructure like a 19th-century pumping station generally tends to be quietly decommissioned and demolished, sometimes enough people look at such an object and wonder whether it might be worth preserving. Such was the case with the Claymills Pumping Station in Staffordshire, England. After starting operations in the late 19th century, the pumping station was in active use until 1971. A recent documentary by the Claymills Pumping Station Trust, the first video on their YouTube channel, covers the derelict state of the station at that time, as well as its long and arduous recovery since the trust acquired the site in 1993.

After its decommissioning, the station was eventually scheduled for demolition. Many parts had by that time been removed for display elsewhere, discarded, or outright stolen for the copper and brass. Of the four Woolf compounding rotative beam engines, units A and B had been shut down first and used for spare parts to keep the remaining units going. Groundwater intrusion and a decaying roof compounded decades of neglect, leaving the station in a sorry state. Restoring it was a monumental task.

The inventor of the compounding beam engine, Arthur Woolf, was a Cornish engineer who had figured out how to make this more efficient steam engine work. While his engineering made pumping stations like these possible, the many workers and their families ensured that they kept working smoothly. Although firmly obsolete in the 21st century, pumping stations like these are excellent examples of all the engineering and ingenuity that got us to where we are today, and preserving them is the best way to retain all this knowledge and the memories associated with them.

For that reason, one can really congratulate the volunteers who turned this piece of history into a museum. It features a static display of the restored machinery. If you want to see it running, there are seven demonstrations of the station operating under steam every year, during which the six-story-tall machinery can be observed in all its glory.

Top image: Claymills Pumping Station in 2010. (Credit: Ashley Dace)
