
Tech

Kodiak AI raises $100M at a steep discount, sending its stock tumbling 37%

Kodiak AI’s stock tumbled 37% in after-hours trading Thursday after the self-driving truck startup disclosed it had raised $100 million by selling shares at a steep discount — a sign that investors were willing to back the company but not at its current market price.

The company sold shares at $6.50 each, well below its closing price of $9.10, according to a filing with the Securities and Exchange Commission (SEC). The raise also included warrants — instruments that give investors the right to buy additional shares later at a set price, in this case as low as $6.

The financing came from existing backer Ares Management and several unnamed institutional investors.

The influx of capital comes as Kodiak pushes forward on the expensive task of scaling its self-driving trucks business, which covers off-road industrial settings and public highways, with the ultimate goal of eventually spending less than it earns. Kodiak reported revenue of $1.8 million in the first quarter, up from the $1.4 million it logged in the same period a year prior. The company’s loss from operations was $37.8 million, twice what it reported in the same period last year.

Those numbers help explain why the discount terms rattled investors. The company is burning cash fast, and the raise — while sizable — does little to change that math in the near term.

Kodiak has made some recent progress on the business front, including a new commercial contract with Roehl Transport, a pilot program to test Kodiak-equipped autonomous trucks at West Fraser Timber Co.’s log-hauling operations in Alberta, Canada, and a collaboration with the military vehicle maker General Dynamics Land Systems to create autonomous ground vehicles for defense applications.

Under the deal with Roehl, which was also announced Thursday, Kodiak-equipped trucks will autonomously haul freight between Dallas and Houston on four round trips per week. The trucks operate autonomously on the entirety of the trip, but Kodiak keeps a human safety operator behind the wheel as a precaution.

Kodiak founder and CEO Don Burnette said the company is on track to move to driverless trucking on public highways later this year as it ramps up operations.

“We have tons of over-the-road long haul initiatives, and bringing on new partners continues to show momentum,” he said in an interview. “We’re excited about the progress that we’re making as we march toward our driverless launch later this year.”

For now, Kodiak owns the trucks, provides the safety driver, and carries the freight for Roehl along with its other existing on-highway customers, which include Werner, J.B. Hunt, Bridgestone, Martin Brower, and C.R. England. But that arrangement will change once it goes to driverless trucking operations.

“Our intention is to not own the trucks at that point [but to] operate our driver-as-a-service model, where [customers] own and operate the trucks,” Burnette said. He added that this is the system it uses with its off-highway customer Atlas for its driverless deployment in the Permian Basin of Texas.

While Kodiak plans to pull the safety driver by the end of 2026, Burnette said it won’t start driverless operations on public highways until it has finished validating the technology.

“It’s already operating under all of the conditions that we expect to launch driverless, but there’s a lot of validation work that we need to do, and that’s where we bring in our autonomy readiness measure,” Burnette said, describing the initiative — released Thursday — as a zero-to-100 score tracking how much of Kodiak’s internal safety validation is complete. As of April, Kodiak was at 86%, Burnette said.

The company, which was previously called Kodiak Robotics, went public in September via a merger with special-purpose acquisition company Ares Acquisition Corporation II, an affiliate of Ares Management. The deal valued the startup at about $2.5 billion. 

At the time, Kodiak raised $275 million in financing. More than $212.5 million came from certain institutional investors, including $145 million in PIPE funding (Private Investment in Public Equity, a method by which investors purchase shares directly from a public company) and about $62.9 million in trust cash from Ares. That trust cash shrank from its initial $562 million as some SPAC investors redeemed their shares — a standard provision that lets SPAC investors recover their money before a merger closes.

Tech

How Sakana trained a 7B model to orchestrate GPT-5, Claude Sonnet 4 and Gemini 2.5 Pro

Published

on

Every LangChain pipeline your team hardcodes starts breaking the moment the query distribution shifts — and it always shifts. That bottleneck is what Sakana AI set out to eliminate.

Researchers at Sakana AI have introduced the “RL Conductor,” a small language model trained via reinforcement learning to automatically orchestrate a diverse pool of worker LLMs. The Conductor dynamically analyzes inputs, distributes labor among workers, and coordinates communication among them.

This automated coordination achieves state-of-the-art results on difficult reasoning and coding benchmarks, outperforming individual frontier models like GPT-5 and Claude Sonnet 4 as well as expensive human-designed multi-agent pipelines. It achieves this performance at a fraction of the cost and with fewer API calls than competitors. RL Conductor is the backbone of Fugu, Sakana AI’s commercial multi-agent orchestration service.

The limitations of manual agentic frameworks

Large language models have strong latent capabilities. But tapping these capabilities to their fullest is a great challenge. Extracting this level of performance relies heavily on manually designed agentic workflows, which serve as critical components in commercial AI products. 

However, these frameworks fall short because they are inherently rigid and constrained. In comments to VentureBeat, Yujin Tang, co-author of the paper, explained the exact breaking point of current systems: “While using frameworks with hard-coded pipelines like LangChain and Mixture-of-Agents can work well for specific use cases … In production, an inherent bottleneck arises when targeting domains with large user bases with very heterogeneous demands.” 

Tang noted that achieving “real-world generalization in such heterogeneous applications inherently necessitates going beyond human-hardcoded designs.”

Another bottleneck for building robust agentic systems is that no single model is optimal for all tasks. Different models are fine-tuned to specialize in distinct domains. One model might excel at scientific reasoning, while another is superior at code generation, mathematical logic, or high-level planning. 

Because models have these varying characteristics and complementary skills, manually predicting and hard-coding the ideal combination of models for every query is practically impossible. An optimal agentic framework should be able to analyze a problem and delegate subtasks to the most suitable expert in the pool.

Conducting an orchestra of agents

The RL Conductor is designed to overcome the limitations of rigid, human-designed frameworks. As the name implies, it conducts an orchestra of agents by dividing challenging problems, delegating targeted subtasks, and designing communication topologies for a set of worker LLMs. 

Instead of relying on fixed code or static routing, the Conductor orchestrates these models by generating a customized workflow. For each step in the workflow, the model generates a natural language instruction for a specific aspect of the task, assigns an agent to carry it out, and defines an “access list” that dictates which past subtasks and responses from other agents are included in that agent’s context.

By defining everything in natural language, the Conductor builds flexible workflows tailored to each input. It can construct simple sequential chains, parallel tree structures, or even recursive loops depending on the problem’s demands. 
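To make the “access list” idea concrete, here is a minimal sketch of how one such workflow step might be represented. The field names and model identifiers are illustrative assumptions, not Sakana’s published schema:

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowStep:
    """One step in a Conductor-style workflow (hypothetical schema)."""
    instruction: str   # natural-language description of the subtask
    agent: str         # which worker LLM carries it out
    # Indices of earlier steps whose outputs are visible in this agent's context.
    access_list: list[int] = field(default_factory=list)

# A simple sequential chain: a planner drafts, then a coder implements
# with the planner's output included in its context.
plan = [
    WorkflowStep("Outline an approach to the coding problem.", "gemini-2.5-pro"),
    WorkflowStep("Implement the outlined approach.", "gpt-5", access_list=[0]),
]
```

Parallel trees or recursive loops fall out of the same structure: steps with disjoint access lists can run concurrently, and a step may reference any subset of prior outputs.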

RL Conductor (source: Sakana AI)

Importantly, the model learns these strategies not by human design but through reinforcement learning (RL) and reward maximization. During training, the model is given a task, a pool of workers, and a reward signal based on whether its answer and output format are correct.

Through a simple trial-and-error RL algorithm, the model organically discovers which combinations of instructions and communication structures yield the highest reward. As a result, it automatically adopts advanced orchestration strategies such as targeted prompt engineering, iterative refinement, and meta-prompt optimization. 

The model learns to dynamically adjust its strategies and leverage the distinct strengths of its worker agents without any human developer having to hard-code the process.
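The training signal described above can be sketched as a simple reward function. The exact weighting is an assumption; the article only specifies that answer correctness and output format both feed the reward:

```python
def conductor_reward(answer: str, expected: str, well_formatted: bool) -> float:
    """Hypothetical reward sketch: full credit for a correct final answer,
    plus a small bonus when the output format is valid."""
    correct = answer.strip() == expected.strip()
    return (1.0 if correct else 0.0) + (0.1 if well_formatted else 0.0)

print(conductor_reward("42", "42", True))   # → 1.1
print(conductor_reward("41", "42", False))  # → 0.0
```

During RL training, workflows whose final answers score higher under a signal like this get reinforced, which is how orchestration strategies emerge without hand-coding.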

Conductor in action

To test RL Conductor in action, the researchers fine-tuned the 7-billion parameter Qwen2.5-7B using the framework. During training, the Conductor was tasked with designing agentic workflows of up to five steps. It was given access to a worker pool containing seven different models: three closed-source giants (Gemini 2.5 Pro, Claude-Sonnet-4, and GPT-5) and four open-source models (including DeepSeek-R1-Distill-Qwen-32B, Gemma3-27B, and Qwen3-32B).

The team evaluated the Conductor across a variety of highly challenging benchmarks, comparing it against individual frontier models acting alone, self-reflection agents prompted iteratively to improve their own answers, and state-of-the-art multi-agent routing frameworks like MASRouter, Mixture-of-Agents (MoA), RouterDC, and Smoothie. The small 7B Conductor set new benchmarks across the board. It achieved an average score of 77.27% across all tasks, hitting 93.3% on the AIME25 math benchmark, 87.5% on GPQA-Diamond, and 83.93% on LiveCodeBench, according to the researchers.

Remarkably, it achieved these marks while remaining highly efficient. While baseline models like MoA burned through 11,203 tokens per question, the Conductor used an average of just 1,820 tokens, taking an average of only three steps per workflow.

RL Conductor outperforms other baselines on key industry benchmarks (source: arXiv)

A closer look at the experimental details shows exactly why the framework is so effective. The Conductor automatically learned to measure task difficulty. For simple factual recall questions, it often solved the problem in a single step or used a basic two-agent setup. However, for complex coding problems, it built extensive workflows involving up to four agents with dedicated planning, implementation, and verification phases.

The Conductor also learned that frontier models have different strengths. To achieve record scores on coding benchmarks, the Conductor frequently assigned Gemini 2.5 Pro and Claude Sonnet 4 to act as high-level planners, and only brought in GPT-5 at the very end to write the final optimized code. In a particularly clever display of adaptability, the Conductor would sometimes completely abdicate its own role, handing the entire planning process over to Gemini 2.5 Pro and allowing it to dictate the subtasks for the rest of the pool.

Beyond math and coding benchmarks, Sakana AI is already putting the underlying architecture to work in front-office utility. “We have been using our Fugu models based on the Conductor technology internally for various practical enterprise applications: software development, deep research, strategy development, and even visual tasks like slide generation,” Tang said.

Bringing orchestration to the enterprise: Sakana Fugu

While the 7B model described in the research paper was an exploratory blueprint and is not publicly available, Sakana AI has productized the Conductor framework into its flagship commercial AI product, Sakana Fugu. Now in its beta phase, Fugu serves as a multi-agent orchestration system accessible through a standard OpenAI-compatible API.

Tang noted Fugu targets “the large market of industries where AI adoption has yet to bring large productivity gains due to the generalization limitations of current hard-coded pipelines, such as finance and defense.”

For enterprise developers, this allows seamless integration into existing applications without the headache of managing multiple API keys or manually routing tasks across different vendors. Behind the API interface, Fugu automates complex collaboration topologies and role assignments across a pool of models. To support varying business needs, Sakana released two variants: Fugu Mini, built for low-latency operations, and Fugu Ultra, designed for maximum performance on demanding workloads.
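Because Fugu exposes an OpenAI-compatible API, integration largely amounts to pointing an existing client at a different base URL. The snippet below just assembles the standard chat-completions payload; the model name and endpoint path are placeholders, not Sakana’s actual identifiers:

```python
import json

def build_chat_request(model: str, messages: list[dict]) -> dict:
    """Assemble an OpenAI-compatible chat-completions request body."""
    return {"model": model, "messages": messages}

payload = build_chat_request(
    "fugu-mini",  # hypothetical model id, used here only for illustration
    [{"role": "user", "content": "Route this task to the best worker."}],
)
body = json.dumps(payload)
# A real client would POST `body` (with an API key header) to the
# provider's /v1/chat/completions endpoint; the orchestration across
# worker models happens server-side, behind the single API.
```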

Addressing governance concerns around autonomous agents spinning up invisible workflows, Tang pointed out that the interpretability risks are functionally similar to the hidden reasoning traces of current top-tier closed APIs, and the system is managed with established guardrails to minimize hallucinations. 

For enterprise architects weighing when to deploy RL-orchestration versus traditional routing, the decision often comes down to engineering resources. “We believe the absolute sweet spot comes whenever users and their teams feel they are spending a disproportionate amount of time guiding their underlying agents,” Tang said. However, he cautioned that the framework isn’t necessary for everything, noting that “it’s hard to beat the economic proposition of a local model running directly on the user’s machine for simple queries.”

As the diversity of specialized open- and closed-source AI models continues to grow, static hardcoded pipelines will inevitably become obsolete. Looking ahead, this dynamic orchestration will likely extend beyond text and code environments. “There is indeed a large potential to fill this gap with cross-modal Conductor frameworks becoming the foundation for more autonomous, self-coordinating physical AI systems,” Tang said.

Tech

Screen Time Concerns Lead to Backlash Against Edtech Vetting Process

Amid the increasing concern about screen time in school comes a new culprit: the vetting process for school software.

A growing group of parents and teachers has spent the last few years fighting against cellphones in the classroom, with some extending that to all digital devices. But the school-issued laptops, and the software accompanying them, have been left largely unscathed.

“A lot of the issues with personal devices can move to the district-issued devices,” said Kim Whitman, co-lead for Smartphone Free Childhood US, in a previous interview with EdSurge. Whitman explained that when students do not have cellphones, they can still message with friends on their Chromebooks, or through tools like Google Docs. “There are definitely issues with school-issued devices as well.”

Proposals in three states – Rhode Island, Utah and Vermont – are now tackling these concerns.

Better Vetting Processes

At the start of this year’s legislative session, all three states concurrently proposed reviewing the vetting process of education software.

In most districts, school boards, IT personnel and administrators choose vendors, often relying on the vendors’ own data to prove the products’ safety and efficacy.

“There is nobody right now that is confirming these products are safe, effective and legal,” Whitman said in a previous interview. “It should not fall on the district’s IT director; it would be impossible for them to do it. And the companies should not be tasked with doing it — that would be like nicotine companies vetting their own cigarettes.”

The proposed legislation is looking to change that.

Vermont

Bill: An act relating to educational technology products

Status: Passed by the House March 27; currently before the Senate Committee on Education

This bill proposes to require that providers of educational technology products register annually with the state. It also requires the secretary of state to create a certification standard and review process for these products before schools can use them.

Any provider of an educational technology product — specifically student-facing tools that are used for teaching and learning in schools — must register with the secretary of state, pay a registration fee of $100 and provide its most up-to-date terms and conditions and privacy policy.

The secretary of state would work with the Vermont Agency of Education to review registrations.

Criteria for certification include:

  • The product’s compliance with state curriculum standards
  • Advantages of using it versus non-digital methods
  • Whether it was explicitly designed for educational purposes
  • Design features, including artificial intelligence, geotracking and targeted advertising

The initial bill proposed that any edtech provider that continued to operate without state certification could be fined $50 a day, up to $10,000, but that language was struck from the final version passed by the House.

If passed by the Senate, the bill would go into effect July 1, 2026. By November 2027, the Agency of Education would submit a written report on which state entities should be involved in the edtech certification and any other recommendations for certification.

Utah

Bill: Software in Education

Status: Signed into law on March 18

The bill requires the Utah Board of Education to study the use of software and digital practices in public schools, review best practices and provide guidance for responsible use.

The state also passed a Classroom Technology Amendments bill tackling screen time at every grade level, banning it entirely from kindergarten through third grade, except for computer science and assessments. Middle school students must have their parents opt in before taking devices home, while high school students will be allowed to bring devices home unless their parents opt out.

“We’re not anti-technology,” Rep. Ariel Defay (R-UT) said in a statement. She is a sponsor of the Classroom Technology Amendments bill. “We just want to ensure that education technology is used intentionally and actually helps students to learn.”

Rhode Island

Bill: The Safe School Technology Act of 2026

Status: Passed by the House April 14; currently in the Senate Education Committee

This bill, proposed by three Rhode Island representatives who are also mothers, is part of a six-bill package focused on protecting children from social media, artificial intelligence and digital platforms.

The Safe School Technology Act would take effect this August if approved, banning software providers from activating or accessing any audio or video functions on a device outside of school-related activities. It also bans the use of location data.

The initial bill lists a litany of concerns that the “lack of regulation” caused, including increased screen time, and “marketing commercial products as educational with no accountability; children being given devices without proof of developmental appropriateness and parents being excluded from decisions about their child’s digital exposure.”

But the main concern, argued by state Representative June Speakman (D-RI), who sponsored the bill, is that a majority of school districts’ technology policies do not have limits on tracking student devices. She added that roughly two-thirds of districts also do not limit school-issued devices’ ability to activate audio and video.

“Passing this bill will provide clear, consistent protection across all schools in the state that assures students and their families that their devices cannot be used to invade their privacy or track their activities,” Speakman said in a statement.

“They deserve to feel confident that their privacy is protected when they use technology that is required for school,” she added.

Tech Pushback

Several technology proponents have pushed back.

The Software and Information Industry Association spoke out against the Rhode Island bill in March, saying that if the bill passed, it would make the state one of the most restrictive in the nation.

In an open letter to Joseph McNamara, chair of the Rhode Island House Education Committee, Abigail Wilson, director of state policy at the Software and Information Industry Association, said the bill “proposes an overly restrictive regulatory framework that will severely disrupt classroom instruction, impose massive unfunded administrative burdens on local schools, and deprive Rhode Island students of critical, evidence-based learning tools.”

Keith Krueger, CEO of the nonprofit Consortium for School Networking, told NBC News that the proposed legislation “does keep me up at night.”

“I think some well-intentioned policymakers … are rushing so quickly that they haven’t thought through the implications,” he said.

Tech

A billion-dollar bet on floating data centers pushes AI infrastructure into the open ocean despite harsh realities of waves and corrosion

Published

on


  • Panthalassa’s valuation now sits near $1 billion after fresh funding
  • Peter Thiel led a $140 million investment round into the ocean tech company
  • Investors see ocean energy as a vast, untapped computing resource

A US-based ocean technology company, Panthalassa, is advancing its plan to relocate data processing into open waters, backed by fresh funding that places its valuation near $1 billion.

The start-up has spent ten years developing wave energy technology and is now backed by PayPal co-founder and early Facebook investor Peter Thiel, who led a $140 million investment round into the company.

Tech

The Lexus TZ Is A Quieter, Upscale Take On The Highlander EV

Earlier this year, Toyota revealed its first three-row electric SUV in the Highlander EV. Now, it’s Lexus’ turn to put its spin on this segment with the upcoming TZ, which boasts a more luxurious design, seating for up to six and a top range of around 300 miles.

Like its cousins the Highlander EV and Subaru Getaway, the TZ is based on Toyota’s e-TNGA platform and will be available with two battery sizes (76.9kWh or 95.8kWh) and an upgraded Direct4 AWD system. While Lexus has yet to provide specific info about power, based on the output available from other models sharing this platform, we’re expecting around 400 horsepower (or more) depending on the exact configuration. It’s a similar situation when it comes to range, because while we’re still waiting on an official figure from the EPA, Lexus estimates a TZ with the larger 95.8kWh pack will go for around 300 miles between charges. 

Meanwhile, at 200.8 inches, the TZ is actually slightly longer than the Highlander EV, while sporting a similarly brawny exterior with lots of hard lines and Lexus’ signature spindle-shaped grille. Other features include Dynamic Rear Steering (up to four degrees) that should provide better maneuverability at low speeds and increased stability at high speeds. Unfortunately, the TZ’s 400-volt architecture doesn’t look very impressive, with charging speeds that top out at just 150kW, which should deliver 10 to 80 percent charging times of around 35 minutes. Thankfully, the car does come with a native NACS port and, for times when you need to charge your other gadgets, Lexus is making a dedicated accessory adapter that plugs into an AC inlet in the cargo area.

On the inside, the TZ’s infotainment is centered around a 14-inch main display with a secondary 12.3-inch digital instrument cluster for the driver. Lexus says the TZ will also support a Smart Digital Key+ that allows you to unlock the car with your phone or smartwatch, and will continue to work even if the gadget runs out of battery. Also, aside from the base infotainment system, the TZ supports both Android Auto and Apple CarPlay.

The TZ’s platform and exterior are quite similar to the Highlander EV and Subaru Getaway, so Lexus seems to have really leaned into the EV’s interior as a way to distinguish itself from its rivals. The company claims the TZ has the quietest cabin of any of its SUVs (both EV and ICE) and that quest for muted peace and relaxation seems to have been a core design goal for the vehicle, as Lexus uses the word quiet eight separate times in its official press release. The TZ also features a number of sustainable materials scattered throughout the car including forged bamboo panels, a plant-based UltraSuede and recycled aluminum for components like its roof rails and tonneau cover frame.

Unfortunately, we’re still waiting for official info regarding the TZ’s pricing, availability, configurations and trim levels, which Lexus plans to release closer to the EV’s on-sale date sometime later this year.

Tech

Allen Institute for AI launches big computing cluster for $152M project backed by Nvidia and NSF

Workers install equipment in the data center housing the new Ai2 computing cluster funded by Nvidia and NSF. (Ai2 Photo)

The Allen Institute for AI says it has brought online and started using a powerful new computing system funded by Nvidia and the National Science Foundation, the first big milestone in a $152 million project to build open AI models for scientific research.

Ai2, as the Seattle-based institute is known, was awarded the funding last August as part of the White House AI Action Plan. The project, called Open Multimodal AI Infrastructure for Science, or OMAI, aims to build AI models for fields such as materials science, biology, and energy.

Noah Smith, Ai2 senior research director and principal investigator on the project, called it a “critical step” and said in a statement that the new infrastructure represents a national investment in keeping advanced AI development accessible to the broader research community.

The announcement Thursday comes as Ai2 works to regain its footing after losing its CEO and some of its top researchers to Microsoft in March. Interim CEO Peter Clark outlined Ai2’s priorities this week, saying it’s committed to open models and longer-term research, along with applied AI efforts in areas such as scientific discovery and environmental science. 

Unlike most large-scale AI projects, Ai2 releases the full code, data, and training methods behind its models, allowing other researchers to reproduce and build on the work. 

The new system, located outside of Austin, runs on Nvidia’s Blackwell Ultra chips and is managed by Cirrascale Cloud Services.

Ai2 said research supported by the project has already produced upgrades to its Molmo and OLMo model families, including a new multimodal model capable of video understanding and a more efficient language model architecture. 

The institute said it is now focused on building unified models that handle multiple types of data, developing AI agents, and working more closely with scientific communities to ensure the models are useful for real-world research. 

Tech

Samsung’s big One UI 8.5 update is rolling out now

Samsung has started rolling out One UI 8.5, and it’s a pretty big one.

The update is landing on more than 40 Galaxy smartphones and tablets. It brings a refreshed interface and a new wave of AI features to devices that, in some cases, are years old.

The rollout began in Korea today, with other regions expected to follow from mid-May. As usual with Samsung updates, it won’t arrive everywhere at once. Even though the list of supported devices is already extensive, some users will still have to wait.

At the front of the queue are Samsung’s newest flagships, including the Galaxy S25, S25+, and S25 Ultra, along with the Galaxy S25 FE and S25 Edge. But the update doesn’t stop there; Samsung is also pushing One UI 8.5 to older generations like the Galaxy S24 and S23 series. This includes their Ultra and FE models.

Foldables are included too, with support for the Galaxy Z Fold 5 through Fold 7 and Galaxy Z Flip 5 through Flip 7. Additionally, a wide range of tablets including the Galaxy Tab S9, S10, and S11 series will be supported. Even mid-range and entry-level devices like the Galaxy A05 are on the list.

That breadth is the main story here. Rather than limiting new software to premium devices, Samsung is once again pushing its latest One UI version across almost its entire ecosystem.

So what’s actually new? One UI 8.5 brings a visual refresh in parts of the interface, including updated menus and navigation elements. But the bigger focus is AI. Samsung is expanding tools like Photo Assist, which helps refine and adjust AI-generated images, alongside improvements to its Bixby assistant, which is becoming more context-aware and responsive.

It’s not a radical redesign, but it does continue Samsung’s steady shift toward AI-driven features across both hardware tiers and older devices.

As with most major Android updates, availability will vary depending on region and model. It may take months before every eligible Galaxy device receives it. Still, for a rollout that includes everything from flagship phones to budget A-series models, this is one of Samsung’s broader software updates in recent memory. In effect, it is a free upgrade sitting in the settings menu for millions of users.

Tech

Microsoft Issues Warning About Linux ‘Copy Fail’ Vulnerability

joshuark shares a report from Linux Magazine: Microsoft has issued a warning that a vulnerability with a CVSS score of 7.8 has been found in the Linux kernel. The vulnerability in question is tagged CVE-2026-31431 and, according to the Cybersecurity and Infrastructure Security Agency (CISA), “This Linux Kernel Incorrect Resource Transfer Between Spheres Vulnerability is a frequent attack vector for malicious cyber actors and poses significant risks to the federal enterprise.”

The distributions affected are Ubuntu, Red Hat, SUSE, Debian, Fedora, Arch Linux, and Amazon Linux. This could also affect any distribution based on those in the list, which means pretty much every Linux distro that isn’t independent.

The flaw is found in the Linux kernel cryptographic subsystem’s algif_aead module of AF_ALG. The problem is that a particular optimization has led to the kernel reusing the source memory as the destination during cryptographic operations. What this means is that attackers can take advantage of interactions between the AF_ALG socket interface and a splice() system call.

Until patches are released, Microsoft is advising that the affected crypto feature should be disabled, or AF_ALG socket creation should be blocked. The vulnerability is also known as “Copy Fail,” which has been shared on Slashdot and detailed in a technical report. The vulnerability affects almost every version of the Linux OS and is now being exploited in the wild. U.S. cybersecurity agency CISA has ordered all civilian federal agencies to patch any affected systems by May 15.
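Until a patch lands, administrators may want to check whether AF_ALG sockets can still be created on a host. The probe below is a minimal sketch assuming Python 3; the actual mitigation is typically done by blacklisting the relevant kernel modules, and the details vary by distribution.

```python
import socket

def af_alg_socket_available() -> bool:
    """Probe whether AF_ALG sockets can be created on this host.
    Returns False on non-Linux systems or where creation is blocked."""
    if not hasattr(socket, "AF_ALG"):
        return False  # constant only exists on Linux builds of Python
    try:
        s = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET)
    except OSError:
        return False  # creation blocked (or kernel support disabled)
    s.close()
    return True

print("AF_ALG sockets:", "available" if af_alg_socket_available() else "blocked/unavailable")
```

A result of "available" does not by itself prove the host is vulnerable, only that the attack surface described above is reachable.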


Tech

This State Has A Grace Period For Expired Tags (But It’s Not Long)

According to CarFax, the start of 2025 saw an estimated 17 million vehicles with expired tags on the road. So, statistically speaking, driving on old tags is fairly common. Luckily, some states give drivers a small grace period to get their tags sorted out before the penalties start. Texas is one of those states.

Texas state law provides a grace period of five working days after expiration, during which it’s still technically legal to drive the car. Because Saturdays, Sundays, and federal holidays don’t count, a driver might be able to stretch that time to seven or eight calendar days. After that, though, the buffer disappears, and law enforcement can start issuing citations right away.
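Counting “working days” rather than calendar days is what stretches the five-day buffer. A quick sketch of that arithmetic in Python, skipping weekends but not modeling federal holidays (an assumption, so real deadlines can land a day or two later):

```python
from datetime import date, timedelta

def grace_period_end(expiration: date, working_days: int = 5) -> date:
    """Last day of the grace period: count forward `working_days`
    days from the expiration date, skipping Saturdays and Sundays.
    Federal holidays are not modeled here (a simplification)."""
    d = expiration
    remaining = working_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return d
```

For tags expiring on Friday, Jan 31, 2025, the five working days run Monday through Friday of the next week, so the grace period ends Feb 7, an eight-calendar-day stretch.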

After the grace period ends, expired registration can cost up to $200, and potentially even more in some counties. Drivers can also get hit with an additional 20% penalty on their registration renewal cost if they received a ticket before renewing. Texas isn’t the only state with some wiggle room here; Florida’s rules on expired registrations, for example, mean that drivers can only be ticketed for an expired registration at the end of their birth month.

Advertisement

How to avoid a penalty for driving with expired tags in Texas

Just because you got a citation doesn’t mean you have to be stuck with it. In Texas, drivers have certain avenues to reduce their penalties and clean up their driving record. For instance, judges can dismiss a driver’s charges if they renew within 20 working days of being cited, as long as it’s before their first court appearance. If this happens, the only thing a driver will be on the hook for is a small administrative fee of $20.

Charges can also be dismissed if a county tax office was closed for an extended period and the registration has not expired for more than 30 working days. This can sometimes be considered a valid legal defense, but it does not guarantee that your charges will be dismissed. A judge will still have to make that call.

Advertisement

If you’re looking to avoid the hassle entirely, the easiest way is to renew your registration on time. As long as you don’t have a citation, Texas lets you renew online from three months before expiration until a full year after it. Once you renew, you’ll get a temporary receipt that lets you drive for up to 31 days while you wait for the new sticker to arrive.



Advertisement


Tech

Engineer Builds Real-Life Version of Rocky, the Conversational Alien Robot From Project Hail Mary

DIY Custom-Built Rocky Project Hail Mary Robot
Project Hail Mary introduced fans to an unforgettable alien named Rocky. Many who finished the book wanted more time with the character and his quirky way of speaking. One maker decided to satisfy that craving by constructing a physical robot that captures the essence of Rocky in every joint and word.



The engineer behind Leviathan Engineering worked on the project for months to bring Rocky to life. He began with digital models of the character purchased from 3D Totems, a store known for its meticulous, accurate 3D models. He then used software such as Fusion 360 and Tinkercad to make sure the parts were not only printable, but also strong enough to endure some hard treatment.

Months later, the printed components came together into the body of a four-legged creature with arms that appear to lunge out at you in all the right ways. Ten metal-geared servos drive the movements, cleverly placed to let the robot’s expressiveness show: each shoulder gets an additional servo for arm swings, and each knee gets one for low crouches. Movement, gestures, and body language pull the alien’s exuberant personality straight out of the novel. He can even offer you a full fist bump or make a wild arm gesture, just like on the page.

The robot is powered by a Raspberry Pi 5 connected to a PCA9685 HAT, the driver board that handles all of the servo motion. Power comes from an external supply, because the motors draw plenty of current during lively movements. Software brings everything to life: speech recognition runs locally through Vosk, so you can issue commands without internet lag, and Piper supplies the robot’s voice, complete with the staccato delivery fans love about Rocky. For conversation, it relies on Google’s Gemini model, which decides both what the robot says and which gestures it makes based on the situation.
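The PCA9685 drives hobby servos by generating PWM pulses: at a typical 50 Hz refresh, a pulse somewhere between roughly 500 and 2500 µs maps to a servo's 0–180° range, quantized into the chip's 12-bit (4096-tick) period. A sketch of that angle-to-tick conversion in Python (the pulse-width endpoints are assumptions; real servos vary, and the build's actual calibration isn't published):

```python
def angle_to_ticks(angle_deg: float, freq_hz: float = 50.0,
                   min_us: float = 500.0, max_us: float = 2500.0) -> int:
    """Map a servo angle (0-180 degrees) to a PCA9685 12-bit tick count.

    The PCA9685 divides each PWM period into 4096 ticks, so the pulse
    width in microseconds is converted to a fraction of the period and
    then scaled to the 0-4095 register range.
    """
    period_us = 1_000_000.0 / freq_hz            # 20,000 us at 50 Hz
    pulse_us = min_us + (max_us - min_us) * (angle_deg / 180.0)
    return round(pulse_us / period_us * 4096)
```

With these assumed endpoints, 90° lands at a 1500 µs pulse, which is 307 ticks of the 20 ms frame.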

The maker wrote all the code with assistance from Claude through its command-line interface. No fixed animation scripts exist; instead, the language model chooses movements based on context through a process called tool calling. Ask for a fist bump and the arm extends while the robot says something like “fist bump yes much happy.”
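The tool-calling pattern described here amounts to exposing a registry of named gesture functions that the model can select from, rather than hard-coding animation sequences. A stripped-down sketch of that dispatch layer (the gesture names and return strings are illustrative, not the project's actual API):

```python
GESTURES = {}

def gesture(fn):
    """Decorator: register a function as a tool the model may call."""
    GESTURES[fn.__name__] = fn
    return fn

@gesture
def fist_bump():
    return "arm extended: fist bump yes much happy"

@gesture
def wild_wave():
    return "both arms swinging overhead"

def dispatch(tool_call: dict) -> str:
    """Run the gesture the model selected, e.g. {'name': 'fist_bump'}."""
    name = tool_call.get("name")
    if name not in GESTURES:
        raise ValueError(f"unknown gesture: {name!r}")
    return GESTURES[name]()
```

The model's job is only to emit a tool call with a valid name; the registry keeps the servo-driving code in ordinary, testable functions.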

Of course, like any good maker project, this one hit some challenges along the way. The engineer experimented with pulleys and linear actuators before settling on servos, which gave him more control over the whole mess. Making sure the printed joints didn’t break under load took some trial and error before he got it right. Then there was the enjoyable task of putting it all together, a little hot glue here and a little super glue there to keep everything from falling apart. Wires route neatly inside the body thanks to extension cables. The final assembly stands about the size of a small tabletop model yet moves with enough grace to feel like a living creature from the pages of the novel.

Tech

DIY Electrolysis Machine Removes Hair Permanently

If you talk to the FDA, there’s only one permanent method of hair removal—electrolysis. This involves sticking a needle into a hair follicle, getting it very hot or running a current through it, and then letting heat and/or the lye generated kill the root of the hair dead. Normally, you’d pay someone with a commercial machine to do this for you at great expense. Or, you could do it yourself with a home-built machine, as [n3tcat] did.

Based on the available information out in the wild, [n3tcat] decided to build a galvanic electrolysis machine. This specifically passes current through a needle in the hair follicle to generate lye at the hair bulb, which kills it. The amount of lye generated depends on the amount of current and the time over which it is applied. More lye is more likely to kill a follicle permanently, though there are limits with regard to avoiding scarring, other skin damage, and excessive pain.
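The current-times-time relationship is just Faraday's law of electrolysis: the hydroxide produced at the needle is proportional to the total charge passed. An idealized back-of-the-envelope version, assuming 100% current efficiency and one mole of hydroxide per mole of electrons (real skin chemistry is messier):

```python
FARADAY = 96485.33  # coulombs per mole of electrons

def lye_moles(current_ma: float, seconds: float) -> float:
    """Idealized moles of hydroxide (lye) generated at the cathode:
    total charge in coulombs divided by the Faraday constant."""
    charge_coulombs = (current_ma / 1000.0) * seconds
    return charge_coulombs / FARADAY
```

At a typical galvanic setting of around 0.5 mA for 60 seconds, that works out to roughly 3×10⁻⁷ mol of lye, a tiny quantity, which is why the process has to be repeated follicle by follicle.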

[n3tcat]’s guide explains the basic theory behind galvanic electrolysis, as well as how the rig was built. An early attempt simply involved hooking a 12-volt car battery up to a standard electrolysis needle stuck in a hair, with the other electrode being an aluminium can held by the person being treated. The fun part was that the current varied depending on how firmly the person gripped the can and how much contact they made.

After a few successful hair removals this way, [n3tcat] decided to build a better rig. An RP2040 microcontroller was enlisted to run the show, powered by a 3.7-volt lithium rechargeable battery. An OLED screen and a rotary encoder serve as the interface, while a foot pedal fires off the current. A boost converter pushes the battery voltage up to the vicinity of 15 volts for delivery to the needle, set up to avoid excessive current delivery for safety.

For accurate, controlled treatment, a DAC paired with an LM358 op-amp feeds a MOSFET that regulates the current passed to the needle, with the RP2040 monitoring the current level via a dedicated ADC. The needle itself got a 3D-printed pen-like handle for better ergonomics, easing the process of slotting the needle into a hair follicle. Everything was then assembled on a cute PCB and wrapped up in a nice 3D-printed housing. The files are available for the curious.
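The DAC-plus-ADC pairing forms a classic closed loop: the microcontroller reads the actual needle current and nudges the DAC output until it matches the setpoint. A simplified proportional-control step (the gain, 12-bit register width, and update scheme are assumptions for illustration; the build's actual firmware isn't reproduced here):

```python
DAC_MAX = 4095  # assumed 12-bit DAC code range

def control_step(setpoint_ma: float, measured_ma: float,
                 dac_code: int, gain: float = 8.0) -> int:
    """One proportional update: move the DAC code toward the target
    needle current, clamped to the valid register range."""
    error = setpoint_ma - measured_ma          # positive = too little current
    new_code = dac_code + round(gain * error)  # proportional correction
    return max(0, min(DAC_MAX, new_code))      # never exceed the DAC range
```

Calling this on every ADC sample steps the current toward the setpoint, and the clamp guarantees the drive stage can never be commanded beyond its safe range, mirroring the excess-current protection described above.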

Electrolysis can cost many thousands of dollars, depending on how much hair you hope to remove, so it’s easy to see the appeal of a rig that lets you do it at home. It’s just one of those things where you have to take the proper precautions to avoid unduly hurting yourself. Stay safe out there, hackers!

