Tech

Anthropic’s Claude Code Security is available now after finding 500+ vulnerabilities: how security leaders should respond

Anthropic pointed its most advanced AI model, Claude Opus 4.6, at production open-source codebases and found a plethora of security holes: more than 500 high-severity vulnerabilities that had survived decades of expert review and millions of hours of fuzzing, with each candidate vetted through internal and external security review before disclosure.

Fifteen days later, the company productized the capability and launched Claude Code Security.

Security directors responsible for seven-figure vulnerability management stacks should expect a common question from their boards in the next review cycle. VentureBeat anticipates the emails and conversations will start with, “How do we add reasoning-based scanning before attackers get there first?”, because as Anthropic’s review found, simply pointing an AI model at exposed code can be enough to identify — and in the case of malicious actors, exploit — security lapses in production code.

The answer matters more than the number, and it is primarily structural: how your tooling and processes allocate work between pattern-based scanners and reasoning-based analysis. CodeQL and the tools built on it match code against known patterns.

Claude Code Security, which Anthropic launched February 20 as a limited research preview, reasons about code the way a human security researcher would. It follows how data moves through an application and catches flaws in business logic and access control that no rule set covers.

The board conversation security leaders need to have this week

Five hundred newly discovered zero-days is less a scare statistic than a standing budget justification for rethinking how you fund code security.

The reasoning capability Claude Code Security represents, and its inevitable competitors, need to drive the procurement conversation. Static application security testing (SAST) catches known vulnerability classes. Reasoning-based scanners find what pattern-matching was never designed to detect. Both have a role.

Anthropic published the zero-day research on February 5. Fifteen days later, it shipped the product: the same model and capabilities, now available to Enterprise and Team customers.

What Claude does that CodeQL couldn’t

GitHub has offered CodeQL-based scanning through Advanced Security for years, and added Copilot Autofix in August 2024 to generate LLM-suggested fixes for alerts. Security teams rely on it. But the detection boundary is the CodeQL rule set, and everything outside that boundary stays invisible.

Claude Code Security extends that boundary by generating and testing its own hypotheses about how data and control flow through an application, including cases that no existing rule set describes. CodeQL solves the problem it was built to solve: data-flow analysis within predefined queries. It tells you whether tainted input reaches a dangerous function.
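
The boundary is easy to see in miniature. The sketch below is a toy pattern-based checker, not CodeQL: it flags only the flows it was explicitly told about, a hypothetical source ("input") reaching a hypothetical sink ("eval"), and everything outside those two lists stays invisible to it.

```python
import ast

# Toy intraprocedural taint check in the spirit of pattern-based SAST.
# SOURCES and SINKS are the "rule set": anything not listed is never flagged.
SOURCES = {"input"}            # hypothetical taint sources
SINKS = {"eval", "exec"}       # hypothetical dangerous sinks

def find_tainted_sinks(source_code):
    tree = ast.parse(source_code)
    tainted = set()            # names assigned from a known source
    findings = []
    for node in ast.walk(tree):
        # taint introduction: x = input(...)
        if isinstance(node, ast.Assign) and isinstance(node.value, ast.Call):
            fn = node.value.func
            if isinstance(fn, ast.Name) and fn.id in SOURCES:
                for target in node.targets:
                    if isinstance(target, ast.Name):
                        tainted.add(target.id)
        # sink check: eval(x) with a tainted x
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name) \
                and node.func.id in SINKS:
            for arg in node.args:
                if isinstance(arg, ast.Name) and arg.id in tainted:
                    findings.append((node.func.id, arg.id, node.lineno))
    return findings
```

Run it on a snippet where a variable flows from input() to eval() and the flow is reported; rename the source to something the rule set doesn't list and the finding disappears. That blind spot is exactly what reasoning-based analysis is meant to cover.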

CodeQL is not designed to autonomously read a project’s commit history, infer an incomplete patch, trace that logic into another file, and then assemble a working proof-of-concept exploit end to end. Claude did exactly that on GhostScript, OpenSC, and CGIF, each time using a different reasoning strategy.

“The real shift is from pattern-matching to hypothesis generation,” said Merritt Baer, CSO at Enkrypt AI, advisor to Andesite and AppOmni, and former Deputy CISO at AWS, in an exclusive interview with VentureBeat. “That’s a step-function increase in discovery power, and it demands equally strong human and technical controls.”

Three proof points from Anthropic’s published methodology show where pattern-matching ends and hypothesis generation begins.

Commit history analysis across files. GhostScript is a widely deployed utility for processing PostScript and PDF files. Fuzzing turned up nothing, and neither did manual analysis. Then Claude pulled the Git commit history, found a patch that added stack bounds checking for font handling in gstype1.c, and reversed the logic: if the fix was needed there, every other call to that function without the fix was still vulnerable. In gdevpsfx.c, a completely different file, the call to the same function lacked the bounds checking patched elsewhere. Claude built a working proof-of-concept crash. No CodeQL rule describes that bug today. The maintainers have since patched it.
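
The shape of that inference can be sketched in a few lines. This is a hypothetical reconstruction with invented function and guard names, not GhostScript's actual code: given that a patch added a guard before one call site, search for other call sites of the same function that lack the guard.

```python
import re

# Hypothetical sketch of "reverse the patch logic": if a fix added
# check_stack_bounds() before calls to decode_charstring() in one file,
# any other call site without the guard nearby is a candidate bug.
# Both identifiers are invented stand-ins, not GhostScript symbols.
GUARD = re.compile(r"check_stack_bounds\s*\(")
CALL = re.compile(r"decode_charstring\s*\(")

def unguarded_call_sites(files, window=3):
    """files: {filename: source text}. Flags calls with no guard in the
    preceding `window` lines."""
    findings = []
    for name, text in files.items():
        lines = text.splitlines()
        for i, line in enumerate(lines):
            if CALL.search(line) and not GUARD.search(
                    "\n".join(lines[max(0, i - window):i])):
                findings.append((name, i + 1))
    return findings
```

A real pass would parse the code and walk the Git history (e.g. git log -S) rather than grep a snapshot, but the hypothesis has the same shape.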

Reasoning about preconditions that fuzzers can’t reach. OpenSC processes smart card data. Standard approaches failed here, too, so Claude searched the repository for function calls that are frequently vulnerable and found a location where multiple strcat operations ran in succession without length checking on the output buffer. Fuzzers rarely reached that code path because too many preconditions stood in the way. Claude reasoned about which code fragments looked interesting, constructed a buffer overflow, and proved the vulnerability.
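
The "search for frequently vulnerable call patterns" step admits a crude mechanical sketch: a heuristic that flags back-to-back strcat() calls into the same buffer with no length check in between. The C fragment in the example is invented, not OpenSC's code, and a real analysis would reason about reachable preconditions rather than adjacent lines.

```python
import re

# Heuristic sketch: flag buffers receiving consecutive strcat() calls with
# no strlen/snprintf-style length check in between. Purely illustrative.
STRCAT = re.compile(r"strcat\s*\(\s*(\w+)\s*,")
CHECK = re.compile(r"\b(strlen|strncat|snprintf|sizeof)\b")

def chained_strcats(c_source):
    findings = []
    last_dest = None
    checked = False
    for lineno, line in enumerate(c_source.splitlines(), 1):
        m = STRCAT.search(line)
        if m:
            dest = m.group(1)
            if dest == last_dest and not checked:
                findings.append((dest, lineno))
            last_dest = dest
            checked = False
        elif CHECK.search(line):
            checked = True
    return findings
```

The heuristic is noisy by design; the interesting part of Anthropic's account is that the model then reasoned about whether the flagged path was reachable and constructed an input proving it.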

Algorithm-level edge cases that no coverage metric catches. CGIF is a library for processing GIF files. This vulnerability required understanding how LZW compression builds a dictionary of tokens. CGIF assumed compressed output would always be smaller than uncompressed input, which is almost always true. Claude recognized that if the LZW dictionary filled up and triggered resets, the compressed output could exceed the uncompressed size, overflowing the buffer. Even 100% branch coverage wouldn’t catch this. The flaw demands a particular sequence of operations that exercises an edge case in the compression algorithm itself. Random input generation almost never produces it. Claude did.
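
The broken size assumption can be reproduced with a minimal LZW encoder. This sketch uses fixed 12-bit codes for simplicity, where GIF's LZW uses variable-width codes and explicit reset markers, so it illustrates the principle rather than CGIF's exact behavior: on input with no repetition, each 8-bit byte costs a 12-bit code, and the "compressed" output exceeds the input.

```python
def lzw_compress(data, code_bits=12):
    """Minimal LZW sketch with fixed-width codes; returns output size in bytes."""
    table = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    n_codes = 0
    for byte in data:
        wb = w + bytes([byte])
        if wb in table:
            w = wb                            # extend the current match
        else:
            n_codes += 1                      # emit the code for w
            if next_code < (1 << code_bits):  # grow dictionary until full
                table[wb] = next_code
                next_code += 1
            w = bytes([byte])
    if w:
        n_codes += 1                          # flush the final match
    return (n_codes * code_bits + 7) // 8

raw = bytes(range(256))       # 256 bytes, no repeated two-byte sequence
out_size = lzw_compress(raw)  # 256 codes * 12 bits = 384 bytes, > 256 bytes in
```

A buffer sized at len(input) overflows on exactly this kind of input, which is the assumption CGIF made and the edge case Claude constructed.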

Baer sees something broader in that progression. “The challenge with reasoning isn’t accuracy, it’s agency,” she told VentureBeat. “Once a system can form hypotheses and pursue them, you’ve shifted from a lookup tool to something that can explore your environment in ways that are harder to predict and constrain.”

How Anthropic validated 500+ findings

Anthropic placed Claude inside a sandboxed virtual machine with standard utilities and vulnerability analysis tools. The red team didn’t provide any specialized instructions, custom harnesses, or task-specific prompting. Just the model and the code.

The red team focused on memory corruption vulnerabilities because they’re the easiest to confirm objectively. Crash monitoring and address sanitizers don’t leave room for debate. Claude filtered its own output, deduplicating and reprioritizing before human researchers touched anything. When the confirmed count kept climbing, Anthropic brought in external security professionals to validate findings and write patches.
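
Anthropic hasn't published the dedup logic. A common shape for this step, offered here as an assumption rather than their implementation, is to key each crash on a signature (sanitizer report type plus top stack frames), keep one representative per signature, and rank what survives by severity.

```python
# Hypothetical triage step: deduplicate crash findings by signature, then
# rank survivors by severity. Field names are assumptions, not Anthropic's
# actual schema.
def triage(findings, frames=3):
    buckets = {}
    for f in findings:
        sig = (f["report_type"], tuple(f["stack"][:frames]))
        best = buckets.get(sig)
        if best is None or f["severity"] > best["severity"]:
            buckets[sig] = f
    return sorted(buckets.values(), key=lambda f: -f["severity"])
```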

Every target was an open-source project underpinning enterprise systems and critical infrastructure. Small teams maintain many of them, staffed by volunteers, not security professionals. When a vulnerability sits in one of these projects for a decade, every product that pulls from it inherits the risk.

Anthropic didn’t start with the product launch. The defensive research spans more than a year. The company entered Claude in competitive Capture-the-Flag events where it ranked in the top 3% of PicoCTF globally, solved 19 of 20 challenges in the HackTheBox AI vs Human CTF, and placed 6th out of 9 teams defending live networks against human red team attacks at Western Regional CCDC.

Anthropic also partnered with Pacific Northwest National Laboratory to test Claude against a simulated water treatment plant. PNNL’s researchers estimated that the model completed adversary emulation in three hours. The traditional process takes multiple weeks.

The dual-use question security leaders can’t avoid

The same reasoning that finds a vulnerability can help an attacker exploit one. Frontier Red Team leader Logan Graham acknowledged this directly to Fortune’s Sharon Goldman. He told Fortune the models can now explore codebases autonomously and follow investigative leads faster than a junior security researcher.

Gabby Curtis, Anthropic’s communications lead, told VentureBeat in an exclusive interview the company built Claude Code Security to make defensive capabilities more widely available, “tipping the scales towards defenders.” She was equally direct about the tension: “The same reasoning that helps Claude find and fix a vulnerability could help an attacker exploit it, so we’re being deliberate about how we release this.”

In interviews with more than 40 CISOs across industries, VentureBeat found that formal governance frameworks for reasoning-based scanning tools are the exception, not the norm. The most common explanation was that the area seemed so nascent that many CISOs didn't expect the capability to arrive this early in 2026.

The question every security director has to answer before deploying this: if I give my team a tool that finds zero-days through reasoning, have I unintentionally expanded my internal threat surface?

“You didn’t weaponize your internal surface, you revealed it,” Baer told VentureBeat. “These tools can be helpful, but they also may surface latent risk faster and more scalably. The same tool that finds zero-days for defense can expose gaps in your threat model. Keep in mind that most intrusions don’t come from zero-days, they come from misconfigurations.”

“In addition to the access and attack path risk, there is IP risk,” she said. “Not just exfiltration, but transformation. Reasoning models can internalize and re-express proprietary insights in ways that blur the line between use and leakage.”

The release is deliberately constrained. Enterprise and Team customers only, through a limited research preview. Open-source maintainers apply for free expedited access. Findings go through multi-stage self-verification before reaching an analyst, with severity ratings and confidence scores attached. Every patch requires human approval.

Anthropic also built detection into the model itself. In a blog post detailing the safeguards, the company described deploying probes that measure activations within the model as it generates responses, with new cyber-specific probes designed to track potential misuse. On the enforcement side, Anthropic is expanding its response capabilities to include real-time intervention, including blocking traffic it detects as malicious.

Graham was direct with Axios: the models are extremely good at finding vulnerabilities, and he expects them to get much better still. VentureBeat asked Anthropic for the false-positive rate before and after self-verification, the number of disclosed vulnerabilities with patches landed versus still in triage, and the specific safeguards that distinguish attacker use from defender use. The lead researcher on the 500-vulnerability project was unavailable, and the company declined to share specific attacker-detection mechanisms to avoid tipping off threat actors.

“Offense and defense are converging in capability,” Baer said. “The differentiator is oversight. If you can’t audit and bound how the tool is used, you’ve created another risk.”

That speed advantage doesn’t favor defenders by default. It favors whoever adopts it first. Security directors who move early set the terms.

Anthropic isn’t alone. The pattern is repeating.

Security researcher Sean Heelan used OpenAI’s o3 model with no custom tooling and no agentic framework to discover CVE-2025-37899, a previously unknown use-after-free vulnerability in the Linux kernel’s SMB implementation. The model analyzed over 12,000 lines of code and identified a race condition that traditional static analysis tools consistently missed because detecting it requires understanding concurrent thread interactions across connections.

Separately, AI security startup AISLE discovered all 12 zero-day vulnerabilities announced in OpenSSL’s January 2026 security patch, including a rare high-severity finding (CVE-2025-15467, a stack buffer overflow in CMS message parsing that is potentially remotely exploitable without valid key material). AISLE co-founder and chief scientist Stanislav Fort reported that his team’s AI system accounted for 13 of the 14 total OpenSSL CVEs assigned in 2025. OpenSSL is among the most scrutinized cryptographic libraries on the planet. Fuzzers have run against it for years. The AI found what they were not designed to find.

The window is already open

Those 500 vulnerabilities live in open-source projects that enterprise applications depend on. Anthropic is disclosing and patching, but the window between discovery and adoption of those patches is where attackers operate today.

The same model improvements behind Claude Code Security are available to anyone with API access.

If your team is evaluating these capabilities, the limited research preview is the right place to start, with clearly defined data handling rules, audit logging, and success criteria agreed up front.

The Last Mystery of Antarctica’s ‘Blood Falls’ Has Finally Been Solved

There is a corner of Antarctica that looks like something out of a David Cronenberg movie. It’s located in the dry valleys of McMurdo, an immense frozen desert where, periodically, a jet of crimson liquid suddenly gushes from the dazzling white of the Taylor Glacier. They’re called the Blood Falls, and since their discovery in 1911 by geologist Thomas Griffith Taylor, they’ve fueled a century of scientific speculation.

Recently, a series of observations conducted since 2018 has clarified several mysteries, such as the nature of their reddish color and what keeps them liquid at almost –20 degrees Celsius. New research published this week in the journal Antarctic Science adds the final piece to the puzzle, explaining what phenomena drive the falls to gush from underground.

The Science Behind the Blood Falls

At the time of their discovery, Taylor attributed the color to the presence of red microalgae. More than a century later, scientists have determined that the red is due to iron particles trapped in nanospheres along with other elements such as silicon, calcium, aluminum, and sodium. These were likely produced by ancient bacteria trapped underground in the area: Once in contact with air, the iron oxidizes, giving the mixture its characteristic rust color.

As for the presence of liquid water, it is actually a hypersaline brine, formed about 2 million years ago when the waters of the Antarctic Ocean receded from the valleys. The very high salinity of this brine prevents the water from freezing, thus allowing it to gush out periodically.

The New Discovery

With the temperature puzzle solved, the question remained as to what physically drove the fluid to erupt. The answer came from cross-referencing GPS data, thermal sensors, and high-resolution images collected in 2018 during an eruption. The analysis demonstrated that the Blood Falls are the result of pressure variations affecting the brine deposits beneath the glacier.

As Taylor Glacier slides downstream, the overlying ice mass compresses the subglacial channels, building up tremendous pressure. When the strain becomes unbearable, the ice gives way: Pressurized brine seeps into the crevices and is shot out in short bursts. Curiously, this release acts as a hydraulic brake, temporarily slowing the glacier’s march. With this discovery, the mysteries of the Blood Falls should finally have been solved, at least for now. The impact of global warming on this complex system in the coming decades remains unknown.

This story originally appeared on WIRED Italia and has been translated from Italian.

A Blood Pressure Monitor for Smartwatches

Your smartwatch can track a lot of things, but at least for now, it can’t keep an accurate eye on your blood pressure. Last week, researchers from the University of Texas at Austin showed a way your smartwatch someday could. They were able to discern blood pressure by reflecting radio signals off a person’s wrist, and they plan to integrate the electronics that did it into a smartwatch in a couple of years.

Besides the tried-and-true blood pressure cuff, researchers have found several new ways to monitor blood pressure, using pasted-on ultrasound transducers, electrocardiogram sensors, bioimpedance measurements, photoplethysmography, and combinations of these measurements.

“We found that existing methods all face limitations,” Yiming Han, a doctoral candidate in the lab of Yaoyao Jia told engineers at the IEEE International Solid State Circuits Conference (ISSCC) last week in San Francisco. For example, ultrasound sensing requires long-term contact with the skin. And as cool as electronic tattoos seem, they’re not as convenient or comfortable as a smartwatch. Photoplethysmography, which detects the oxygenation state of blood using light, doesn’t need direct contact, and indeed researchers in Tehran and California recently used it and a heavy dose of machine learning to monitor blood pressure. However, these sensors are thought to be sensitive to a person’s skin tone and were blamed for Black people in the United States getting inadequate treatment during the COVID-19 pandemic.

The University of Texas team sought a non-contact solution that was immune to skin-tone bias and could be integrated into a small device.

Continuous Blood Pressure Monitoring

Blood pressure measurements consist of two readings—systole, the peak pressure when the heart contracts and forces blood into arteries, and diastole, the phase in between heart contractions when pressure drops. During systole, blood vessels expand and stiffen and blood velocity increases. The opposite occurs in diastole.

All these changes alter conductivity, dielectric properties, and other tissue properties, so they should show up in reflected near-field radio waves, Jia’s colleague Deji Akinwande reasoned. Near-field waves are radiation impacting a surface that is less than one wavelength from the radiation’s source.

The researchers were able to test this idea using a common laboratory instrument called a vector network analyzer. Among its abilities, the analyzer can sense RF reflection, and the team was able to quickly correlate the radio response to blood pressure measured using standard medical equipment.

What Akinwande and Jia’s team saw was this: During systole, reflected near-field waves were more strongly out of phase with the transmitted radiation, while in diastole the reflections were weaker and closer to being in phase with the transmission.
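
The team's actual thresholds and signal processing aren't described in detail; purely as a toy illustration of that reported signature, one could label complex reflection samples by how far their phase sits from the transmitted carrier. The 90-degree boundary and the sample values below are invented, not from the paper.

```python
import cmath
import math

# Toy labeling of reflection samples using the reported signature: systole
# shows stronger, more out-of-phase reflections; diastole weaker and nearer
# in phase. The threshold is an invented illustration.
def label_sample(reflection, threshold_deg=90.0):
    phase_deg = abs(math.degrees(cmath.phase(reflection)))
    return "systole" if phase_deg > threshold_deg else "diastole"

samples = [0.8 * cmath.exp(1j * math.radians(150)),  # strong, out of phase
           0.2 * cmath.exp(1j * math.radians(20))]   # weak, near in-phase
labels = [label_sample(s) for s in samples]          # ["systole", "diastole"]
```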

You obviously can’t lug around a US $50,000 analyzer just to keep track of your blood pressure, so the team created a wearable system to do the job. It consists of a patch antenna strapped to a person’s wrist. The antenna connects to a device called a circulator—a kind of traffic roundabout for radio signals that steers outgoing signals to the antenna and signals coming in from the antenna to a separate circuit. A custom-designed integrated circuit feeds a 2.4 gigahertz microwave signal into one of the circulator’s on-ramps and receives, amplifies, and digitizes the much weaker reflection coming in from another branch. The whole system consumes just 3.4 milliwatts.

“Our work is the only one to provide no skin contact and no skin-tone bias,” Han said.

The next version of the device will use multiple radio frequencies to increase accuracy, says Jia, “because different people’s tissue conditions are different” and some might respond better to one or another. Like the 2.4 gigahertz used in the prototype, these other frequencies will be of the sort already in common use, such as 5 GHz (a Wi-Fi frequency) and 915 megahertz (a cellular frequency).

Following those experiments, Jia’s team will turn to building the device into a smartwatch form factor and testing it more broadly for possible commercialization.

Railguns: Making Metal Go Fast Using The Lorentz Force

In science fiction, the use of gunpowder-based weapons is generally portrayed as something from a savage past, with technology having long since moved on to more civilized types of destructive weaponry, involving lasers, microwaves, and electromagnetism. Instead of messy detonating powder, energy-weapons are used to near-instantly deposit significant amounts of energy into the target, and railguns enable the delivery of projectiles at many times the speed of sound using nothing but the raw power of electricity and some creative physics.

Of course, the reason that we don’t see sci-fi weapons deployed everywhere has arguably less to do with today’s levels of savagery in geopolitics and more with the fact that physical reality is a very harsh mistress, who strongly frowns upon such flights of fancy.

Similarly, the Lorentz force that underlies railguns is extremely simple and effective, but scaled up to weapons-grade dimensions results in highly destructive forces that demolish the metal rails and other components of the railgun after only a few firings. Will we ever be able to fix these problems, or are railguns and similar sci-fi weapons forever beyond our grasp?

The Lorentz Force

A very simple homopolar motor. Here the neodymium magnet and screw spin whenever the wire conducts current. (Credit: Windell H. Oskay, Wikimedia)

The simplest way to think about a railgun is as a linear motor. At its core it consists of two parallel conductors — the rails — with an armature that slides across them, conducting current between the two rails. This also makes it the equivalent of a homopolar motor, which was the first type of electric motor to be demonstrated.

In the photo on the right you can see a basic example of such a motor, with the neodymium magnet providing the magnetic field and the singular wire the current that interacts with the magnetic field. Using the right-hand rule that was hammered into our heads during high school physics classes we can thus deduce that we get a net force.

With this hand-held demonstration the screw will rotate when current is passed through the wire. For stand-alone homopolar motors with the magnet on the battery’s negative terminal and a conductor loosely placed on the positive terminal while touching the magnet, the Lorentz force will cause the wire to rotate around the battery.

Right-hand rule. (Credit: Jfmelero, Wikimedia)

We can visualize this interaction between the current-carrying wire (I), the magnetic field (B), and the resulting force vector (F) in such a homopolar motor fairly easily, but how does this work with a railgun?

Railgun forces. (Source: Wikimedia)

Rather than a permanent magnet or a complex electromagnet on each rail using many windings, a railgun uses a single current loop. This means that massive currents are pumped through one rail, inducing a sufficiently strong magnetic field. The projectile, playing the role of the armature, sits inside the generated magnetic field B, with the current I coursing through the armature, resulting in a net force F that pushes it along the rails at a velocity proportional to the strength of B.

Crudely put, the effective speed of a projectile launched by a railgun is thus determined by the applied current, so unlike its close cousin, the coilgun, there is no tricky timing requirement in energizing coils in sequence.
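
That current dependence follows from the standard lumped railgun model, in which the accelerating force is F = ½L′I², with L′ the inductance gradient of the rail pair. A back-of-envelope sketch with illustrative values, assuming a constant current for simplicity (real drives are pulsed):

```python
import math

# Back-of-envelope railgun model: F = 0.5 * L' * I^2, with L' the inductance
# gradient of the rail pair. Constant current assumed; values illustrative.
L_prime = 0.5e-6   # inductance gradient, H/m (~0.5 uH/m is typical)
current = 2.0e6    # drive current, A
mass = 3.0         # projectile mass, kg
length = 10.0      # rail length, m

force = 0.5 * L_prime * current**2               # 1.0e6 N of force
accel = force / mass                             # m/s^2
muzzle_velocity = math.sqrt(2 * accel * length)  # v^2 = 2*a*d, ~2.6 km/s
```

Doubling the current quadruples the force, which is why capacitor-bank sizing dominates railgun engineering.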

This also provides some hints as to what major obstacles with railguns are, starting with the immense currents that have to be immediately available for a railgun shot of any significant size. If this is somehow engineered around using massive capacitor banks, then you run into the much more significant issues that have so far prevented railguns from being widely deployed.

Most of this comes down to wear and tear, because going fast comes with certain tradeoffs.

Making Big Stuff Go Fast

Electromagnetic railgun (EMRG) at the Dahlgren testing grounds in 2017. (Credit: US Office of Naval Research)

Theoretically you can just scale everything up: creating railguns with larger rails and larger armatures that can launch larger projectiles with increasingly faster speeds. This has been the impetus behind various railgun projects across the world, with notable examples being the railguns developed and tested by the US and Japan.

Railguns were invented all the way back in 1917 by French inventor André Louis Octave Fauchon-Villeplée, but the issue of massive electricity consumption kept further research at a fairly low level. Even the tantalizing prospect of a weapon system firing projectiles at velocities of more than 2,000 m/s couldn’t get one into deployment during the period when Nazi Germany was working on its own version.

Ultimately it would take until the 1980s for railgun designs to become practical enough to start testing them for potential deployment at some point in the future, seeing a surge of R&D investment for it and other new weapon systems that could provide an edge during the Cold War and beyond.

Yet despite decades of research by the US military, no viable design has so far appeared, and research has wound down over the past years. Although both China and India are testing their own railgun designs, there is no sign at this point that they have escaped the same issues that caused the US to mostly cease research on this topic.

Only Japan’s railgun research seems so far to offer a viable design for deployment, but its focus is purely defensive: countering ballistic and hypersonic missiles in a close-in role. Japan’s Ministry of Defense ATLA agency has also kept the size limited, currently to a 40 mm prototype.

Physical Reality

In a perfect world with zero friction and spherical cows, railguns would be very simple and straightforward, but as we live in messy reality we have to deal with the implications of sending immense amounts of currents through a railgun barrel. A good primer here can be found in a June 1983 report (archived) by O. Fitch and M. F. Rose at the Dahlgren Naval Surface Weapons Center in Virginia.

Mass driver efficiency formula. (From: O. Fitch et al., 1983)

Much of this comes down to efficiency as you scale up a basic railgun design. The two main factors are basic ohmic resistance (E_R) and system inductance (E_S). These two factors limit the kinetic energy (E_K) and set the losses (E_L) of the system, with the losses being in the form of thermal and other energies.

Reducing these losses is one of the primary points of research, and factors like the rail design and alloys as well as the switching of the current pulses play a role in affecting final efficiency, and with it durability of the railgun’s ‘barrel’.
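
In the constant-current idealization those terms take a simple form: the electrical work done along the barrel splits evenly between kinetic energy and energy left stored in the rail inductance (E_K = E_S = ½L′xI²), with ohmic heating E_R = I²Rt on top. A sketch with invented values, not figures from the Fitch and Rose report:

```python
# Energy budget for an idealized constant-current shot. All values are
# illustrative assumptions, not numbers from the 1983 Dahlgren report.
L_prime = 0.5e-6     # inductance gradient of the rails, H/m
I = 2.0e6            # drive current, A
R = 1.0e-3           # effective circuit resistance, ohm
x = 10.0             # barrel length, m
t = 7.7e-3           # time spent on the rails, s

E_K = 0.5 * L_prime * x * I**2   # kinetic energy delivered to the armature, J
E_S = 0.5 * L_prime * x * I**2   # energy left stored in the magnetic field, J
E_R = I**2 * R * t               # ohmic (heating) loss in rails and switch, J
efficiency = E_K / (E_K + E_S + E_R)   # roughly 0.2 with these numbers
```

Even before wear enters the picture, only half of the non-ohmic energy reaches the projectile unless the stored field energy is recovered, which is part of why pulse switching figures so heavily in efficiency research.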

Naturally, that was all the way back in 1983, and a few decades of technical and materials science progress have occurred since then. Or so one might be led to believe, if it weren’t for current research papers striking a rather similar tone, such as a 2021 review by Hong-bin Xie et al. published in Defence Technology.

Solid vs arc contact in a railgun. (From: Hong-bin Xie, et al., 2021)

This review article covers the common issues of rail gouging, grooving, arc ablation, and other problems, as well as the current rail materials in use today and their performance characteristics.

Many of these issues are somewhat related, as the moving armature rarely maintains a perfect contact with the rails. This results in arcing, localized heating, ablation, and grooving due to thermal softening. All of these effects result in a rapidly degrading rail surface, and higher currents result in more rapid degradation and even worse contact with subsequent shots.

Various rail metal alloys have been or are being tested, including Cu-Cr, Cu-Cr-Zr and Cu/Al2O3, replacing the pure copper rails of the past. None of these alloys can resist the pitting and other wear effects from repeated railgun firings, however. This has pivoted research towards various coatings that could limit wear instead, such as molybdenum (Mo) or tungsten (W).

Fields of research involve electroplating, cold spraying, supersonic plasma spraying and laser cladding, using a wide variety of coatings. The authors note however that these rail coatings have only begun to be investigated, with success anything but assured.

Defensive Benefits

USS Iowa (BB-61) fires a full broadside of nine 16″/50 and six 5″/38 guns during a target exercise near Vieques Island, Puerto Rico, 1 July 1984. Note the concussion effects on the water surface and the 16-inch gun barrels in varying degrees of recoil. (Official U.S. Navy photograph by PHAN J. Alan Elliott, from the Department of Defense Still Media Collection)

Quite recently railguns have surged to the forefront in the news cycle courtesy of certain ill-informed fantasies that also involve destroyers which identify as battleships. In these feverish battleship dreams, railguns would act as a kind of super-charged version of the 16″ main guns of the Iowa-class, the last active battleships in history.

Instead of 16″ shells that ponderously arc towards their decidedly doomed target, these railguns would send a projectile at a zippy 2-3 km/s. As tempting as this seems, the big issue, as we have seen, is repeatability. The Iowas originally had a barrel life of a few hundred shots before the liner had to be replaced, though this got bumped up to basically ‘infinite’ shots after some changes to their chemical propellant.

A single Mark 7 16″ naval gun fires twice per minute, multiplied by nine if all three turrets are used. Projectile options included high-explosive, armor-piercing, and even nuclear shells, with a range of 39 km (21 nmi) at a leisurely ~800 m/s. To compete with this, a naval railgun would need to keep up a similar firing rate, feature a similar or at least acceptable barrel life, and offer a longer range for a similar payload effect.

At this point railguns score pretty poorly on all these counts. Although a railgun projectile’s range falls between that of a missile and a Mark 7 naval gun shell, barrel life is still poor, power usage remains very high, and the projectiles available at this point basically just rely on their kinetic energy to cause harm, limiting their functionality.

Taking all of this into account, it would seem that the Japanese approach using railguns as a very responsive, close-in weapon is extremely sensible. By keeping the design as small-caliber as possible, reducing rail current, and not caring about range as long as you can hit that hypersonic anti-ship missile, they seem to be keeping rail erosion to a minimum.

Since the average missile tends to perform rather poorly after a 40 mm hole appears through it, courtesy of briefly sharing the same physical space with a tungsten projectile, this might just be the defensive weapon niche that railguns can fill.

Using AI to improve wastewater management

This Belfast-based company uses machine learning and hyperlocal rainfall forecasting to predict sewer levels, detect blockages and optimise the performance of wastewater networks.

Brian Moloney has spent many years working in the area of environmental engineering.

After obtaining a degree in civil, structural and environmental engineering from Trinity College Dublin, Moloney spent more than 15 years working in drainage and flood prevention, having led major civil engineering projects in Ireland, the UK and Australia.

This civil engineering experience allowed him to see an opportunity for a data-driven approach to tackle pollution and flooding, leading him to co-found our latest Start-up of the Week – StormHarvester.

StormHarvester is a Belfast-based start-up that uses AI to help wastewater utilities better manage their networks and prevent serious flooding and pollution. The start-up achieves this by using AI to monitor rainfall and wastewater networks, providing real-time insights.

“Urbanisation, climate change and population growth are putting huge strain on our water supply systems,” says Moloney. “This is resulting in increased threats of flooding and pollution.

“At StormHarvester, we use machine learning and hyperlocal rainfall forecasting to predict sewer levels, detect blockages and inflow, and optimise the performance of wastewater networks.”

How it works

As Moloney – who is also CEO of the company – tells SiliconRepublic.com, StormHarvester’s initial work focused on understanding the relationship between rainfall and drainage networks.

“Once this was understood, we focused on predicting the future network performance using rainfall datasets,” he says. “After investing time and effort into machine learning, our CTO Stevie Gallagher and I created a quality blockage and anomaly detection product which helped us win our first major competition, winning Wessex Water and beating many established industry analytics providers.”

Today, Moloney says the start-up works with 11 UK wastewater utilities and has onboarded “tens of thousands” of sensors globally.

StormHarvester has released a number of products since its establishment, encompassing a range of areas including inflow and infiltration detection, blockage detection, pump station alerting, rising main alerting and spill verification.

“Our advanced anomaly detection system analyses data from thousands of sensors, turning it into precise, actionable insights that drive smarter decisions,” says Moloney. “Proactive real-time monitoring allows utilities to have visibility over their network, prevent issues before they escalate and move from lagging indicators to live insights.”
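
The core idea of comparing rainfall-driven predictions against live sensor readings can be sketched in a few lines. This is a simplified illustration, not StormHarvester’s actual model; the function name, tolerance, and data are all hypothetical.

```python
# Simplified anomaly detection: flag sensor readings where the observed
# sewer level deviates from the rainfall-based prediction by more than a
# fixed tolerance, which may indicate a blockage or unexpected inflow.

def detect_anomalies(predicted_mm, observed_mm, tolerance_mm=50.0):
    """Return indices of readings that deviate beyond the tolerance."""
    return [
        i for i, (p, o) in enumerate(zip(predicted_mm, observed_mm))
        if abs(o - p) > tolerance_mm
    ]

predicted = [120.0, 150.0, 180.0, 200.0]  # levels expected from rainfall
observed = [125.0, 155.0, 260.0, 210.0]   # index 2 rises far above forecast

print(detect_anomalies(predicted, observed))  # [2]
```

A production system would replace the fixed tolerance with a learned model of each sensor’s normal response to rainfall, but the principle of comparing expected against observed behaviour is the same.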

How it’s going

To date, StormHarvester has hit a number of milestones.

“In the last year alone, we have doubled our headcount, fueling our expansion and growth strategy further to create exciting opportunities globally,” says Moloney.

According to Moloney, the company has deployed more than 270,000 sensors worldwide, and in January 2025, StormHarvester announced plans to double its workforce over three years and expand into new countries after raising £8.4m in Series A funding.

Meanwhile, in December, StormHarvester was named as Ireland’s fastest-growing technology company at the annual Deloitte Technology Fast 50 awards, which ranks Ireland’s 50 fastest-growing tech companies based on revenue growth over a four-year period.

But while the company experienced rapid scaling, Moloney says this introduced a challenge for the team.

“As we grew, we hired quickly, introduced more structure and refined processes while trying to keep culture and communication consistent,” he explains. “Balancing fast growth with maintaining alignment was a challenge.”

Currently, Moloney says the company is planning further expansion. He says the start-up’s successful move into Australia and New Zealand has shown that StormHarvester can “scale sustainably while keeping our culture and quality intact” – adding that the company is now preparing for entry into the US market.

Discord Distances Itself From Persona Age Verification After User Backlash

Discord is attempting to distance itself from the age verification provider Persona following a steady stream of user backlash. From a report: In an emailed statement to The Verge, Discord’s head of product policy, Savannah Badalich, confirms the company “ran a limited test of Persona in the UK where age assurance had previously launched and that test has since concluded.”

After Discord announced plans to implement age verification globally starting next month, users across social media accused Discord of “lying” about how it plans on handling face scans and ID uploads. Much of the criticism was directed toward Discord’s partnership with Persona, an age verification provider also used by Reddit and Roblox.

America's spymasters terrified Tim Cook with Taiwan invasion timeline

Apple CEO Tim Cook lost sleep after the CIA briefed him four years ago that China would move on Taiwan by 2027. With that day approaching, not enough has been done about it.

Tim Cook reportedly said he has slept “with one eye open” after his CIA briefing — image credit: Apple

Apple has been reshoring some manufacturing to the US, in initiatives that have been known for years. But now, according to The New York Times, Apple and others also received a classified CIA briefing that warned how precarious chip manufacturing in Taiwan is, yet have failed to heed it.
Tim Cook from Apple, Jensen Huang of Nvidia, Lisa Su of Advanced Micro Devices, and Qualcomm CEO Cristiano Amon were briefed in July 2023. Following the briefing, Apple’s Tim Cook is reported to have said that he slept “with one eye open.”

Panasonic unveils 2026 TV line-up and big changes to its TV division

Panasonic has announced its full 2026 European TV lineup, headlined by new OLED models and an expanded Mini-LED range.

The new series focuses on brighter panels and larger screen options up to 86 inches. In addition, it offers improved viewing in well-lit rooms thanks to new Glare Free technology.

The range spans OLED, QD Mini-LED, QLED, 4K LED and 2K LED. Smart platforms vary by region and model — including Fire TV built in, Google TV, Roku and TiVo. Notably, 2026 marks Panasonic’s first Roku-powered models in the UK.

At the top of the lineup sits the Z95B OLED, which carries over as Panasonic’s flagship model in 55-, 65- and 77-inch sizes, pairing LG Display’s Primary RGB Tandem panel with Panasonic’s own race-inspired ThermalFlow cooling system for brighter images without compromising on colour accuracy. The Z95B won Trusted Reviews’ best TV award of 2025.

Below it, the fantastic Z90B OLED returns as a more accessible premium option, while further down the OLED line-up are the Z85C (Europe) and Z86C (UK) models, which bring a new 120Hz OLED panel into the fold at this price point. The UK will get Fire TV support, while in other European territories it’ll be Google TV.

There will be more Mini LED models this year, headed by the W97C / W95C QD Mini LED models, which feature over 1,000 local dimming zones, up to 1,500 nits peak brightness and a claimed 105% DCI-P3 coverage, while for gamers there’s VRR support up to 144Hz.

These sets also debut Panasonic’s Glare Free Ultra, which is aimed at reducing reflections in bright living rooms without washing out colours and contrast.

Further down the range, Panasonic is expanding (quite literally) its QLED and LED offerings with screen sizes that range from a compact 32 inches up to 86 inches.

However, the biggest news isn’t the newer models that will feature in Panasonic’s line-up, but that the TV division has had a big shake-up.

At its Panasonic Experience event held in Ottobrunn, Germany, Panasonic announced that it had entered into a strategic partnership with Shenzhen Skyworth Display Technology Co Ltd, not too dissimilar to what’s happened with Sony and TCL. Though if you read between the lines, this is less a new venture between two companies and more a case of Panasonic’s home TV division coming under the jurisdiction of Skyworth from April 1st onwards.

These are interesting times for the TV industry as the sands continue to shift in seismic ways.

Russia Targets Telegram as Rift With Founder Pavel Durov Deepens

Published

on

Russia has opened an investigation into Telegram founder Pavel Durov for “abetting terrorist activities,” in the latest sign that his uneasy relationship with the Kremlin has broken down. From a report: Two Russian newspapers, including the state-run Rossiiskaya Gazeta and Kremlin-friendly tabloid Komsomolskaya Pravda, alleged on Tuesday that the messaging app had become a tool of western and Ukrainian intelligence services.

The articles, credited to materials from Russia’s FSB security service, accused Telegram of enabling attacks in Russia and said that Durov’s “actions … are under criminal investigation.” Russia has restricted Telegram’s functions, accusing it of flouting the law and is seeking to divert users towards Max, a state-run rival messenger. The steps escalate pressure on a platform that remains deeply embedded in Russian public life.

Microsoft is using NPUs to automatically capture Xbox game highlights

Sources recently informed Windows Central that Xbox Insiders who own Asus ROG Xbox Ally handheld gaming PCs can test a feature that uses the system’s embedded NPU to capture notable gaming moments. The functionality works without interrupting gameplay.

Nvidia’s Q4 results could make or break confidence in the AI hardware market

Nvidia has become shorthand for the AI market itself. In the years since generative models reshaped computing, the company’s GPUs have powered everything from large-scale training clusters to real-time inference infrastructure.

That dominance helped Nvidia’s stock surge over 1,500 percent from 2022 into 2025 and made it one of the most valuable tech firms in history.

Yet as its newest earnings report approaches, investors aren’t just asking whether revenue is growing; they’re asking whether the AI boom still has room to run.

Scaling AI isn’t just about silicon anymore

Analysts expect Nvidia to post another blockbuster quarter, with revenue forecasts between roughly $65 billion and $66 billion and adjusted gross margins near 75 percent.

That kind of performance would mark continued strength in demand for high-end AI accelerators, particularly from cloud providers and hyperscalers that underpin much of the industry’s infrastructure.

On the surface, those numbers look almost routine at this point; after all, Nvidia has beaten estimates for revenue and earnings for more than a dozen straight quarters. But markets have shifted, and so has investor psychology.

The question now isn’t just “how much growth?”, but “for how long?” and “toward what?”

One reason for that shift is the growing push by major AI users to develop or adopt alternatives to Nvidia’s hardware.

Meta, Google and other hyperscalers are investing heavily in custom silicon or alternative accelerators designed to cut costs, optimize specific workloads, or gain strategic independence from Nvidia’s ecosystem.

Those moves don’t immediately undercut Nvidia’s sales, but they signal a longer-term competitive environment that didn’t exist a few years ago.

This isn’t entirely new; the chip industry has always been cyclical and competitive. But it matters more now because so much of global AI infrastructure hangs off a single architecture. When customers start hedging that exposure, it naturally ripples through valuations and strategic forecasts.

Investor expectations are part of the story

Another reason this earnings cycle feels different is the backdrop in broader markets. AI names have led the rally in tech stocks, but sentiment has softened.

Over the first weeks of 2026, Nvidia’s share price has barely budged compared with steep gains in previous years, even as other industries waver under economic uncertainty.

Some analysts read this as a sign that markets are increasingly focused on profitability timelines and real-world deployment metrics rather than narrative alone.

Part of that recalibration reflects broader anxiety about what some observers call an “AI bubble,” where valuations in the sector may be disconnected from underlying economic fundamentals.

Whether or not that label is fair, it reflects genuine investor nervousness about sustainability, return on investment, and how soon large companies will convert AI hype into consistent revenue growth.

What Nvidia can and must deliver

For Nvidia, this means earnings won’t be judged simply on topline figures. The market will be listening closely to a few specific signals:

  • Demand trajectory from hyperscalers and cloud providers. Are capex cycles still accelerating, or showing signs of plateauing?
  • Guidance on future quarters. Vague or cautious outlooks could spook markets that have priced high growth into Nvidia’s valuation.
  • Comments on competitive strategy, particularly around partnerships, software ecosystems, and how the company plans to respond to custom silicon trends.
  • Supply chain and geopolitical risks, including memory pricing and export restrictions that affect where Nvidia can sell its most advanced chips.

A strong earnings beat with confident guidance could reassure markets that AI spending isn’t slowing and that Nvidia remains the core engine of that demand. A modest beat or mixed signals, however, might validate some of the more cautious narratives and lead to broader tech sell-offs.

Nvidia’s report matters because it has become the default bellwether for AI infrastructure spending, and by extension, for how investors value growth in technology sectors.

If the company shows that demand and pricing power remain robust, it supports a broader bull case for AI adoption. If not, we may see a re-rating of AI as an investment theme, with implications far beyond one company’s earnings call.

In that sense, this quarter isn’t just about chips or quarterly revenue. It’s about confidence: in AI’s staying power, in enterprise capex cycles, and in the narrative that has driven one of the most remarkable growth stories in recent market history.
