OpenAI upgrades ChatGPT with interactive learning tools as lawsuits and Pentagon backlash mount

The past ten days have been among the most consequential in OpenAI’s history, with developments stacking up across product, politics, personnel, and the courts. Here is what happened — and what it means.

OpenAI on Tuesday launched a set of interactive visual tools inside ChatGPT that let users manipulate mathematical and scientific formulas in real time — a genuinely impressive education feature that landed in the middle of the most turbulent stretch of the company’s corporate life.

The new experience covers more than 70 core math and science concepts, from the Pythagorean theorem to Ohm’s law to compound interest. When a user asks ChatGPT to explain one of these topics, the chatbot now generates a dynamic module with adjustable sliders alongside its written response. Drag a variable, and the equations, graphs, and diagrams update instantly. The feature is available today to all logged-in users worldwide, across every plan, including free.

OpenAI tells VentureBeat that 140 million people already use ChatGPT each week for math and science learning. That is a staggering number. It also means the feature arrives with unusually high stakes: since late February, OpenAI has been sued by the family of a 12-year-old mass shooting victim who alleges the company knew the attacker was planning violence through ChatGPT; lost its head of robotics over a Pentagon deal that triggered a near-300% spike in app uninstalls; watched more than 30 of its own employees file a legal brief supporting rival Anthropic against the U.S. government; and scrapped plans with Oracle to expand a flagship data center in Texas. Its chief competitor’s app, Claude, now sits atop the App Store.

The interactive learning tools are, on their merits, a strong product. They also arrive at a company fighting on every front simultaneously — and burning through an estimated $15 billion in cash this year to do it.

ChatGPT’s new interactive learning module for Ohm’s Law, with adjustable sliders for current and resistance and a real-time circuit visualization. (Credit: OpenAI)

How the new ChatGPT learning tools actually work

The feature is built on a simple pedagogical premise: students understand formulas better when they can see what happens as the inputs change.

Ask ChatGPT “help me understand the Pythagorean theorem,” and the system now responds with a written explanation alongside an interactive panel. On the left, the formula $a^2 + b^2 = c^2$ appears in clean notation with sliders for sides $a$ and $b$. On the right, a geometric visualization — a right triangle with squares drawn on each side — reshapes dynamically as you adjust the values. The computed hypotenuse updates in real time. The same treatment applies across topics: voltage and resistance for Ohm’s law, pressure and temperature for the ideal gas equation, radius and height for cone volume.
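The update loop behind such a module is conceptually simple: every slider movement re-evaluates the formula and redraws the result. A minimal sketch of that logic in Python (illustrative only, not OpenAI's implementation):

```python
import math

def hypotenuse(a: float, b: float) -> float:
    """Recompute c from a^2 + b^2 = c^2, as the module does on each slider move."""
    return math.sqrt(a * a + b * b)

# Simulate dragging the slider for side a while b stays fixed at 4:
for a in (3, 5, 12):
    print(f"a={a}, b=4 -> c={hypotenuse(a, 4):.2f}")
```

The interactive panel does the same computation continuously, pairing the numeric result with a redrawn triangle.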

OpenAI’s initial roster of more than 70 topics targets high school and introductory college material: binomial squares, Charles’ law, circle equations, Coulomb’s law, cylinder volume, degrees of freedom, exponential decay, Hooke’s law, kinetic energy, the lens equation, linear equations, slope-intercept form, surface area of a sphere, trigonometric angle sum identities, and others.

The company cited research suggesting that “visual, interaction-based learning can lead to stronger conceptual understanding than traditional instruction for many students,” and pointed to a recent Gallup survey in which more than half of U.S. adults said they struggle with math. In early testing, OpenAI said, students reported the modules helped them grasp how variables relate to one another, and parents described using them to work through problems alongside their children.

Anjini Grover, a high school mathematics teacher quoted in OpenAI’s announcement, praised “how strongly this feature emphasizes conceptual understanding.” Raquel Gibson, a high school algebra teacher, called it “a step towards empowering students to independently explore abstract concepts.”

The tools build on ChatGPT’s existing education features — a “study mode” for step-by-step problem solving and a quizzes feature for exam prep — and OpenAI said it plans to expand interactive learning to additional subjects. The company also said it intends to publish research through its NextGenAI initiative and OpenAI Learning Lab to study how AI shapes learning outcomes over time.


An interactive Pythagorean theorem module in ChatGPT, where users can drag sliders to adjust the lengths of a right triangle’s sides and watch the geometry update in real time. (Credit: OpenAI)

A lawsuit alleging OpenAI knew a mass shooter was planning an attack

On the day before OpenAI shipped its education tools, the company was hit with the most serious legal challenge in its history.

On Monday, the mother of 12-year-old Maya Gebala filed a civil lawsuit against OpenAI in B.C. Supreme Court, alleging the company had “specific knowledge of the shooter’s long-range planning of a mass casualty event” through ChatGPT interactions and “took no steps to act upon this knowledge.” Gebala was shot three times during a mass shooting in Tumbler Ridge, British Columbia on February 10 that killed eight people and the 18-year-old attacker. She suffered what the lawsuit describes as a catastrophic traumatic brain injury with permanent cognitive and physical disabilities.

The claim paints a damning picture of how the shooter used ChatGPT. It alleges the platform functioned as a “counsellor, pseudo-therapist, trusted confidante, friend, and ally” and was “intentionally designed to foster psychological dependency between the user and ChatGPT.” The shooter was under 18 when they began using the service, the suit states, and despite OpenAI’s requirement that minors obtain parental consent, the company “took no steps to implement age verification or consent procedures.”

OpenAI has separately acknowledged that it suspended the shooter’s account months before the attack but did not alert Canadian law enforcement — a decision that provoked sharp political fallout. B.C. Premier David Eby said after a virtual meeting with Altman that the CEO agreed to apologize to the people of Tumbler Ridge and work with the provincial government on AI regulation recommendations.

None of the claims have been proven in court. OpenAI has not publicly commented on the lawsuit. But the case poses a question that transcends any single legal proceeding: when an AI company’s own internal systems identify a user as dangerous enough to ban, what obligation does it have to tell someone?

The Pentagon deal that split OpenAI from the inside

The Tumbler Ridge lawsuit is unfolding against the backdrop of an internal crisis that has already cost OpenAI key talent and millions of users.

On February 28, CEO Sam Altman announced a deal giving the Pentagon access to OpenAI’s AI models inside secure government computing systems. The agreement came days after Anthropic CEO Dario Amodei publicly refused similar terms, saying his company could not proceed without assurances against autonomous weapons and mass domestic surveillance. The Pentagon responded by designating Anthropic a “supply-chain risk” — a classification normally reserved for foreign adversaries — and Defense Secretary Pete Hegseth barred any military contractor from conducting commercial activity with the company.

The reaction inside OpenAI was immediate. Caitlin Kalinowski, who joined from Meta in 2024 to build out the company’s robotics hardware division, resigned on principle. “AI has an important role in national security,” she wrote publicly. “But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.” Research scientist Aidan McLaughlin wrote on social media that, personally, he did not “think this deal was worth it.” Another employee told CNN that many OpenAI staffers “really respect” Anthropic for walking away.

The reaction outside the company was even more dramatic. ChatGPT uninstalls spiked more than 295% on the day the deal was announced. Anthropic’s Claude surged to No. 1 among free apps on the U.S. Apple App Store and remained there as of this past weekend. Protesters gathered outside OpenAI’s San Francisco headquarters calling for a “QuitGPT” movement.

And in the most extraordinary development, more than 30 OpenAI and Google DeepMind employees — including DeepMind chief scientist Jeff Dean — filed an amicus brief Monday supporting Anthropic’s lawsuit against the Defense Department. The brief argued that the Pentagon’s actions, “if allowed to proceed,” would “undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond.” The employees signed in their personal capacity, but the spectacle of OpenAI’s own researchers rallying to a competitor’s legal defense against the same government their company just partnered with has no real precedent in the industry.

Altman, to his credit, has not pretended the situation is fine. In an internal memo later shared publicly, he admitted the deal “was definitely rushed” and “just looked opportunistic and sloppy.” He revised the contract to include explicit prohibitions against mass domestic surveillance and the use of OpenAI technology on commercially acquired data. He also publicly said that enforcing the supply-chain risk designation against Anthropic “would be very bad for our industry and our country.”

Meanwhile, Anthropic warned in court filings that the Pentagon’s blacklisting could cost it up to $5 billion in lost business — roughly equivalent to its total revenue since commercializing its AI technology in 2023. The company is seeking a temporary court order to continue working with military contractors while the case proceeds.

Why OpenAI’s $15 billion cash burn makes every user count

Strip away the lawsuits and the politics, and OpenAI still has a math problem of its own.

The company is expected to burn through approximately $15 billion in cash this year, up from $9 billion in 2025. It has roughly 910 million weekly users. About 95% of them pay nothing. Subscriptions alone cannot bridge that gap, which is why OpenAI is simultaneously building out an internal advertising infrastructure and leaning on partners like Criteo — and reportedly The Trade Desk — to bring advertisers into ChatGPT.
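The scale of that gap is easy to check against the article's own figures. A back-of-envelope sketch (it deliberately ignores API and enterprise revenue, which the piece does not quantify):

```python
cash_burn = 15_000_000_000   # projected cash burn this year, USD
weekly_users = 910_000_000   # roughly 910 million weekly users
free_share = 0.95            # about 95% pay nothing

paying_users = weekly_users * (1 - free_share)
per_payer = cash_burn / paying_users  # USD per paying user per year to cover the burn

print(f"Paying users: {paying_users / 1e6:.1f}M")
print(f"Needed per paying user: ${per_payer:,.0f}/year")
```

That works out to roughly $330 per paying user per year just to cover the burn; a $20-a-month subscription ($240 a year) falls short of that on its own, which helps explain the push toward advertising.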

The company is hiring aggressively for this effort: a monetization infrastructure engineer, an engineering manager, a product designer for the ads experience, a senior manager for ad revenue accounting, and a trust and safety specialist dedicated to the ads product, all based at headquarters in San Francisco. The compensation bands run as high as $385,000 — the kind of investment a company makes when it plans to own its ad stack, not rent it.

But advertising inside ChatGPT introduces a trust problem that compounds the ones OpenAI is already managing. Users who abandoned the app over the Pentagon deal demonstrated that loyalty to ChatGPT is thinner than its market share suggests. Adding commercial messages to a product already under fire for its military ties and its handling of a mass shooter’s data will require OpenAI to navigate user sentiment with a precision it has not recently demonstrated.

The infrastructure picture is equally unsettled. Oracle and OpenAI recently scrapped plans to expand a flagship AI data center in Abilene, Texas, after negotiations stalled over financing and OpenAI’s evolving needs. Meta and Nvidia moved quickly to explore the site — a reminder that in the current AI arms race, any gap in execution gets filled by a competitor within days.

Why interactive learning is OpenAI’s strongest remaining argument

Beyond the product itself, the education feature carries strategic significance for OpenAI.

Education has always been ChatGPT’s cleanest use case — the application where the technology most obviously augments human capability rather than surveilling it, weaponizing it, or monetizing the attention of people who came looking for help. It is the use case that resonates across demographics: students prepping for the SAT, parents revisiting algebra at the kitchen table, adults circling back to concepts they never quite understood. And it is the use case where ChatGPT still holds a clear lead. Google’s Gemini, Anthropic’s Claude, and xAI’s Grok are all investing in education, but none has shipped anything comparable to real-time interactive formula visualization embedded in a conversational interface.

OpenAI acknowledged that the “research landscape on how AI affects learning is still taking shape,” but pointed to its own early findings on study mode as showing “promising early signals.” The company said it will continue working with educators and researchers through its NextGenAI initiative and OpenAI Learning Lab, and plans to publish findings and expand into additional subjects.

Somewhere tonight, a ninth-grader will open ChatGPT, drag a slider, and watch a hypotenuse lengthen across her screen. The Pythagorean theorem will make sense for the first time. She will not know about the Pentagon deal, or the Tumbler Ridge lawsuit, or the 295% spike in uninstalls, or the $15 billion cash burn underwriting the server that just rendered her triangle. She will only know that it worked. For OpenAI, that may have to be enough — for now.


Record number of women founders raising funds, but deal size is down

The report shows that women founders in Ireland have outshone their European peers in raising funds.

TechIreland, the all-island portal that showcases start-ups and the Irish innovation landscape, has released the Female Founder Funding Review 2026, which tracks investment into women-founded startups throughout 2025. 

The report shows that last year, 82 Irish start-ups led by women raised a total of €131m, the highest number of women-led start-ups funded in any year on record. For comparison, 36 organisations raised between €0.1m and €0.3m in 2025, up from just eight in 2024, while another 11 companies raised a combined €18.7m.

Despite this positive figure, however, the average deal size declined significantly, from €3.9m in 2024 to €2.3m in 2025, a drop the report attributes to the increase in the volume of deals being made.

The median figure also dropped to just €100k last year, compared to €1.5m in 2024, indicating that the divide between the smaller group of large rounds and the large number of very small rounds is widening. The report notes, however, that even in this landscape Irish female founders are outshining their European peers in raising early-stage funding.
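The mechanics behind a median falling while deal count rises are easy to see with hypothetical numbers (illustrative only, not the report's raw data): a crowd of very small rounds drags the median to the bottom of the distribution, while a handful of large rounds holds the average up.

```python
from statistics import mean, median

# Hypothetical distribution of 82 deals (EUR millions): mostly tiny, a few very large.
rounds_eur_m = [0.1] * 45 + [0.5] * 20 + [2.0] * 12 + [5, 10, 15, 20, 31.9]

print(f"Deals:   {len(rounds_eur_m)}")
print(f"Total:   EUR {sum(rounds_eur_m):.1f}m")
print(f"Average: EUR {mean(rounds_eur_m):.2f}m")
print(f"Median:  EUR {median(rounds_eur_m):.2f}m")
```

In this toy distribution the average deal is around fifteen times the median, the same bifurcated shape the report describes.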

“While the Dealroom startup ecosystems portal shows a decline in the number of early-stage rounds for women founded start-ups, the trend in Ireland represents a nearly two-fold increase in the number of rounds raised by women founded start-ups last year. Thanks to the heavy lifting by Enterprise Ireland through their focused support for women entrepreneurs.”

TechIreland’s research suggested that angel networks, for example HBAN and AwakenAngels, as well as early-stage accelerator programmes such as Fierce and NextWave, alongside flagship supports such as Enterprise Ireland’s PSSF and HPSU, play a critical role in building a strong platform for women founders.

The report also highlights a key sectoral influence. Funding into the life sciences and healthcare sectors made up almost 70pc of the total funds raised. This was mirrored in wider Europe where health remains a top sector among female founders. 

The enterprise software sector also performed well, growing from €10.7m raised by 10 start-ups in 2024, to €30.7m raised by 22 companies in 2025. Other sectors experiencing growth included the agri/food space, consumer and e-commerce, while cleantech and fintech continue to decline.  

Funding was also unevenly distributed regionally. As in previous years, companies in Dublin dominated the overall figures: more than 90pc of all funding into start-ups established by women went to Dublin-based companies. The report attributed this in part to ProVerum, a Dublin-based company that accounted for nearly half of all funding raised.

Commenting on the findings of the report, the chair of TechIreland, Brian Caulfield said, “2025 was an interesting year for female founders from a fundraising perspective. On the face of it, the numbers held up pretty well. 

“While it’s encouraging to see so many female founded companies raising capital, it’s a concern that the market has bifurcated, a very small number of companies raising large rounds, and a very large number of companies raising very small rounds, largely led by Enterprise Ireland. The mid-market of seed and Series A raises is being hollowed out.”

Sarah Walker, who oversees startups and entrepreneurship at Enterprise Ireland said, “The headline TechIreland figure, 82 companies raising in 2025, is almost double last year and the highest level of activity since 2017 which is cause for celebration. 

“While the increased number of women led and co-founded companies raising is encouraging, TechIreland reports total funding levels of €131m in 2025, down from €145m in 2024, reflecting a challenging funding environment.”

Lorraine Curham, the founder of Fierce added, “For Ireland, the next challenge is what comes after that first cheque. In more mature ecosystems, founders are supported not just by programmes, but by strong networks, investor relationships and ecosystem layers that help companies move from early traction into follow-on capital and scale. Ireland has the pipeline. What it needs next is the infrastructure layer to scale it.”


Calculator Case To Scratch-Built Pocket E-Reader

E-readers are an awesome creation, letting you display digital information while sipping very little battery. While there are plenty of very impressive models to choose from on the commercial market, it’s also possible to build one yourself — which is exactly what [kaos-69] did in his Mimisbrunnur project, creating a truly unique e-reader from scratch.

While looking through old junk at home, [kaos-69] came across a case that held a calculator and pen at one point in the distant past. The pen was gone and the calculator no longer functioned but the case held promise. He removed the calculator and got some parts on order. For the e-paper display he went with a 5.83-inch unit that just fit inside the spring-loaded case. The Mimisbrunnur is powered by a 2000 mAh LiPo battery, with a micro SD card reader for storing what will be displayed. The brains come from an RP2040 microcontroller on an Adafruit Feather breakout board, which worked out great as it already takes care of battery management and the 24-pin interface for the e-paper display.

There are also eight buttons that live below the display for user interface, and even some LEDs to aid in reading in the dark. The depth of the case allowed all this to be connected with the use of a perfboard and some risers to set the screen forward, allowing the battery to live behind it. Using the Mimisbrunnur is pretty straightforward with the eight buttons sitting below icons on the screen giving you clear guidance on how to turn the page, add a bookmark, or browse the SD card for another file to open.
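The page-turning side of such a reader boils down to wrapping text to the display width and slicing it into screen-sized chunks. A minimal sketch of that idea in plain Python (hypothetical, not [kaos-69]'s actual firmware):

```python
import textwrap

def paginate(text: str, cols: int = 38, rows: int = 12) -> list[list[str]]:
    """Wrap text to the display width, then group the lines into pages."""
    lines = textwrap.wrap(text, width=cols)
    return [lines[i:i + rows] for i in range(0, len(lines), rows)]

book = ("word " * 200).strip()
pages = paginate(book)
print(f"{len(pages)} pages, {len(pages[0])} lines on page 1")
```

A "next page" button then just advances an index into the page list, and a bookmark is nothing more than saving that index to the SD card.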

We’ve seen some impressive DIY e-readers over the years, such as the dual-screen Diptyx and the Open Book. But this project is an excellent reminder that a device doesn’t have to be complex to get the job done.


New BeatBanker Android malware poses as Starlink app to hijack devices


A new Android malware named BeatBanker hijacks devices, tricking users into installing it by posing as a Starlink app on websites masquerading as the official Google Play Store.

The malware combines banking trojan functions with Monero mining, and can steal credentials, as well as tamper with cryptocurrency transactions.

Kaspersky researchers discovered BeatBanker in campaigns targeting users in Brazil. They also found that the most recent version of the malware deploys the commodity Android remote access trojan called BTMOB RAT, instead of the banking module.

BTMOB RAT provides operators with full device control, keylogging, screen recording, camera access, GPS tracking, and credential-capture capabilities.

Persistence via MP3

BeatBanker is distributed as an APK file that uses native libraries to decrypt and load hidden DEX code directly into memory, for evasion.

Before launching, it performs environment checks to ensure it’s not being analyzed. If the checks pass, it displays a fake Play Store update screen to trick victims into granting it permissions to install additional payloads.

The fake update message (Source: Kaspersky)

To avoid triggering any alarms, BeatBanker delays malicious operations for a period after its installation.

According to Kaspersky, the malware has an unusual method to maintain persistence, which consists of continuously playing a nearly inaudible 5-second recording of Chinese speech from an MP3 file named output8.mp3.

“The KeepAliveServiceMediaPlayback component ensures continuous operation by initiating uninterrupted playback via MediaPlayer,” Kaspersky explains in a report today.

“It keeps the service active in the foreground using a notification and loads a small, continuous audio file. This constant activity prevents the system from suspending or terminating the process due to inactivity.”

Stealthy cryptocurrency mining

BeatBanker uses a modified XMRig miner version 6.17.0, compiled for ARM devices, to mine Monero on Android devices. XMRig connects to attacker-controlled mining pools using encrypted TLS connections, and falls back to a proxy if the primary address fails.

Miner deployment process (Source: Kaspersky)

The miner can be dynamically started or stopped based on device conditions, which the operators closely monitor to ensure optimal operation and maintain stealth.

Using Firebase Cloud Messaging (FCM), the malware continuously sends the command-and-control (C2) server information about the device’s battery level and temperature, charging status, usage activity, and whether it has overheated.

By stopping mining when the device is in use and by limiting its physical impact, the malware can remain hidden for a longer period, mining for cryptocurrency when conditions allow it.

While Kaspersky observed all BeatBanker infections in Brazil, the malware could expand to other countries if proven effective, so vigilance and good security practices are recommended.

Android users shouldn’t side-load APKs from outside the official Google Play store unless they trust the publisher/distributor, should review granted permissions for risky ones that aren’t relevant to the app’s functionality, and perform regular Play Protect scans.



Harvard Business Review Study Finds ‘AI Brain Fry’ Is Leaving Workers Mentally Fatigued

Workers who excessively use AI agents and tools at work are at increased risk of mental fatigue, according to a recent Harvard Business Review study. In certain industries, more than 25% of professionals report increased mental strain due to their role in AI oversight — though these professionals also generally experienced less burnout than peers who aren’t using AI.

This phenomenon — which the researchers refer to as “AI brain fry” — is described as a “‘buzzing’ feeling or a mental fog” that caused study participants to develop headaches and difficulty focusing and making decisions. Individuals pointed to being overwhelmed by large amounts of information and to frequent task switching as the reasons for these feelings.


Studied individuals experienced more brain fry when they used AI agents to manage a workload beyond their own cognitive capacity. Even when participants used AI to offload mundane, repetitive tasks, managing the growing number of tools led to increased mental fatigue.

Crucially, the study found that fewer individuals who used these AI agents reported workplace burnout.

The researchers predict that this is because burnout testing assesses emotional and physical distress. In contrast, they report, acute mental fatigue “is caused by marshalling attention, working memory and executive control beyond the limited capacity of these systems.” 

These are the processes that are taxed when study participants use multiple AI tools in their workflow, according to the researchers.

The Harvard study identifies several business costs incurred by workers suffering from AI brain fry. The foremost consequence is that these individuals may end up making lower-quality decisions. “Workers in [the] study who endorsed AI brain fry experience 33% more decision fatigue than those who did not,” the study reports. Workers who report AI brain fry were also more likely to self-report making both minor and major errors at their jobs.

Another recent Harvard Business Review study similarly found that employees who use AI tools “worked at a faster pace, took on a broader scope of tasks and extended work into more hours of the day,” but warned that “workload creep can in turn lead to cognitive fatigue, burnout and weakened decision-making.”


OnePlus and Oppo to Raise Smartphone Prices as Memory Costs Climb

Chinese smartphone-makers OnePlus and Oppo plan to raise prices on some existing models starting next week, according to a 9to5Google report citing GizmoChina and a notice posted on Oppo’s China online store.

In its notice, Oppo said it would adjust pricing after evaluating rising costs for several key components used in its mobile phones. The changes are expected to take effect around March 16 and will affect some of the company’s more affordable smartphones, as well as some OnePlus models. 

Flagship devices — like those in the Find and Reno series — are not expected to be affected for now. The reported adjustments currently appear to be limited to China.

The move highlights growing pressure across the smartphone supply chain as component costs climb. Analysts say prices for memory and storage chips used in phones have been rising in recent months as demand surges across the tech industry. 

Much of the chip demand is coming from the rapid buildout of AI data centers, which rely on large amounts of high-performance memory. 

That pressure isn’t limited to Oppo and OnePlus. Analysts say smartphone brands across the industry are facing rising component costs amid increased demand for memory chips.

As manufacturers shift production toward higher-margin memory used in AI servers, supply for consumer electronics such as smartphones and laptops can tighten. 

If component costs continue to rise, manufacturers may face difficult choices later this year, including raising retail prices or adjusting device specifications to offset higher manufacturing costs.

OnePlus and Oppo didn’t immediately respond to a request for comment.


Anduril snaps up space surveillance firm ExoAnalytic Solutions

The first step to fighting a war in space is knowing what’s happening tens of thousands of miles above the planet. Toward that end, defense tech darling Anduril is buying boutique data firm ExoAnalytic Solutions.

ExoAnalytic operates a network of 400 telescopes around the world, which it uses to track spacecraft in high orbits above the planet. The company’s engineers develop software that converts those observations into situational awareness tools for U.S. national security agencies watching adversary spacecraft and coordinating American assets on orbit.

“This is a company we’ve been working with closely for the last several years on a number of programs, and they are experts in space domain awareness and missile defense,” Anduril VP of engineering Gokul Subramanian told reporters. “We believe the [Department of Defense] deserves the best catalog of everything going on in space.”

The privately-held companies did not disclose the terms of the deal. Anduril is in the process of raising a $4 billion round from investors Thrive Capital and Andreessen Horowitz, Reuters reported last week.

ExoAnalytic will be directly integrated into Anduril, not run as a separate subsidiary, though Subramanian said it would continue to serve existing and future outside customers. Currently, Anduril has 120 employees focused on space defense, a number that will more than double with the addition of 130 ExoAnalytic employees.

The company’s technology could help Anduril win government contracts supporting Golden Dome, the missile defense system that the US Congress has appropriated billions of dollars to build. That system is expected to include thousands of satellites to track and target enemy missiles, and maintaining real-time awareness and coordination among them will be a heavy lift.

Anduril is planning to launch three spacecraft this year as internally-funded R&D projects that will draw on capabilities gained in the acquisition. Subramanian said ExoAnalytic’s experience processing space data would be used in an infrared tracking satellite it plans to launch this year in partnership with Apex Space. The space tracking data will be used to execute two missions in high orbit expected to launch this year in partnerships with Impulse Space and Argo Space, respectively.


There’s another potential angle to the acquisition — the machine vision algorithms ExoAnalytic has developed to spot satellites in orbit are also useful for interceptors trying to track and engage with incoming threats. Anduril received a contract from the Pentagon in late 2025 to begin developing a space-based missile interceptor.

ExoAnalytic was founded in 2008 to adapt missile defense sensor technology to track spacecraft in orbit after U.S. military officials called for new and better ways to understand what was happening in space, CEO Doug Hendrix said in a 2024 interview. The company’s early growth was funded by grants and contracts from the federal government, including $26 million in SBIR grants since 2010.

U.S. Space Force officials have expressed deep concern about Chinese and Russian spacecraft that fly closely alongside American and European satellites, where they could potentially intercept communications or damage the satellite with electronic or other weapons.

“Two years ago, a [U.S. commander in the Pacific] told me that the fleet cannot leave the port without the space layer being secured,” Subramanian said. “We’ve been on a mission for the last several years to figure out how to be a part of that solution.”


Tech

Laptops could soon cost 40% more, and you already know why


A recent analysis by TrendForce casts a dark shadow over the future of the most popular machines in the portable PC market. According to the consulting firm, “mainstream” notebooks may soon cost as much as 40% more. Growing challenges in CPU manufacturing are adding yet another layer of uncertainty to…

Tech

A 1,300-Pound NASA Spacecraft To Re-Enter Earth’s Atmosphere


Van Allen Probe A, a 1,300-pound (600 kg) NASA satellite launched in 2012 to study Earth’s radiation belts, is expected to re-enter Earth’s atmosphere this week. While most of it is expected to burn up during descent, “some components may survive,” reports the BBC. “The space agency said there is a one in 4,200 chance of being harmed by a piece of the probe, which it characterized as ‘low’ risk.” From the report: The spacecraft is projected to re-enter around 19:45 EST (00:45 GMT) on Tuesday, the U.S. Space Force predicted, according to NASA, though there is a 24-hour margin of “uncertainty” in the timing. […] The spacecraft and its twin, Van Allen Probe B, were on a mission to gather unprecedented data on Earth’s two permanent radiation belts. It was not immediately clear where in Earth’s atmosphere the satellite is projected to re-enter. NASA and the U.S. Space Force have said they will monitor the re-entry and update their predictions. […] Van Allen Probe B is not expected to re-enter Earth’s atmosphere before 2030.


Tech

Why 2026 will be the year of governed cybersecurity AI


The global average cost of a data breach fell to USD 4.44 million in 2025, a 9 per cent drop and the first decline in five years, according to IBM’s Cost of a Data Breach Report. On the surface, that looks like progress. Security AI and automation are finally paying dividends, compressing detection timelines and trimming investigative overhead.

But the headline number obscures a more uncomfortable reality. Organisations with extensive automation reported breach costs nearly USD 1.9 million lower than those relying on manual processes. The gap between leaders and laggards is not closing – it is widening. And the very AI tools driving those savings are introducing a new category of risk that regulators, insurers and boards can no longer ignore.

The automation paradox

Security operations centres have embraced AI with the urgency of an industry running out of analysts. Burnout-driven churn rates exceed 25 per cent annually in many SOC teams, among the highest in IT. Replacing a trained analyst typically takes six to twelve months. The maths is brutal: organisations cannot hire their way to resilience.

Automation was supposed to solve this. And in narrow, well-defined workflows – alert triage, log correlation, repetitive enrichment tasks – it has. The Nextgen 2025/2026 Cybersecurity Trends Report estimates that industry telemetry in 2025 reached 308 petabytes across more than four million identities, endpoints and cloud assets, producing nearly 30 million investigative leads. Analysts confirmed only around 93,000 genuine threats from that mountain, a hit rate of just 0.3 per cent. Without automation, the volume alone would be unmanageable.



Yet Gartner’s 2025 Hype Cycle for Security Operations places AI SOC agents at the Peak of Inflated Expectations, warning that claims still outpace sustained, measurable improvement. Initial adoption frequently adds work before it reduces it. False positives and hallucinations remain genuine operational risks. And cost models often limit broad deployment across SOC roles.

The paradox is clear: organisations need AI to cope with the data flood, but ungoverned AI introduces the very blind spots it was meant to eliminate. IBM’s 2025 report found that shadow AI – staff using unsanctioned generative AI tools to process sensitive data – added an average of USD 670,000 to breach costs where present. A staggering 97 per cent of breached organisations that experienced an AI-related security incident lacked proper AI access controls. Meanwhile, 63 per cent of surveyed organisations admitted they have no AI governance policies in place at all.


The implication is stark. Automation without governance does not reduce risk, it redistributes it. And in a regulatory climate that increasingly demands transparency, ungoverned AI in the SOC is not just a technical liability. It is a compliance exposure.

When alert fatigue becomes a breach vector

The human cost is measurable, and it extends well beyond budget lines. Studies cited in the Nextgen report show SOC teams routinely ignore or dismiss up to 30 per cent of incoming alerts – not through negligence, but necessity. When every alert looks the same and context arrives fragmented across disconnected consoles, skilled analysts are forced to triage by instinct rather than evidence.

The consequences vary by sector, but the pattern repeats. In healthcare – still the costliest industry for breaches, at USD 7.42 million per incident and 279 days to contain – alert fatigue is not merely an IT problem. ENISA’s dataset of 215 healthcare incidents between 2021 and 2023 found that 54 per cent involved ransomware, with patient data the primary target in 30 per cent of cases. Hospitals have reported diverted ambulances and delayed surgeries directly tied to stretched staff and clogged detection pipelines.

In manufacturing and energy, where NIS2 enforcement began in 2025, a single day of downtime at a high-throughput plant can cost millions of euros. Adversaries increasingly target industrial control systems by pivoting through poorly segmented IT networks, exploiting exactly the kind of ambiguous, context-dependent alerts that overwhelmed analysts tend to dismiss.


The financial data reinforces the point. Breaches contained in under 200 days averaged USD 3.87 million in 2025, while those stretching beyond that threshold averaged USD 5.01 million. Multi-environment incidents, spanning cloud, SaaS and on-premises infrastructure simultaneously, were costlier still, averaging USD 5.05 million with lifecycles approaching 276 days. The operating environment dictates complexity, and complexity dictates cost.

The lesson from 2025 is that sheer data volume will only increase, but the teams that succeed are those treating correlation and enrichment as architectural necessities rather than optional add-ons.

Europe’s regulatory convergence

Three regulatory frameworks are now converging on a single demand: prove resilience continuously, not just report it after the fact.

The Digital Operational Resilience Act (DORA), which came into force across the EU in January 2025, reframes cybersecurity for financial services around operational resilience during severe IT disruptions. Its reporting requirement is the most disruptive element – institutions must submit incident reports within hours, backed by forensic, audit-grade evidence. Logs must be digitally signed and time-stamped to survive regulator scrutiny months later.
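The evidentiary requirement can be illustrated with a minimal sketch: a hash-chained, HMAC-signed log in which each entry is time-stamped and linked to its predecessor, so tampering anywhere breaks every later link. The key handling and record schema here are illustrative assumptions – DORA does not mandate any particular scheme, and a production system would draw keys from an HSM or KMS and use a format agreed with auditors.

```python
import hashlib
import hmac
import json
import time

# Stand-in signing key; a real deployment would fetch this from an HSM or KMS.
SIGNING_KEY = b"replace-with-managed-key"

def entry_hash(record: dict) -> str:
    """Hash of a record minus its signature, used to chain entries together."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    return hashlib.sha256(json.dumps(unsigned, sort_keys=True).encode()).hexdigest()

def sign_log_entry(entry: dict, prev_hash: str) -> dict:
    """Time-stamp an entry, link it to its predecessor, and HMAC-sign it."""
    record = {"timestamp": time.time(), "prev_hash": prev_hash, "entry": entry}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_chain(records: list) -> bool:
    """Recompute every signature and hash link to confirm the trail is intact."""
    prev_hash = "genesis"
    for record in records:
        if record["prev_hash"] != prev_hash:
            return False
        unsigned = {k: v for k, v in record.items() if k != "signature"}
        payload = json.dumps(unsigned, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, record["signature"]):
            return False
        prev_hash = entry_hash(record)
    return True

# Build a three-entry incident trail; altering any record breaks verification.
chain, prev = [], "genesis"
for event in ("alert raised", "case opened", "regulator notified"):
    rec = sign_log_entry({"event": event}, prev)
    chain.append(rec)
    prev = entry_hash(rec)
```

The point of the chaining is that a regulator can verify months later not just that each entry is authentic, but that none was inserted, removed or reordered after the fact.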


The NIS2 Directive, transposed into national law across Europe in 2024–2025, expanded the regulatory perimeter from seven sectors to eighteen essential and important sectors. In Romania, it was transposed as Law 124/2025, explicitly naming manufacturing as a regulated sector for the first time, forcing production facilities to adopt compliance frameworks on par with hospitals and banks. Under NIS2, boards of directors are directly accountable, with penalties including fines and disqualification from holding directorships in the EU.

And then there is the EU AI Act, whose most substantive obligations take effect on 2 August 2026. High-risk AI systems, a category that encompasses many security automation tools, will need to demonstrate compliance with requirements around risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness and cybersecurity. Providers must implement technical measures against data poisoning, model evasion and adversarial attacks.

For global financial groups, the complexity multiplies. A single breach may require simultaneous reporting under DORA, GDPR and national frameworks, each with different formats and deadlines. For manufacturers newly brought under NIS2’s scope, the challenge is even more fundamental: many lack the tooling infrastructure to produce compliance-grade evidence at all, let alone under time pressure.

Together, these three frameworks create a regulatory environment where cybersecurity AI cannot simply be effective – it must be auditable, explainable and governed. The question organisations face is no longer “how secure are we?” but “can we demonstrate it to regulators within hours?”. For organisations evaluating platforms built for this regulatory environment, a recent comparison of European SIEM vendors provides additional context.


The case for governed autonomy

This regulatory convergence is reshaping what good security architecture looks like. The industry is shifting from rule-based automation, where playbooks execute predetermined steps, toward what might be called governed autonomy: semi-autonomous SOC operations with built-in compliance guardrails.

In a governed autonomy model, AI does not replace human judgement. It narrows the decision space. Correlation happens at ingestion, collapsing dozens of fragmented alerts into a single enriched case with full audit evidence.
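Correlation at ingestion can be sketched roughly as follows; the field names and the entity-based correlation key are illustrative assumptions, not any specific vendor's schema.

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    """One enriched case per affected entity, with its own audit trail."""
    entity: str
    alerts: list = field(default_factory=list)
    audit_trail: list = field(default_factory=list)

def correlate(alerts: list) -> list:
    """Collapse raw alerts into one case per entity, recording every merge
    step so the case doubles as an audit artefact."""
    cases = {}
    for alert in alerts:
        case = cases.setdefault(alert["entity"], Case(entity=alert["entity"]))
        case.alerts.append(alert)
        case.audit_trail.append(f"merged alert {alert['id']} ({alert['rule']})")
    return list(cases.values())

raw = [
    {"id": 1, "entity": "host-7", "rule": "impossible-travel"},
    {"id": 2, "entity": "host-7", "rule": "priv-escalation"},
    {"id": 3, "entity": "db-2", "rule": "exfil-volume"},
]
cases = correlate(raw)  # three raw alerts collapse into two enriched cases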

UEBA scoring ranks anomalous identities and assets by risk, so analysts focus on what matters rather than wading through noise. And every investigation timeline doubles as a compliance artefact, digitally signed, framework-mapped and ready for regulator export.
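A minimal illustration of the UEBA idea, assuming a single activity metric per identity and a simple z-score against each identity's own historical baseline; real systems use many features and far more robust statistics.

```python
from statistics import mean, stdev

def risk_score(history: list, today: float) -> float:
    """Z-score of today's activity against the identity's own baseline."""
    if len(history) < 2:
        return 0.0
    sigma = stdev(history)
    if sigma == 0:
        return 0.0
    return (today - mean(history)) / sigma

def rank_identities(activity: dict) -> list:
    """Order identities by descending anomaly score so analysts see the
    riskiest first instead of wading through the whole queue."""
    return sorted(activity, key=lambda who: risk_score(*activity[who]), reverse=True)

activity = {
    "svc-backup": ([100, 98, 103, 101], 102),  # steady baseline, low score
    "j.doe": ([5, 6, 4, 5], 40),               # sudden spike, high score
}
ranked = rank_identities(activity)  # "j.doe" surfaces first
```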

The architectural principle is lean: every security case is simultaneously a compliance case. Analysts investigate once, and the system produces both operational outputs and regulator-ready reports. This avoids the duplication that plagues organisations running separate SIEM, SOAR and compliance tools, each adding cost, latency and integration effort.


European platforms are increasingly built around this philosophy. Romania-based Nextgen Software, for example, designed its CYBERQUEST platform to unify detection, investigation and compliance reporting within a single workflow, so that every enriched case automatically generates the audit trail DORA and NIS2 demand. Its agentless OT monitoring module addresses a gap that matters for manufacturers and utilities: visibility into industrial control systems without deploying intrusive endpoint agents. Similar convergence efforts are visible across the European vendor landscape, from Nordic SIEM providers building compliance-ready exports to German-led initiatives embedding ISO 27001 and NIS2 mappings directly into detection logic.

From assistants to agents – carefully

The next frontier is the move from AI assistants to AI agents – systems that do not merely suggest next steps but actively execute detection, investigation and response workflows. It is a transition the industry is approaching with a mixture of ambition and caution.

Vlad Gladin, CTO of Nextgen Software, describes this evolution in practical terms: “Our Cyber Minds AI Personas are evolving from advisory assistants into context-aware investigation agents. Rather than simply recommending a response, these agents will be able to correlate telemetry across identity, network and endpoint data in real time, conduct preliminary forensic analysis, and present analysts with an enriched investigation narrative, not a queue of disconnected alerts. The goal is not to remove the analyst from the loop, but to ensure that when they engage, the context is already assembled.”

This mirrors the broader industry trajectory. Gartner recommends treating AI SOC agents as workflow augmentation tools rather than autonomous replacements, with strong emphasis on maintaining human oversight. The concern is legitimate: over-automation introduces risk if agents act on flawed assumptions, and most current use cases remain narrow and task-specific rather than end-to-end.


The governed approach means building trust incrementally. Start with automated enrichment and case assembly. Layer in UEBA-driven prioritisation. Only then extend to semi-autonomous response actions – and always with audit trails that a regulator or insurer can verify after the fact.
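That incremental model can be expressed as a simple policy gate: safe actions run autonomously, disruptive ones queue for a human, and both paths leave an audit entry. The action names and allow-list below are illustrative assumptions.

```python
from enum import Enum

class Action(Enum):
    ENRICH = "enrich"                # safe: add context to a case
    PRIORITISE = "prioritise"        # safe: reorder the analyst queue
    ISOLATE_HOST = "isolate_host"    # disruptive: requires human approval

# Which actions an agent may execute unattended is a policy decision;
# this allow-list is an example, not a recommendation.
AUTONOMOUS_ALLOWED = {Action.ENRICH, Action.PRIORITISE}

def execute(action: Action, case_id: str, audit_log: list) -> str:
    """Run allow-listed actions directly; queue everything else for an
    analyst. Every decision is appended to the audit log either way."""
    if action in AUTONOMOUS_ALLOWED:
        audit_log.append(f"{case_id}: agent executed {action.value}")
        return "executed"
    audit_log.append(f"{case_id}: {action.value} queued for analyst approval")
    return "pending_approval"
```

Extending autonomy then becomes a deliberate, auditable act of moving one action at a time onto the allow-list, rather than flipping a global switch.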

There is a reason this incremental model resonates particularly in Europe. The continent’s regulatory landscape rewards demonstrable control over raw capability. An AI agent that can triage a thousand alerts per hour is impressive; an AI agent that can triage a thousand alerts per hour and produce a DORA-compliant incident timeline for each one is bankable. The commercial logic and the regulatory logic are converging on the same architectural requirements.

What 2026 demands

The organisations best positioned for 2026 are not necessarily those with the most advanced AI, but those that can prove their AI is trustworthy. In a landscape where DORA demands forensic evidence within hours, NIS2 holds boards personally liable, and the EU AI Act requires demonstrable governance of high-risk systems, the real differentiator is not speed of detection but speed of demonstrable trust.

This means compliance cannot remain a bolt-on exercise performed quarterly by a separate team. It must be embedded in the detection-to-resolution workflow, generated automatically as a by-product of incident handling. Platforms that deliver audit-ready evidence as a natural output of operations, rather than requiring analysts to reconstruct it after the fact, will set the new standard.


The cybersecurity industry spent the past decade racing to automate. In 2026, the race shifts to governing that automation, proving to regulators, insurers and boards that the machines defending the network are themselves accountable. The winners will not be the organisations with the most AI. They will be the ones whose AI can show its working.


Tech

Listeners rated a Chinese startup’s AI voices more realistic and trustworthy than those from Microsoft, Google, and Amazon



A new global study suggests people stop trusting AI voices the moment they realize the voice isn’t human, which creates a big problem for companies that use synthetic voices in customer service and other public-facing systems.

