Teaching Machines to Spot Human Errors in Math Assignments

When completing math problems, students often have to show their work. It’s a method teachers use to catch errors in thinking, to make sure students are grasping mathematical concepts correctly.

New AI projects in development aim to automate that process. The idea is to train machines to catch and predict the errors students make when studying math, to better enable teachers to correct student misconceptions in real time.

For the first time, developers can build fascinating algorithms into products that will help teachers without requiring them to understand machine learning, says Sarah Johnson, CEO at Teaching Lab, which provides professional development to teachers.

Some of these efforts trace back to the U.K.-based edtech platform Eedi Labs, which has held a series of coding competitions since 2020 intended to explore ways to use AI to boost math performance. The latest was held earlier this year, and it tried to use AI to capture misconceptions from multiple choice questions and accompanying student explanations. It relied upon Eedi Labs’ data but was run by The Learning Agency, an education consultancy firm in the U.S. A joint project with Vanderbilt University — and using Kaggle, a data science platform — the competition received support from the Gates Foundation and the Walton Family Foundation, and coding teams competed for $55,000 in awards.


The latest competition achieved “impressive” accuracy in predicting student misconceptions in math, according to Eedi Labs.

Researchers and edtech developers hope this kind of breakthrough can help bring useful AI applications into math classrooms — which have lagged behind in AI adoption, even as English instructors have had to rethink their writing assignments to account for student AI use. Some people have argued that, so far, there has been a conceptual problem with “mathbots.”

Perhaps training algorithms to identify common student math misconceptions could lead to the development of sophisticated tools to help teachers target instruction.

But is that enough to improve students’ declining math scores?


Solving the (Math) Problem

So far, the deluge of money pouring into artificial intelligence is unrelenting. Despite fears that the economy is in an “AI bubble”, edtech leaders hope that smart, research-backed uses of the technology will deliver gains for students.

In the early days of generative AI, people thought you could get good results by just hooking up an education platform to a large language model, says Johnson, of Teaching Lab. All these chatbot wrappers popped up, promising that teachers could create the best lesson plans using ChatGPT in their learning management systems.

But that’s not true, she says. You need to focus on applications of the technology that are trained on education-specific data to actually help classroom teachers, she adds.

That’s where Eedi Labs is trying to make a difference.


Currently, Eedi Labs sells an AI tutoring service for math. The model, which the company calls “human in the loop,” has human tutors check messages automatically generated by its platform before they are sent to students, and make edits when necessary.

Plus, through efforts like its recent competition, leaders of the platform think they can train machines to catch and predict the errors students make when studying math, further expediting learning.

But training machine learning algorithms to identify common math misconceptions a student holds isn’t all that easy.

Cutting Edge?

Whether these attempts to use AI to map student misconceptions prove useful depends on what computer scientists call “ground truth,” the quality of the data used to train the algorithms in the first place. That means it depends on the quality of the multiple-choice math questions, and on the quality of the misconceptions those questions are designed to reveal, says Jim Malamut, a postdoctoral researcher at Stanford Graduate School of Education. Malamut is not affiliated with Eedi Labs or with The Learning Agency’s competition.


The approach in the latest competition is not groundbreaking, he argues.

The dataset used in this year’s misconceptions contest had teams sorting through students’ answers to multiple-choice questions along with brief written rationales. For the company, that’s an advancement, since previous versions of the technology relied on multiple-choice answers alone.

Still, Malamut describes the use of multiple choice questions as “curious” because he believes the competition chose to work with a “simplistic format” when the tools they are testing are better-suited to discern patterns in more complex and open-ended answers from students. That is, after all, an advantage of large language models, Malamut says. In education, psychometricians and other researchers relied on multiple choice questions for a long time because they are easier to scale, but with AI that shouldn’t be as much of a barrier, Malamut argues.

Pushed by declining U.S. scores on international assessments, the country has shifted over the last decade-plus toward “Next-Generation Assessments,” which aim to test conceptual skills. It’s part of a larger shift by researchers toward “assessment for learning,” the idea that assessment tools should emphasize information that’s useful for teaching rather than what’s convenient for researchers to measure, according to Malamut.


Yet the competition relies on questions that clearly predate that trend, Malamut says, in a way that might not meet the moment.

For example, some questions asked students to figure out which decimal was the largest, which sheds very little light on conceptual understanding. Instead, current research suggests that it’s better to have students write a decimal number using base 10 blocks or point to missing decimals on a marked number line. Historically, these sorts of questions couldn’t be used in a large-scale assessment because they are too open-ended, Malamut says. But pairing AI with current thinking in education research is precisely where the technology could add the most value, Malamut adds.

But for the company developing these technologies, “holistic solutions” are important.

Eedi Labs blends multiple choice questions, adaptive assessments and open responses for a comprehensive diagnosis, says cofounder Simon Woodhead. This latest competition was the first to incorporate student responses, enabling deeper analysis, he adds.


But there’s a trade-off between the time it takes to give students these assessments and the insights they give teachers, Woodhead says. So the Eedi team thinks that a system that uses multiple choice questions is useful for scanning student comprehension inside a classroom. With just a device at the front of the class, a teacher can home in on misconceptions quickly, Woodhead says. Student explanations and adaptive assessments, in contrast, help with deeper analysis of misconceptions. Blending these gives teachers the most benefit, Woodhead argues. And the success of this latest competition convinced the company to further explore using student responses, Woodhead adds.

Still, some think the questions used in the competition were not fine-tuned enough.

Woodhead notes that the competition relied on broader definitions of what counts as a “misconception” than Eedi Labs usually does. Nonetheless, the company was impressed by the accuracy of the AI predictions in the competition, he says.

Others are less sure that it really captures student misunderstandings.


Education researchers now know far more than they used to about the kinds of questions that get to the core of student thinking and reveal the misconceptions students may have, Malamut says. But many of the questions in the contest’s dataset don’t accomplish this well, he says. Even though the questions included multiple-choice options and short answers, the dataset could have used better-formed questions, Malamut thinks. There are ways to ask questions that bring out student ideas. Rather than asking students to answer a question about fractions, you could ask them to critique someone else’s reasoning. For example: “Jim added these fractions in this way, showing his work like this. Do you agree with him? Why or why not? Where did he make a mistake?”

Whether or not these tools have found their final form, there is growing interest in these attempts to use AI, and that interest comes with money for exploring new tools.

From Computer Back to Human

The Trump administration is betting big on AI as a strategy for education, making federal dollars available. Some education researchers are enthusiastic, too, boosted by $26 million in funding from Digital Promise intended to help narrow the distance between best practices in education and AI.

These approaches are early, and the tools still need to be built and tested. Nevertheless, some argue it’s already paying off.


A randomized controlled trial conducted by Eedi Labs and Google DeepMind found that math tutoring that incorporated Eedi’s AI platform boosted student learning in 11- and 12-year-olds in the U.K. The study focused on the company’s “human in the loop” approach — using human-supervised AI tutoring — currently used in some classrooms. Within the U.S., the platform is used by 4,955 students across 39 K-12 schools, colleges and tutoring networks. Eedi Labs says it is conducting another randomized controlled trial in 2026 with Imagine Learning in the U.S.

Others have embraced a similar approach. Teaching Lab, for example, is actively working on AI for classroom use, with Johnson telling EdSurge that the organization is testing a model also based on data borrowed from Eedi and a company called Anet. That project is currently being tested with students, according to Johnson.

Several of these efforts require sharing tech insights and data. That runs counter to many companies’ typical practices for protecting intellectual property, according to the Eedi Labs CEO. But he thinks the practice will pay off. “We are very keen to be at the cutting edge, that means engaging with researchers, and we see sharing some data as a really great way to do this,” he wrote in an email.

Still, everyone seems to agree that once the algorithms are trained, turning them into success in classrooms is another challenge.


What might that look like?

The data infrastructure can be built into products that let teachers modify curriculum based on the context of the classroom, Johnson says. If you can connect the infrastructure to student data and allow it to make inferences, it could provide teachers with useful advice, she adds.

Meg Benner, managing director of The Learning Agency, the organization that ran the misconceptions contest, suggests that this could be used to feed teachers information about which misconceptions their students hold, or even to trigger a chatbot-style lesson that helps students overcome those misconceptions.

It’s an interesting research project, says Johnson, of Teaching Lab. But once this model is fully built, it will still need to be tested to see if refined diagnosis actually leads to better interventions in front of teachers and students, she adds.


Some are skeptical that the products companies build from these tools will enhance learning all that much. After all, having a chatbot-style tutoring system conclude that students are using additive reasoning when multiplicative reasoning is required may not transform math instruction. Indeed, some research has shown that students don’t respond well to chatbots. For instance, the well-known “5 percent problem” revealed that only the top students usually see results from most digital math programs. Instead, teachers have to handle misconceptions as they come up, some argue. That means students having an experience or conversation that exposes the limits of old ideas and the power of clear thinking. The challenge, then, is figuring out how to get the insights from machine analysis back out to the students.

But others think that the moment is exciting, even if there’s some hype.

“I’m cautiously optimistic,” says Malamut, the postdoctoral researcher at Stanford. Formative assessments and diagnostic tools exist now, but they are not automated, he says. True, the assessment data that’s easy to collect isn’t always the most helpful to teachers. But if used correctly, AI tools could help close that gap.

Embracing unconventional talent with Tenable’s Thomas Parsons

The latest episode of The Leaders’ Room podcast season four features Thomas Parsons, head of Tenable in Ireland and VP of product management. This series is created in partnership with IDA Ireland.

Once again in season four of The Leaders’ Room podcast, we get to know the leaders of some of the most influential multinationals in tech, life sciences and innovation, and gain insights into their leadership styles and the high-tech trends they see coming down the line.

In this latest episode, we speak to Thomas Parsons, who heads up threat exposure cybersecurity company Tenable in Ireland and serves as VP of product management for the Maryland-based company. We talk about a career that parallels the evolution of the cyber threat landscape since the ’90s, and a style of leadership that spots the best cybersecurity talent sometimes in the least expected places.


It’s a fascinating listen charting Parsons’ distinguished career in cybersecurity, from Symantec, via Intel, to Tenable, which he joined shortly after the exposure management company arrived in Ireland in 2016. Parsons was Tenable’s first R&D hire in 2017, and today some 50pc of the 140-strong Irish team work on the R&D side in Dublin, he tells us.

Fittingly for a series on leadership, Parsons worked at Symantec at a time when a renowned leader took the reins: John Thompson. A longtime IBM veteran, Thompson radically changed how Symantec thought about security for organisations, at a time when large-scale cybercrime and nation-state attacks were not yet on the agenda. Under Obama, Thompson was considered for the role of Secretary of Commerce, and Nancy Pelosi appointed him to the Financial Crisis Inquiry Commission in 2009.

Parsons also got to experience first-hand the process of Tenable going public, including being on the ground in New York on the day it did so. His thoughts on how good leaders in his sector need to keep an eye out for talent in places you might least expect are well worth a listen, as are his insights on what might be coming down the line.

We’re grateful to all our interviewees again this season, for taking the time out of busy schedules to come into the studio and share their insights and their intelligence with us. And a big thanks as ever to our partners IDA Ireland who make this series possible.


The Leaders’ Room podcast is released fortnightly and can be found by searching for ‘The Leaders’ Room’ wherever you get your podcasts. For those who prefer their audio with visuals, filmed versions of the podcast interviews are all available here on SiliconRepublic.com.

Check out The Leaders’ Room podcast for in-depth insights from some of Ireland’s top leaders. Listen now on Spotify, on Apple or wherever you get your podcasts.

Free Bi-Directional EV Chargers Tested to Improve Massachusetts Power Grid

Somewhere on America’s eastern coast, there’s an economic development agency in Massachusetts promoting green energy solutions. And on Monday the Massachusetts Clean Energy Center (or MassCEC) announced “a first-of-its-kind” program to see what happens when it provides free electric vehicle chargers to selected residents, school districts, and municipal projects.

The catch? The EV chargers are bi-directional, able “to both draw power from and return power to the grid…” The program hopes to “accelerate the adoption of V2X technologies, which, at scale, can lower energy bills by reducing energy demand during expensive peak periods and limiting the need for new grid infrastructure.”

This functionality enables EVs, including electric buses and trucks, to provide backup power during outages and alleviate pressure on the grid during peak energy demand. These bi-directional chargers will enable EVs to act as mobile energy storage assets, with the program expected to deliver over one megawatt of power back to the grid during a demand response event — enough to offset the electricity use of 300 average American homes for an hour. “Virtual Power Plants are the future of our electrical grid, and I couldn’t be more excited to see this program take off,” said Energy and Environmental Affairs Secretary Rebecca Tepper. “We’re putting the power of innovation directly in the hands of Massachusetts residents. Bi-directional charging unlocks new ways to protect communities from outages and lower costs for families and public fleets….”

Additionally, the program will help participants enroll in existing utility programs that offer compensation to EV owners who supply power back to the grid during peak times, helping participants further lower their electricity costs. By leveraging distributed energy resources and reducing grid strain, this program positions Massachusetts as a national leader in clean energy innovation.

Amazon’s big bet, a ‘MySpace for bots,’ and a conversation with AI veteran Oren Etzioni

This week on the GeekWire Podcast: Andy Jassy tells Wall Street that Amazon is planning $200 billion in capital expenses this year, mostly to build out AI infrastructure, and investors give it a thumbs down.

Microsoft’s financial results beat expectations but the company loses $357 billion in market value in a single day after investors learn the extent of its dependence on OpenAI.

Meanwhile, OpenAI leases 10 floors of office space in Bellevue, lawmakers in Olympia propose new taxes impacting startup exits and high-income earners, and the bots get their own social network. 

In our featured conversation, recorded at a dinner hosted by Accenture in Bellevue, GeekWire co-founder Todd Bishop sits down with computer scientist and entrepreneur Oren Etzioni to talk about AI agents, the startup landscape, the fight against deepfakes, and what good AI leadership looks like.

Etzioni is co-founder of AI agent startup Vercept, founder of the AI2 Incubator, professor emeritus at the UW Allen School, venture partner at Madrona, and the former founding CEO of the Allen Institute for AI.


“Moltbook is to agent networks as Myspace was to social networks,” he posted on LinkedIn. “It’s a sign of what’s to come, and will soon be supplanted by more secure and more pervasive alternatives.”

Upcoming GeekWire Podcast Live Event: Join us from 4 p.m. to 6 p.m. Thursday, Feb 12 at Fremont Brewing for a live recording of the GeekWire Podcast with Todd Bishop and John Cook. Free for Fremont Chamber members, $15 otherwise. Register here.

Agents of Transformation: Check out the series and join us for the conference, presented by Accenture, March 24 in Seattle.

With GeekWire co-founder Todd Bishop. Edited by Curt Milton. Music by Daniel L.K. Caldwell.

AutoFlight Matrix Touted as World’s First 5-Ton Class Heavy Lift eVTOL Drone

Autoflight’s Matrix, a game changer in the eVTOL category of the aviation world, is the first of its kind to reach the 5-ton class, with a maximum takeoff weight of 5,700kg (about 12,566lbs). While the aircraft’s 20-meter wingspan and 17.1-meter length make it a sizable craft, it stands only 3.3 meters tall. Even so, the cabin is surprisingly roomy at 5.25 meters long, 1.8 meters wide and 1.85 meters high, giving you a comfortable 13.9 cubic meters in which to spread out.



To get off the ground, Autoflight engineers designed a novel solution: a compound-wing lift-and-cruise system in a triplane style, combined with a six-arm framework that can draw on up to 20 engines, just in case. As a result, the transition from vertical launch to forward flight goes relatively smoothly. Most importantly, the aircraft can maintain flight even if one or more of its engines fail.


There are several Matrix variations to pick from. The totally electric vehicle has a range of 250 kilometers (155 miles) for short journeys. The hybrid-electric variant, as expected, has a longer range of 1,500 kilometers (932 miles). Let’s just say that getting from A to B takes precedence over speed.

As one might expect, it comes with a host of amenities: 10 comfortable business class seats or 6 VIP seats, all with climate control, beautiful ambient lighting, wide windows to enjoy the view from above, and, yeah, a proper bathroom because, you know, priorities. And if you need to carry some luggage, this aircraft can carry up to 1,500kg (3,300lb) via the large forward opening door, which is also beneficial for the hybrid system.


Autoflight tested the Matrix prototype at its low-altitude test site in Kunshan, China, in February 2026. In the demonstration flight, the eVTOL lifted off vertically, transitioned from vertical to cruising mode, and then descended vertically, making the Matrix the first 5-ton eVTOL to achieve this feat. Furthermore, it operated alongside the company’s smaller, 2-ton carry-all cargo eVTOL design.

So, what does Autoflight’s Matrix offer? Well, the company believes it’s great for regional transit, large freight, and emergency response operations. As Tian Yu, the company’s founder and CEO, says, it will be a game changer, propelling eVTOLs beyond the normal short excursions and light cargoes. He believes that by improving the capabilities of eVTOLs, they will be able to reduce costs per seat or ton, which might be a significant advantage.

Hosting the Super Bowl? This 77″ OLED TV deal is the upgrade people will notice

If you’re going to upgrade your TV for the Super Bowl, this is the kind of deal that actually changes the experience, not just the number on the spec sheet. A 77-inch OLED is the “everyone on the couch can see everything clearly” size, and OLED is the tech that makes the biggest difference on broadcast-style content: strong contrast, clean highlights, and better-looking motion in fast action.

Right now, the Samsung 77-inch S90F Series OLED (2025) is $1,999.99, which is $1,500 off the $3,499.99 compared value. The key detail is the deadline: the deal ends February 9, 2026, so this is very much a “plan your setup now” situation.

What you’re getting

This is a 77-inch 4K OLED with Samsung’s Tizen smart platform and Samsung Vision AI branding around picture processing and smart features. The practical benefit is simple: OLED’s pixel-level control delivers deep blacks and strong contrast, which helps games look more dimensional, especially in mixed lighting.

For the Super Bowl specifically, a screen this size is great for the details that matter: jersey textures, sideline action, the ball in motion, and those quick camera cuts that can look smeary on older TVs. With a modern OLED panel, the picture tends to look cleaner and more premium without you needing to crank settings to extremes.


Why it’s worth it

The real story here is value per inch for a premium display. At $1,999.99, you’re getting into “big statement TV” territory while still landing in a price band that’s far more approachable than most 77-inch OLED pricing historically.

It also helps that the timing lines up perfectly with a common buying moment. If you host, even casually, a TV like this does a lot of the heavy lifting. You do not need fancy décor or a full surround system to make the room feel upgraded. A 77-inch OLED becomes the focal point instantly.

The bottom line

At $1,999.99, the Samsung 77-inch S90F OLED is a standout deal for anyone who wants a huge, premium screen ahead of the Super Bowl. The size is legitimately immersive, OLED is a visible upgrade, and saving $1,500 is the kind of discount that justifies moving now instead of “someday.” Just remember the deadline: this deal ends February 9, 2026.

How to factory reset AirTag 2

Whether you’re handing off an AirTag or trying to resolve pairing issues, knowing how to properly reset Apple’s item tracker ensures you can set it up on a new iPhone easily.

How to factory reset a second generation AirTag

Every AirTag can be associated with only a single Apple Account. If you want to gift your AirTag to another person, you’ll need to reset it. While this does take a little effort, the whole process can be done in about a minute.
Before you get started, we highly recommend that all small children and pets are out of the area while you reset an AirTag. AirTags, for all their usefulness, are choking hazards and can cause internal damage if they pass through the digestive tract.

State actor targets 155 countries in ‘Shadow Campaigns’ espionage op


A state-sponsored threat group has compromised dozens of networks of government and critical infrastructure entities in 37 countries in global-scale operations dubbed ‘Shadow Campaigns’.

Between November and December last year, the actor also engaged in reconnaissance activity targeting government entities connected to 155 countries.

According to Palo Alto Networks’ Unit 42 division, the group has been active since at least January 2024, and there is high confidence that it operates from Asia. Until definitive attribution is possible, the researchers track the actor as TGR-STA-1030/UNC6619.


‘Shadow Campaigns’ activity focuses primarily on government ministries, law enforcement, border control, finance, trade, energy, mining, immigration, and diplomatic agencies.

Unit 42 researchers confirmed that the attacks successfully compromised at least 70 government and critical infrastructure organizations across 37 countries.


This includes organizations engaged in trade policy, geopolitical issues, and elections in the Americas; ministries and parliaments across multiple European states; the Treasury Department in Australia; and government and critical infrastructure in Taiwan.

Targeted countries (top) and confirmed compromises (bottom). Source: Unit 42

The list of countries with targeted or compromised organizations is extensive and concentrated in certain regions, with timing that appears to have been driven by specific events.

The researchers say that during the U.S. government shutdown in October 2025, the threat actor showed increased interest in scanning entities across North, Central and South America (Brazil, Canada, Dominican Republic, Guatemala, Honduras, Jamaica, Mexico, Panama, and Trinidad and Tobago).

Significant reconnaissance activity was discovered against “at least 200 IP addresses hosting Government of Honduras infrastructure” just 30 days before the national election, as both candidates indicated willingness to restore diplomatic ties with Taiwan.

Unit 42 assesses that the threat group compromised the following entities:

  • Brazil’s Ministry of Mines and Energy
  • the network of a Bolivian entity associated with mining
  • two of Mexico’s ministries
  • government infrastructure in Panama
  • an IP address that geolocates to a Venezolana de Industria Tecnológica facility
  • government entities in Cyprus, Czechia, Germany, Greece, Italy, Poland, Portugal, and Serbia
  • an Indonesian airline
  • multiple Malaysian government departments and ministries
  • a Mongolian law enforcement entity
  • a major supplier in Taiwan’s power equipment industry
  • a Thai government department (likely for economic and international trade information)
  • critical infrastructure entities in the Democratic Republic of the Congo, Djibouti, Ethiopia, Namibia, Niger, Nigeria, and Zambia

Unit 42 also believes that TGR-STA-1030/UNC6619 tried to connect over SSH to infrastructure associated with Australia’s Treasury Department, Afghanistan’s Ministry of Finance, and Nepal’s Office of the Prime Minister and Council of Ministers.

Apart from these compromises, the researchers found evidence indicating reconnaissance activity and breach attempts targeting organizations in other countries.

They say that the actor scanned infrastructure connected to the Czech government (Army, Police, Parliament, Ministries of Interior, Finance, Foreign Affairs, and the president’s website).

The threat group also tried to connect to European Union infrastructure by targeting more than 600 IP addresses hosting *.europa.eu domains. In July 2025, the group focused on Germany and initiated connections to more than 490 IP addresses that hosted government systems.

Shadow Campaigns attack chain

Early operations relied on highly tailored phishing emails sent to government officials, with lures commonly referencing internal ministry reorganization efforts.


The emails embedded links to malicious archives with localized naming hosted on the Mega.nz storage service. The compressed files contained a malware loader called Diaoyu and a zero-byte PNG file named pic1.png.

Sample of the phishing email used in Shadow Campaigns operations. Source: Unit 42

Unit 42 researchers found that the Diaoyu loader would fetch Cobalt Strike payloads and the VShell framework for command-and-control (C2) only under certain conditions that amount to analysis-evasion checks.

“Beyond the hardware requirement of a horizontal screen resolution greater than or equal to 1440, the sample performs an environmental dependency check for a specific file (pic1.png) in its execution directory,” the researchers say.

They explain that the zero-byte image acts as a file-based integrity check. In its absence, the malware terminates before inspecting the compromised host.

To evade detection, the loader looks for running processes from the following security products: Kaspersky, Avira, Bitdefender, SentinelOne, and Norton (Symantec).


Apart from phishing, TGR-STA-1030/UNC6619 also exploited at least 15 known vulnerabilities to achieve initial access. Unit 42 found that the threat actor leveraged security issues in SAP Solution Manager, Microsoft Exchange Server, D-Link, and Microsoft Windows.

New Linux rootkit

TGR-STA-1030/UNC6619’s toolkit used for Shadow Campaigns activity is extensive and includes webshells such as Behinder, Godzilla, and Neo-reGeorg, as well as network tunneling tools such as GO Simple Tunnel (GOST), Fast Reverse Proxy Server (FRPS), and IOX.

However, researchers also discovered a custom Linux kernel eBPF rootkit called ‘ShadowGuard’ that they believe to be unique to the TGR-STA-1030/UNC6619 threat actor.

“eBPF backdoors are notoriously difficult to detect because they operate entirely within the highly trusted kernel space,” the researchers explain.


“This allows them to manipulate core system functions and audit logs before security tools or system monitoring applications can see the true data.”

ShadowGuard conceals malicious process information at the kernel level, hiding up to 32 PIDs from standard Linux monitoring tools through syscall interception. It can also hide files and directories named swsecret from manual inspection.

Additionally, the malware features a mechanism that lets its operator define processes that should remain visible.

The infrastructure used in Shadow Campaigns relies on victim-facing servers with legitimate VPS providers in the U.S., Singapore, and the UK, as well as relay servers for traffic obfuscation, and residential proxies or Tor for proxying.


The researchers noticed the use of C2 domains that would appear familiar to the target, such as the use of .gouv top-level extension for French-speaking countries or the dog3rj[.]tech domain in attacks in the European space.

“It’s possible that the domain name could be a reference to ‘DOGE Jr,’ which has several meanings in a Western context, such as the U.S. Department of Government Efficiency or the name of a cryptocurrency,” the researchers explain.

According to Unit 42, TGR-STA-1030/UNC6619 represents an operationally mature espionage actor who prioritizes strategic, economic, and political intelligence and has already impacted dozens of governments worldwide.

Unit 42’s report includes indicators of compromise (IoCs) at the bottom to help defenders detect and block these attacks.

Facial Recognition Tech Used To Hunt Migrants Was Deployed Without Required Privacy Paperwork

from the shoot-first,-ask-questions-never dept

In the grand scheme of things — the wanton cruelty, the routine violations of rights, the actual fucking murders — this may only seem like a blip on the mass deportation continuum. But this report from Dell Cameron for Wired is still important. It not only explains why federal officers are approaching people with cellphones drawn nearly as often as they’re approaching them with guns drawn, but also shows the administration is yet again pretending it’s a law unto itself.

On Wednesday, the Department of Homeland Security published new details about Mobile Fortify, the face recognition app that federal immigration agents use to identify people in the field, undocumented immigrants and US citizens alike. The details, including the company behind the app, were published as part of DHS’s 2025 AI Use Case Inventory, which federal agencies are required to release periodically.

The inventory includes two entries for Mobile Fortify—one for Customs and Border Protection (CBP), another for Immigration and Customs Enforcement (ICE)—and says the app is in the “deployment” stage for both. CBP says that Mobile Fortify became “operational” at the beginning of May last year, while ICE got access to it on May 20, 2025. That date is about a month before 404 Media first reported on the app’s existence.

A lot was going on last May, in terms of anti-migrant efforts and the casual refusal to recognize long-standing constitutional rights. That was the same month immigration officers were told they could enter people’s homes while only carrying self-issued “administrative warrants,” which definitely aren’t the same thing as the judicial warrants the government actually needs to enter areas provided the utmost in Fourth Amendment protection.

The app federal officers are using is made by NEC, a tech company that’s been around since long before ICE and CBP became the mobile atrocities they are. Prior to this revelation, NEC had only been associated with developing biometric software with an eye on crafting something that could be swiftly deployed and just as quickly scaled to meet the government’s needs. This particular app had never been made public before.


ICE claims it’s not a direct customer. It’s only a beneficiary of the CBP’s existing contract with NEC. That’s a meaningless distinction when multiple federal agencies have been co-opted into the administration’s bigoted push to rid the nation of brown people.

As is always the case (and this precedes Trump 2.0), CBP and ICE are rolling out tech far ahead of the privacy impact paperwork that’s supposed to be filed before anything goes live.

While CBP says there are “sufficient monitoring protocols” in place for the app, ICE says that the development of monitoring protocols is in progress, and that it will identify potential impacts during an AI impact assessment. According to guidance from the Office of Management and Budget, which was issued before the inventory says the app was deployed for either CBP or ICE, agencies are supposed to complete an AI impact assessment before deploying any high-impact use case. Both CBP and ICE say the app is “high-impact” and “deployed.”

This is standard operating procedure for the federal government. The FBI and DEA were deploying surveillance tech well ahead of Privacy Impact Assessments (PIAs) as far back as [oh wow] 2014, while the nation was still being run by someone who generally appeared to be a competent statesman. That nothing has changed since makes it clear this problem is endemic.

But things are a bit worse now that Trump is running an administration stocked with fully-cooked MAGA acolytes. In the past, our rights might have received a bit of lip service and the occasional congressional hearing about the lack of required Privacy Impact Assessments.


None of that will be happening now. No one in the DHS is even going to bother to apply pressure to those charged with crafting these assessments. And no one will threaten (much less terminate) the tech deployment until these assessments have been completed. I would fully expect this second Trump term to come and go without the delivery of legally-required paperwork, especially since oversight of these agencies will be completely nonexistent as long as the GOP holds a congressional majority.

We lose. The freshly stocked swamp wins. And while it’s normal to expect the federal government to bristle at the suggestion of oversight, it’s entirely abnormal to allow an administration that embraces white Christian nationalism to act as though the only holy text any Trump appointee subscribes to was handed down by Aleister Crowley: Do what thou wilt. That is the whole of the law.

Filed Under: border patrol, cbp, dhs, facial recognition tech, ice, mass deportation, surveillance, trump administration

Companies: mobile fortify, nec

3D Modeling Made Accessible for Blind Programmers

Most 3D design software requires visual dragging and rotating—posing a challenge for blind and low-vision users. As a result, a range of hardware design, robotics, coding, and engineering work is inaccessible to interested programmers. A visually-impaired programmer might write great code. But because of the lack of accessible modeling software, the coder can’t model, design, and verify physical and virtual components of their system.

However, new 3D modeling tools are beginning to change this equation. A new prototype program called A11yShape aims to close the gap. There are already code-based tools that let users describe 3D models in text, such as the popular OpenSCAD software. Other recent large-language-model tools generate 3D code from natural-language prompts. But even with these, blind and low-vision programmers still depend on sighted feedback to bridge the gap between their code and its visual output.

Blind and low-vision programmers previously had to rely on a sighted person to visually check every update to a model and describe what changed. But with A11yShape, blind and low-vision programmers can independently create, inspect, and refine 3D models without relying on sighted peers.

A11yShape does this by generating accessible model descriptions, organizing the model into a semantic hierarchy, and ensuring every step works with screen readers.


The project began when Liang He, assistant professor of computer science at the University of Texas at Dallas, spoke with his low-vision classmate who was studying 3D modeling. He saw an opportunity to turn his classmate’s coding strategies, learned in a 3D modeling for blind programmers course at the University of Washington, into a streamlined tool.

“I want to design something useful and practical for the group,” he says. “Not just something I created from my imagination and applied to the group.”

Re-imagining Assistive 3D Design With OpenSCAD

A11yShape assumes the user is running OpenSCAD, the script-based 3D modeling editor, and adds features that connect each component of the modeling workflow across three panels in its interface.

OpenSCAD allows users to create models entirely through typing, eliminating the need for clicking and dragging. Other common graphics-based user interfaces are difficult for blind programmers to navigate.


A11yShape introduces an AI Assistance Panel, where users can submit real-time queries to ChatGPT-4o to validate design decisions and debug existing OpenSCAD scripts.
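
To make that workflow concrete, here is a minimal, hypothetical sketch in Python of the kind of round trip such a panel might perform: it sends a small OpenSCAD script, embedded as a string, to GPT-4o and asks for a screen-reader-friendly description. The prompt wording, helper name and model choice are illustrative assumptions, not A11yShape’s published implementation; the sketch assumes the OpenAI Python client and an OPENAI_API_KEY environment variable.

```python
# Illustrative sketch only -- not A11yShape's code. Assumes the `openai`
# Python package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

# A tiny OpenSCAD model written entirely as text: a plate with a peg on top.
SCAD_SOURCE = """
cube([30, 30, 4]);                      // base plate, 30 x 30 x 4 mm
translate([15, 15, 4])
    cylinder(h = 20, r = 5, $fn = 64);  // peg centered on the plate
"""

def describe_model(scad_code: str) -> str:
    """Ask GPT-4o for a concise, screen-reader-friendly description."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "You describe OpenSCAD models for blind and low-vision "
                    "programmers. List each part, its shape, size and position "
                    "in plain sentences, without visual jargon."
                ),
            },
            {"role": "user", "content": scad_code},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(describe_model(SCAD_SOURCE))
```

In A11yShape itself, a description like this is paired with a semantic hierarchy of the model and synchronized highlighting across the panels, so the text is one of several linked views rather than a standalone answer.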

A11yShape’s three panels synchronize code, AI descriptions, and model structure so blind programmers can discover how code changes affect designs independently. Image: Anhong Guo, Liang He, et al.

If a user selects a piece of code or a model component, A11yShape highlights the matching part across all three panels and updates the description, so blind and low-vision users always know what they’re working on.

User Feedback Improved Accessible Interface

The research team recruited 4 participants with a range of visual impairments and programming backgrounds. The team asked the participants to design models using A11yShape and observed their workflows.

One participant, who had never modeled before, said the tool “provided [the blind and low-vision community] with a new perspective on 3D modeling, demonstrating that we can indeed create relatively simple structures.”


Participants also reported that long text descriptions still make it hard to grasp complex shapes, and several said that without eventually touching a physical model or using a tactile display, it was difficult to fully “see” the design in their mind.

To evaluate the accuracy of the AI-generated descriptions, the research team recruited 15 sighted participants. “On a 1–5 scale, the descriptions earned average scores between about 4.1 and 5 for geometric accuracy, clarity, and avoiding hallucinations, suggesting the AI is reliable enough for everyday use.”

A failed all-at-once attempt to construct a 3D helicopter shows incorrect shapes and placement of elements; when each individual element is completed before moving forward, results significantly improve. Source: Anhong Guo, Liang He, et al.

The feedback will help to inform future iterations—which He says could integrate tactile displays, real-time 3D printing, and more concise AI-generated audio descriptions.

Beyond its applications in the professional computer programming community, He noted that A11yShape also lowers the barrier to entry for blind and low-vision computer programming learners.


“People like being able to express themselves in creative ways ... using technology such as 3D printing to make things for utility or entertainment,” says Stephanie Ludi, director of the DiscoverABILITY Lab and professor in the department of computer science and engineering at the University of North Texas. “Persons who are blind and visually impaired share that interest, with A11yShape serving as a model to support accessibility in the maker community.”

The team presented A11yShape in October at the ASSETS conference in Denver.

Applications are now open for the 2026 Swift Student Challenge — but hurry

Aiming to encourage app development and celebrate the most creative participants, Apple’s Swift Student Challenge is back and the winners will get to visit Apple Park.

Apple’s 2026 Swift Student Challenge is open for applications — image credit: Apple

As it has done every year since 2020, Apple is running a Swift Student Challenge. Applications for the contest to find innovative new app developers are open now and close on February 28, 2026.
Applications are sought from students working with Swift Playgrounds 4.6 or Xcode 26, or later. As well as having only around two weeks to apply, applicants face many eligibility requirements.
