Industrial IoT (IIoT) systems play a vital role in moving data across machines, systems, and other industrial devices. Data is gathered from sensor readings, machine logs, performance signals, and more. Because this data is in a constant state of flux, the way companies interpret it determines how much value they get from it. IIoT platforms turn this data into actionable insights that help predict outcomes and speed up operations. Moreover, these platforms act as a bridge between factory floors and IT systems.
But how do we make the best use of the data gathered from these devices? This is where AI and Machine Learning (ML) help reshape the industrial landscape. IIoT platforms, with the help of AI- and ML-driven analytics, foster better decision-making. They can highlight risks by keeping watch on real-time operational data, and companies can use that insight to adopt strategies that boost efficiency.
Manufacturing businesses are under constant pressure to reduce downtime and optimize performance so that they stay competitive in the market. When artificial intelligence and machine learning are coupled inside IIoT platforms, they help businesses derive key insights from their data, equipping people to work smarter and faster. They are proficient at providing analytics not only on what happened, but on how to avoid anomalies and what to do next. In short, they help us get ready for the future.
They support IIoT platforms in the following ways:
Identify anomalies and other discrepancies hidden within massive datasets.
Help forecast failures before they occur.
Optimize processes without requiring human assistance.
Support better decisions backed by data insights.
The result? A smarter, faster industrial environment driven by data-driven insights.
Features Strengthening IIoT Platforms Using AI and ML
1. Predictive Maintenance
In manufacturing units there are sophisticated, high-value machines, and the malfunction of a single unit can bring the entire production flow to a standstill. When a machine breaks down, repairs and maintenance can be expensive. With IIoT platforms, we can assess the health of machines and predict equipment failures well before they occur. These platforms analyze data at various levels, covering fluctuations in temperature, pressure, vibration, and more. In this way, companies can act on anomalies and bottlenecks early, avoiding costly repairs and unexpected downtime.
Benefits:
Reduced downtime
Lower maintenance costs
Longer equipment lifespan
Smarter scheduling of maintenance and repairs
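To make the idea concrete, here is a minimal, hypothetical sketch of how a failure-prediction model could be trained on historical sensor readings. The feature set (temperature, pressure, vibration), the synthetic data, and the failure rule are illustrative assumptions rather than details of any particular IIoT platform; a real deployment would use labeled maintenance history from the plant.

# Minimal sketch: predicting machine failure from sensor features (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 5000

# Synthetic history: temperature (deg C), pressure (bar), vibration (mm/s)
temperature = rng.normal(70, 8, n)
pressure = rng.normal(5.0, 0.6, n)
vibration = rng.normal(2.0, 0.5, n)

# Purely synthetic labeling rule: mark readings whose combination drifts far
# from normal as "failure expected within the next maintenance window".
failed = ((temperature > 85) & (vibration > 2.8)) | (pressure > 6.5)

X = np.column_stack([temperature, pressure, vibration])
y = failed.astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))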
2. Real-Time Anomaly Detection
Anomaly detection is one of the core functionalities of IIoT platforms. IoT sensors constantly generate enormous volumes of data, and anomaly detection flags data points that deviate from the usual pattern of operation.
In industrial environments, a minor defect in a machine can escalate into major production issues. AI algorithms continuously monitor and scan equipment data from sensors, identifying abnormalities in equipment behavior, whether a temperature variation or an unusual motor pattern, so that companies can make proactive decisions.
Benefits:
Prediction of risks with immediate alerts signaling abnormalities
Faster troubleshooting
Better quality control
Reducing downtime and maintenance costs
Reducing wastage
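As a simple illustration of the idea, the sketch below flags readings that deviate sharply from the recent trend using a rolling z-score. The window size, threshold, and synthetic temperature signal are illustrative assumptions; production platforms typically layer far more sophisticated models on top of this kind of baseline check.

# Minimal sketch: flagging anomalous sensor readings with a rolling z-score.
import numpy as np

def rolling_zscore_anomalies(values, window=60, threshold=3.0):
    """Return indices of readings that deviate strongly from the recent trend."""
    values = np.asarray(values, dtype=float)
    anomalies = []
    for i in range(window, len(values)):
        recent = values[i - window:i]
        mean, std = recent.mean(), recent.std()
        if std == 0:
            continue
        z = abs(values[i] - mean) / std
        if z > threshold:
            anomalies.append(i)
    return anomalies

# Example: a steady temperature signal with an injected fault at index 500
rng = np.random.default_rng(0)
signal = rng.normal(70.0, 0.5, 1000)
signal[500] += 8.0
print(rolling_zscore_anomalies(signal))   # expect roughly [500]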
3. Intelligent Process Optimization
AI and ML capabilities in industrial IoT platforms can give factories greater operational efficiency. They carefully monitor production cycles, resource usage, and machine performance, which helps pinpoint the causes of inefficiency. Traditional manufacturing processes relied heavily on manual inputs and were correspondingly time-consuming; now humans need to intervene far less in the operational workflow. To achieve optimal results, there should be a collaborative setup where humans and AI-driven processes work together.
With AI/ML in place, these platforms can recommend:
Optimal time for planning production schedules
Energy-saving opportunities
Ideal machine configurations
Automated process adjustments
Over time, the system learns from historical data and context, significantly improving operational efficiency without manual intervention.
4. Quality Assurance with Computer Vision
AI-driven vision systems inspect products in real time, detecting defects impossible to catch manually. They can even spot minor defects like dents and surface scratches which could be overlooked by the human eye. This enables swift corrective measures without waiting for manual checks. Along with IIoT data, they create a closed-loop feedback mechanism for automated corrections.
Applications include:
Assembly line inspection: Ensuring that each part in the production line is assembled correctly.
Packaging quality checks: Verifying that seals and labels are properly applied and undamaged.
Surface and texture analysis: Detecting dents and uneven surfaces on products.
Material defect detection: Screening materials for impurities and deformities as they enter production.
This eventually helps with driving consistency, speed, and accuracy across quality control processes.
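As a toy illustration of the kind of check these vision systems perform, the sketch below isolates dark, scratch-like regions on an otherwise uniform surface using OpenCV. The synthetic frame, threshold values, and minimum defect size are illustrative assumptions; production inspection systems pair this sort of preprocessing with trained defect-classification models and calibrated cameras.

# Minimal sketch: spotting dark surface defects on a uniform part (illustrative only).
import cv2
import numpy as np

# Synthetic "camera frame": a uniform grey surface with a dark scratch drawn on it.
frame = np.full((400, 600), 200, dtype=np.uint8)
cv2.line(frame, (120, 80), (480, 310), color=60, thickness=3)

# Smooth sensor noise, then isolate regions darker than the surface.
blurred = cv2.GaussianBlur(frame, (5, 5), 0)
_, defects = cv2.threshold(blurred, 150, 255, cv2.THRESH_BINARY_INV)

# Each contour is a candidate defect; ignore tiny specks.
contours, _ = cv2.findContours(defects, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) > 50:
        x, y, w, h = cv2.boundingRect(c)
        print(f"defect candidate at x={x}, y={y}, size={w}x{h}")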
5. Supply Chain Visibility and Optimization
IIoT is well suited to tracking assets, inventory, and shipments. Using IoT, operations such as warehouse management, resource optimization, and space utilization can be automated. With ML, this data becomes actionable and helps in:
Predicting delays
Recommending alternate routes
Optimizing stocking levels
Forecasting demand more accurately
Tracking packages and estimated delivery times
AI/ML ensures supply chains stay resilient with improved oversight and reduced costs.
Key Technologies Powering AI in IIoT
Digital Twins: Using this technology, industries can simulate the behavior of equipment on their factory floors from any convenient location and monitor its data in real time, removing the need to travel to each site. It paves the way for smarter forecasting and proactive problem-solving.
Edge Computing: Allows data to be analyzed close to where it is generated, rather than being sent to the cloud for processing as in traditional systems. Since the AI runs directly on sensors, machines, or other nearby devices, it can monitor information without delay.
Time-series forecasting models: Time-series models forecast key factors such as temperature, pressure, and energy consumption. By analyzing past trends, patterns, and changes, they can predict future values (a small forecasting sketch follows this list).
Reinforcement learning: Enables continuous process improvement and better decision-making. These systems do not rely solely on historical data but learn from context and improve their output over time.
Natural Language Interface (NLI): Makes exchanging information with the platform as easy as talking to a person, so even non-technical users with no coding background can get insights and generate reports.
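As a rough illustration of how such forecasting works, the sketch below fits a simple autoregressive model to a synthetic hourly energy-consumption series and projects it forward. The lag order, the synthetic daily cycle, and the least-squares fit are illustrative assumptions; platforms in practice use richer models such as ARIMA, exponential smoothing, or neural forecasters.

# Minimal sketch: forecasting a sensor series with a simple autoregressive (AR) model.
import numpy as np

def fit_ar(series, p=3):
    """Fit y[t] = c + a1*y[t-1] + ... + ap*y[t-p] by least squares."""
    y = np.asarray(series, dtype=float)
    rows = [np.concatenate(([1.0], y[t - p:t][::-1])) for t in range(p, len(y))]
    coeffs, *_ = np.linalg.lstsq(np.array(rows), y[p:], rcond=None)
    return coeffs  # [c, a1, ..., ap]

def forecast(series, coeffs, steps=6):
    """Roll the fitted AR model forward to predict the next `steps` values."""
    p = len(coeffs) - 1
    history = list(np.asarray(series, dtype=float))
    predictions = []
    for _ in range(steps):
        lags = history[-1:-p - 1:-1]               # y[t-1], ..., y[t-p]
        nxt = coeffs[0] + float(np.dot(coeffs[1:], lags))
        history.append(nxt)
        predictions.append(nxt)
    return predictions

# Synthetic hourly energy consumption with a daily cycle (illustrative data).
t = np.arange(24 * 30)
series = 50 + 10 * np.sin(2 * np.pi * t / 24) + np.random.default_rng(1).normal(0, 1, t.size)

coeffs = fit_ar(series, p=24)
print(forecast(series, coeffs, steps=6))   # next six hours, continuing the cycle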
Challenges to Consider
Although AI- and ML-powered analytics bring immense benefits to manufacturing workflows by enhancing production efficiency, IIoT platforms come with several challenges, and we need to be prepared for them. The first is managing the huge volume of data generated by IoT devices; AI/ML models also need to be trained on this data to produce good output. The second is data security: IoT devices deployed in insecure environments are prone to cyberattacks, and the data they generate must be safeguarded. The third is integration complexity. It is not always straightforward to connect the sensors, machines, and software inside a factory. Many are legacy systems that were never built for connectivity, and different machines use different communication protocols, so setting up these connections requires technical expertise. Merging data from all of these sources takes time and effort.
Business Impact: Why It Matters
Implementing AI- and ML-powered analytics in IIoT platforms paves the way for analyzing large data sets for improved decision-making. The insights gathered help prevent production bottlenecks and shape cost-optimization strategies. By carefully assessing data, production schedules can be planned in a sustainable way. Moreover, these platforms enhance quality through technologies such as predictive analytics, asset monitoring, and edge computing. The use of AI/ML fosters decision-making with automated insights. In short, such technologies make industries future-ready.
Conclusion
AI and ML in IIoT platforms pave the way for intelligent command centers that not only understand situations and contexts, but also predict and act based on real-time data. We are seeing a shift from manual data processes to AI-driven insights, helping companies reduce downtime, optimize processes, enhance quality, and make smarter operational decisions. For businesses aiming to adopt these technologies, AI-powered software development services can help transform existing software into intelligent systems that evolve with their ecosystems. Today’s world is data-driven, and embedding these insights into operational workflows can transform operations and give companies a significant competitive edge.
The RAI Institute has just unveiled Roadrunner, a compact robot no heavier than a medium-sized dog that moves in ways that catch you off guard. It glides across flat ground on wheels, shifts its stance to tackle a staircase, rides down a ramp with the kind of casual ease you would expect from something with years of practice, backs down another set of steps with equal confidence, and caps it all off by balancing on a single wheel while the rest of its body stays completely still.
The team behind this project is based in Massachusetts and has an amazing track record: the institute was founded by Marc Raibert, the former CEO of Boston Dynamics. This new venture continues the same emphasis on robots that can handle complex motion without appearing like complete clowns, and Roadrunner is their latest research platform, built to test out all sorts of ideas that most legged robots can only dream of.
At 15 kilograms the robot is light enough to move quickly without sacrificing structural integrity. Each leg ends in a wheel and has a knee joint that works equally well facing forward or backward, a symmetry that lets the machine adjust its stance instantly to sidestep an obstacle or line up for the next step. A single control system handles every movement style, from rolling side by side like a small cart to lining up like a scooter to taking actual walking steps. That same software has learned to get the robot back on its feet from almost any position on the ground and keep it balanced even when only one wheel is making contact with the surface.
Approaching a staircase, the robot slows, lifts a leg, and places the wheel onto the first step, repeating the motion steadily until it reaches the top, with the wheels only spinning when the terrain actually calls for it. Coming back down it simply turns around and descends with the same unhurried control, never losing its footing. None of this required additional fine tuning in the real world. The team refers to it as a zero-shot transfer, meaning the robot learned everything it needed entirely in simulation and carried that info straight into the physical world without any further adjustment. [Source]
SiliconRepublic.com spoke with experts at Amgen to explore how early career guidance can set the foundations for a happy and productive career.
The last decade has brought significant change to the working world and it is fair to say that in many cases, advancements have worked to reduce and even eliminate organisational silos. That is to say, in 2026 there is no real reason for employees – remote, hybrid or in-person – to feel isolated in their work or limited in how they might progress professionally.
That is where planned mentorship often comes in. For many professionals, mentorship can be the factor that enables them to upskill quickly, learn the ropes on the job, develop a network, move beyond their own expectations and even take up the mantle of mentor, eventually. But for that to happen, guidance has to be a key element of an organisation, not a box-ticking exercise every now and then.
“Mentorship has multiple benefits,” explained Michelle Somers, the senior director of facilities and engineering at Amgen. “One of the first things for an organisation to do, to encourage mentorship as a core pillar, is to set up some structured mentorship.
“Once that is there, the structure is there. You know, the questions are there, the pathways are there and then people get really familiar with it. Then mentorship really becomes a natural thing.”
For Somers, in establishing a system that supports mentorship publicly, organisations not only showcase their goals to empower career progression, but also make it clear that career guidance is not an anomaly, but part of a company’s ethos.
“I had a colleague come to me recently who said, ‘I know you’ve mentored a colleague of ours, any chance I can avail of your services?’ That turned into just a couple of coffee conversations, where I was able to be a sounding board on her potential career path.
“The structured programme sets up an expectation that people are available for help and support and then it happens quite naturally and fluidly, especially like what we do here in Amgen.”
Plan in action
Lauren Moore, a manufacturing manager at Amgen, is one such person to benefit from having a mentor take an interest in her career. As Moore’s career progressed at the organisation, she was promoted to a leadership role, which she took in her stride. However, roughly two months in, she began to face some of the challenges that naturally come with a change in expectations.
She told SiliconRepublic.com: “I was facing some challenges with the additional level of responsibility. So, I sat down with my mentor at the time, who was a leader in the manufacturing area. For me that was incredibly impactful at that early stage in my career. And it really enabled me to build confidence, to build resilience and ultimately to succeed in that position.”
Moreover, she is of the opinion that, in developing a positive attitude and adopting a strong sense of company culture, she, alongside Amgen, can better deliver medicines and vital treatments to the patients who depend on the organisation’s services.
For Amgen’s senior director of quality control, Claire Shaw, to achieve the best results for employees and for the people using Amgen’s services, companies have to prioritise inclusivity, especially at the induction level.
She said: “I would consider it very collaborative. There’s a strong sense of teamwork and a strong sense of belonging. Organisations can support a happy work environment that ensures that we serve our patients through developing their staff, and ensures each colleague is valued and can contribute to our daily mission to serve patients.”
Isn’t there some claim events come in threes? After the extremely rare leak of the iOS Coruna exploit chain recently, now we have details from Google on a second significant exploit in the wild, dubbed Darksword.
Like Coruna, Darksword appears to have followed the path of government security contractors, to different government actors, to crypto stealer. It appears to focus on exploits already fixed in modern iOS releases, with most affecting iOS 18 and all patched by iOS 26.3.
Going from almost no public examples of modern iOS exploits to two in as many weeks is wild, so if mobile device security is of interest, be sure to check out the Google write-up.
Another FBI Router Warning
The second too-early-to-be-retro – but too important to ignore – repeat security item is another alert from the FBI cautioning about end-of-life consumer network hardware under active exploitation, with the FBI tracking almost 400,000 device infections so far.
Like the warning two weeks ago, the FBI calls out a handful of consumer routers – but this time they’re devices that may actually still be in service in some of our homes (or those of our less cutting-edge friends and family), calling out devices from Netgear, TP-Link, D-Link, and Zyxel:
Netgear DGN2200v4 and AC1900 R700
TP-Link Archer C20, TL-WR840N, TL-WR849N, and WR841N
While many of these devices are over ten years old, they still support modern networking – some of them even support 802.11ac (also called Wi-Fi 5). Unfortunately, since support has been ended by the manufacturers, publicly disclosed vulnerabilities have not been patched (and now never will be, officially).
Once infected, the routers are enrolled in the AVRecon malware network, which includes the now-typical suite of behaviors: remote control, remote VPN access to the internal and external networks, DNS hijacking, and DDoS (distributed denial of service) attacks. This sort of network malware is used by attackers to exploit internal systems like un-patched Windows or IoT devices on the local network, and as a launching point to disguise behavior as coming from a certain country or state by using the public Internet connection as a VPN. It’s also often monetized by unscrupulous apps selling cheap VPN service.
The worst type of vulnerability affecting home routers is one which can be triggered remotely from the Internet without user interaction – for instance CVE-2024-12988, which allows arbitrary remote code execution on Netgear devices – but even vulnerabilities which are only accessible from the local network can be combined with cross-site vulnerabilities or vulnerabilities in other devices to exploit home routers. A malware infection on a Windows system can be leveraged to install additional, permanent malware on routers and IoT devices, and malware on a router can be used to redirect the user into installing more malware on an internal PC by manipulating the network, or to allow direct attack of internal systems via a proxy.
A slight upside is that this batch of vulnerable hardware is often modern enough to run OpenWRT or other replacement firmware. OpenWRT supports thousands of routers and access points – and often forms the basis of the commercial firmware the device was shipped with, before the manufacturer abandoned it. Converting a device to OpenWRT may be intimidating for some, but for anyone with one of the listed devices, the time to try is now! It’s cheaper than buying a new device, and worst case scenario, you’d have to replace that router anyway!
Unfortunately, vulnerabilities in home routers don’t offer many lessons: there’s rarely a need to log into them to see if there is a pending update, and almost nothing the typical home user can do except buy a new device when the manufacturer stops supplying security fixes.
Trivy Compromised
The Trivy security scanner suffered a breach itself, leading to a cascading series of breaches of other tools. Trivy is an automatic vulnerability scanner for finding vulnerabilities in the dependencies of Docker and other container images, package repositories, and language packages in Go, PHP, Python, Node, and many other popular languages. Trivy is often integrated into the CI/CD (continuous integration and continuous deployment) processes of other open and closed source projects and internal company processes.
According to the timeline published by Aqua, in late February 2026 a misconfigured GitHub workflow allowed the theft of authentication tokens for the Trivy project. While the attack was detected and the credentials removed, not all credentials were properly removed, which allowed the attackers to complete the attack on March 19, 2026.
Once compromised, all but one release of the Trivy GitHub Actions were replaced with trojaned malicious copies, spreading the malware payload to any project that used the Trivy scanner actions.
GitHub Actions is the part of GitHub that runs scripts when repository events like a pull request or merge occur. Actions can be used to check that a change compiles properly, scan for security issues, generate documentation, or produce release binaries, and they are typically allowed to make changes to the repository itself. GitHub workflows can include actions from other repositories via the Actions Marketplace. By replacing the Trivy actions, the attackers essentially gained access to every repository using Trivy to scan for vulnerabilities in its own codebase.
The hijacked Trivy actions collected and exfiltrated access tokens for Docker, Google Cloud, Azure, and AWS, Git credentials, SSH keys, and any other secrets from projects using the Trivy actions. With these keys, the controllers of the original malware are able to attack those projects directly, such as the immensely popular LiteLLM Python interface to AI LLM models from multiple companies.
The compromise of LiteLLM also stole credentials to cloud services, SSH, git, Docker, and Kubernetes on any system that ran the trojaned setup scripts, as well as infecting any connected Kubernetes systems found in the configurations.
There are also reports that the same actors are infecting NPM packages with malware which automatically updates itself from a blockchain-based control system and steals NPM authentication tokens to inject itself into any NPM packages the victim may have authored.
Supply-chain attacks have been happening for years with varying levels of success, but the Trivy attack may be the most successful in spreading compromised packages into multiple package repositories. It’s difficult to avoid supply-chain attacks, especially when the vulnerability scanner itself is the source of the problem. GitHub has introduced immutable releases – tagged build versions which cannot be updated once released – and the immutable release of Trivy was the only version not compromised by the attackers. As more packages shift to immutable versions it may become harder to insert malware into the supply chain, but we’re nowhere near a tipping point of projects using immutable releases yet.
The people at Signal Snowboards are well known not only for producing quality snowboards, but also for doing one-off builds out of unusual and perhaps questionable materials just to see what’s possible. From pennies to glass, if it can go on their press (and sometimes even if it can’t) they’ll build a snowboard out of it. At some point they were challenged to build different types of boards from paper products, which resulted in a few interesting final products, and that pushed them to see what else they could build from paper; they are now here with an acoustic guitar fashioned almost entirely from cardboard.
For this build, the luthiers are modeling the cardboard guitar on a 50s-era archtop jazz guitar called a Benedetto. The parts can’t all just be CNC machined out of stacks of glued-up cardboard, though. Not only because of the forces involved in their construction, but because the parts are crucial to a guitar’s sound. The top and back are pressed using custom molds to get exactly the right shape needed for a working soundboard, and the sides have another set of molds. The neck, which has the added duty of supporting the tension of the strings, gets special attention here as well. Each piece is filled with resin before being pressed in a manner surprisingly similar to producing snowboards. From there, the parts go to the luthier in Detroit.
At this point all of the parts are treated similarly to how a wood guitar might be built. The parts are trimmed down on a table saw, glued together, and then finished with a router before getting some other finishing treatments. From there the bridge, tuning pegs, pickups, and strings are added before finally getting finished up. The result is impressive, and without looking closely or being told it’s made from cardboard, it’s not obvious that it was the featured material here.
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
“Roadrunner” is a new bipedal wheeled robot prototype designed for multi-modal locomotion. It weighs around 15 kg (33 lb) and can seamlessly switch between its side-by-side and in-line wheel modes and stepping configurations depending on what is required for navigating its environment. The robot’s legs are entirely symmetric, allowing it to point its knees forward or backward, which can be used to avoid obstacles or manage specific movements. A single control policy was trained to handle both side-by-side and in-line driving. Several behaviors, including standing up from various ground configurations and balancing on one wheel, were successfully deployed zero-shot on the hardware.
Incredibly (INCREDIBLY!) NASA says that this is actually happening.
NASA’s SkyFall mission will build on the success of the Ingenuity Mars helicopter, which achieved the first powered, controlled flight on another planet. Using a daring mid-air deployment, SkyFall will deliver a team of next-gen Mars helicopters to scout human landing sites and map subsurface water ice.
NASA’s MoonFall mission will blaze a path for future Artemis missions by sending four highly mobile drones to survey the lunar surface around the Moon’s South Pole ahead of astronauts’ arrival there. MoonFall is built on the legacy of NASA’s Ingenuity Mars Helicopter. The drones will be launched together and released during descent to the surface. They will land and operate independently over the course of a lunar day (14 Earth days) and will be able to explore hard-to-reach areas, including permanently shadowed regions (PSRs), surveying terrain with high-definition optical cameras and other potential instruments.
For what it’s worth, Moon landings have a success rate well under 50%. So let’s send some robots there to land over and over!
In Science Robotics, researchers from the Tangible Media group led by Professor Hiroshi Ishii, together with colleagues from Politecnico di Bari, present Electrofluidic Fiber Muscles: a new class of artificial muscle fibers for robots and wearables. Unlike the rigid servo motors used in most robots, these fiber-shaped muscles are soft and flexible. They combine electrohydrodynamic (EHD) fiber pumps — slender tubes that move liquid using electric fields to generate pressure silently, with no moving parts — with fluid-filled fiber actuators. These artificial muscles could enable more agile untethered robots, as well as wearable assistive systems with compact actuation integrated directly into textiles.
In this study, we developed MEVIUS2, an open-source quadruped robot. It is comparable in size to Boston Dynamics Spot, equipped with two LiDARs and a C1 camera, and can freely climb stairs and steep slopes! All hardware, software, and learning environments are released as open source.
In this work, a multi-robot planning and control framework is presented and demonstrated with a team of 40 indoor robots, including both ground and aerial robots.
Quadrupedal robots can navigate cluttered environments like their animal counterparts, but their floating-base configuration makes them vulnerable to real-world uncertainties. Controllers that rely only on proprioception (body sensing) must physically collide with obstacles to detect them. Those that add exteroception (vision) need precisely modeled terrain maps that are hard to maintain in the wild. DreamWaQ++ bridges this gap by fusing both modalities through a resilient multi-modal reinforcement learning framework. The result: a single controller that handles rough terrains, steep slopes, and high-rise stairs—while gracefully recovering from sensor failures and situations it has never seen before.
While the pyramid exploration that iRobot did was very cool, they did it with a custom made robot designed for a very specific environment. Cleaning your floors is way, way harder. Here’s a bit more detail on the pyramids thing:
MIT engineers have designed a wristband that lets wearers control a robotic hand with their own movements. By moving their hands and fingers, users can direct a robot to perform specific tasks, or they can manipulate objects in a virtual environment with high-dexterity control.
At NVIDIA GTC 2026, we showcased how AI is moving into the physical world. Visitors interacted with robots using voice commands, watching them interpret intent and act in real time — powered by our KinetIQ AI brain.
Developed by Zhejiang Humanoid Robot Innovation Center Co., Ltd., the Naviai Robot is an intelligent cooking device. It can autonomously process ingredients, perform cooking tasks with high accuracy, adjust smart kitchen equipment in real time, and complete post-cooking cleaning. Equipped with multi-modal perception technology, it adapts to daily kitchen environments and ensures safe and stable operation.
This CMU RI Seminar is by Hadas Kress-Gazit from Cornell, on “Formal Methods for Robotics in the Age of Big Data.”
Formal methods – mathematical techniques for describing systems, capturing requirements, and providing guarantees – have been used to synthesize robot control from high-level specification, and to verify robot behavior. Given the recent advances in robot learning and data-driven models, what role can, and should, formal methods play in advancing robotics? In this talk I will give a few examples for what we can do with formal methods, discuss their promise and challenges, and describe the synergies I see with data-driven approaches.
This teacher captured the broader moment in education. Over the past several years, schools have been urged to respond to the rapid emergence of generative AI tools such as ChatGPT with limited information and a lot of hype and horror stories. Some have framed the technology as potentially transformative for teaching and learning, while others claim the opposite. Yet in many classrooms, adoption has been slower and more selective than the surrounding hype might suggest.
That hesitation is often interpreted as resistance to innovation, but conversations with educators suggest a different interpretation. In many cases, teachers behave as experts in most fields do when encountering a new technology, evaluating whether it solves a real problem. When professionals encounter a tool that is widely marketed but still evolving, they ask a basic question: What does this actually help me do better?
For many educators, that question remains unresolved when it comes to classroom instruction, and that’s what our research project aimed to answer: What are teachers experiencing with generative AI in their classrooms?
In fall 2024, EdSurge researchers facilitated discussions among a group of 17 teachers from around the world. We convened a group of third to 12th grade teachers, and some of them designed and delivered their own lesson plans, either teaching with or about AI.
Overall, our participants’ responses reflect a few major themes, with the most prominent sentiment being an air of indifference. In particular, a fourth grade math teacher participant attempted to use generative AI in her instruction. However, before adoption, she asked how AI could help her elementary students learn math. Her question captured what several participants were thinking, aligning with 2024 data from the Pew Research Center that shows educators were split on whether student AI use was more harmful than helpful.
A Technology Arriving Faster Than Schools Can Unpack
A high school computer science teacher from Georgia describes her fears about generative AI’s widespread push into classrooms:
One of my biggest fears is actually Arthur C. Clarke’s rule: any sufficiently advanced technology is indistinguishable from magic…we have students, parents, and teachers looking at AI as if it’s magic.
A high school library media specialist from New York described the same tension from a different angle:
There’s a fear about not being able to keep up with how things progress…the new tools and the impact it has on education.
Schools typically adopt new technologies through deliberate cycles of experimentation, professional development and evaluation. Generative AI has entered classrooms through a different pathway. Consumer tools became available to teachers and students simultaneously, often before schools had developed policies or instructional frameworks for using them.
The result is a situation in which educators encounter the technology while they are still trying to understand its implications.
Where AI Is Already Providing Value
In conversations with teachers, the pattern that appears consistently is a classic user design case. The most immediate use cases for generative AI have little to do with student learning. Instead, an engineering and computer science teacher in New Jersey addressed workload:
I have a running discussion with some of my colleagues about how to use AI to lesson plan. I use it routinely to lesson plan. I don’t really use the lessons, but we have to produce all this stuff for admin that no one reads… AI will just roll it off.
Another teacher described similar experimentation among colleagues:
It’s really great that so many people have kind of scratched the surface and are using it to support their productivity and efficiency… lesson planning and newsletters and stuff like that.
These examples reflect a pattern seen across many professions: Generative AI is particularly effective at drafting, summarizing and generating text. In contexts where professionals face time pressure and administrative demands, those capabilities can be immediately useful.
Teachers experience those same pressures. Beyond instruction, many juggle grading, lesson planning, parent communication, extracurricular supervision and administrative reporting. In that environment, a chatbot that helps compress routine tasks can feel genuinely helpful.
Recent research, as well as national survey data from RAND’s American Educator Panels, suggests that teachers are adopting generative AI primarily as a productivity tool rather than a core instructional technology, a pattern that mirrors how educators in this study described their own early experimentation.
However, instructional discretion is different from a teacher’s administrative workload.
The Instructional Use Case Remains Unclear
When teachers consider introducing AI tools to students during class time, the calculations they make change. The relevant question becomes: What student learning problem does this tool solve? Many educators are still trying to answer this question, even after several years of exposure to generative AI in some capacity.
Some teachers are experimenting with AI in limited ways, such as using it as a revision partner in writing. A science teacher from Guam said:
Students write a first draft and then feed it into ChatGPT for a second draft… but I push them not to use it for research.
Others are designing lessons where the technology itself becomes the subject of inquiry. A high school special education teacher in New York shared how she removes the veil from the magic of chatbots:
We purposely trained [a chatbot] wrong, so students could understand the data is only as good as how and who trains it.
Learning science research suggests that students benefit most when technology supports reflection and revision, rather than replacing the productive struggle of critical thinking and problem solving, a principle that many teachers in this study have applied. In these cases, AI becomes a tool that students analyze and critique. The participants do not attribute AI as a source of authoritative knowledge.
AI Literacy as a Practical Classroom Entry Point
Many teachers see the most promising instructional opportunity in AI literacy, as it may feel most appropriate to teach students about the tools they’re hearing about and encountering daily. International guidance from the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the Organisation for Economic Co-operation and Development (OECD) increasingly frames AI literacy as a foundational skill for students, encouraging schools to help young people understand how algorithmic systems generate information, rather than incorporating AI tools into everyday classroom tasks.
An elementary teacher from New York state describes focusing on helping students understand how these systems produce information and where they fail:
For me it starts with literacy — [teaching] students how to prompt, and then how to fact-check the information that’s generated to make sure there’s no bias in it.
A middle school teacher from New York uses simple analogies to illustrate how machine learning systems work:
We used an exercise about making the best peanut butter and jelly sandwich. The ingredients were the dataset, the procedure was the algorithm, and the output depended on how it was designed.
These lessons treat AI less as a productivity tool and more as a window into how digital systems generate knowledge.
Hallucinations, Bias and the Question of Trust
Teachers also raised consistent concerns about the reliability of generative AI outputs. An elementary library media specialist from New York said:
You ask ChatGPT to write a paper on something and it makes something up totally imaginary.
To illustrate the risks, some educators point to real-world examples. A high school French teacher shared:
I tried ChatGPT. I think it’s very useful if you know your content very well. If you don’t know your content, it’s hard to tell whether or not it’s accurate.
Others connect these issues to broader discussions about algorithmic bias, explaining why they fear that students will become reliant on these tools. A high school computer science teacher in New Jersey shares her concerns about the increased use of AI by students. She works at a school with large populations of African American, Latino and Black newcomer families from African and Caribbean countries:
When we talk about bias, we look at hiring data and incarceration data… and facial recognition systems where error rates vary depending on who the system is trying to recognize.
In these contexts, AI becomes less a tool for answering questions and more a case study of how technological systems shape information.
The “Air of Indifference”
Taken together, these conversations reveal a stance that is not often captured in public discussions of AI in schools. What initially appeared to be an insignificant factor in keeping teachers interested in robust discussions about AI turned out to be a prominent theme aligned with both existing and emerging research.
By and large, teachers are not rejecting the technology. But they are also not reorganizing their classrooms around AI.
Instead, many are adopting a posture that might be described as pragmatic indifference:
“I use it for lesson planning… but I don’t really use the lessons.”
“I push students not to use it for research.”
In other words, teachers are using AI where it clearly saves time while maintaining boundaries around core learning tasks. This posture reflects professional judgment, rather than resistance to inevitable technological innovation.
Schools exist partly to create conditions in which students practice complex cognitive work, such as deep reading, methodical writing, reasoning through problems and evaluating evidence. If a tool primarily reduces the need to perform that work, teachers have reason to question whether it advances or undermines learning.
And that brings us back to the fourth-grade teacher’s question: What can I use this for with fourth-grade math?
If the instructional use case for AI remains unclear, what should students be learning instead?
That question leads to a deeper conversation about the kinds of skills that remain valuable even as technologies change.
A large-scale campaign is targeting developers on GitHub with fake Visual Studio Code (VS Code) security alerts posted in the Discussions section of various projects, to trick users into downloading malware.
The spammy posts are crafted as vulnerability advisories and use realistic titles like “Severe Vulnerability – Immediate Update Required,” often including fake CVE IDs and urgent language.
In many cases, the threat actor impersonates real code maintainers or researchers for a false sense of legitimacy.
Application security company Socket says that the activity appears to be part of a well-organized, large-scale operation rather than a narrow, opportunistic attack.
The discussions are posted in an automated way from newly created or low-activity accounts across thousands of repositories within a few minutes, and trigger email notifications to a large number of tagged users and followers.
Fake security alerts on GitHub Discussions (Source: Socket)
“Early searches show thousands of nearly identical posts across repositories, indicating this is not an isolated incident but a coordinated spam campaign,” Socket researchers say in a report this week.
“Because GitHub Discussions trigger email notifications for participants and watchers, these posts are also delivered directly to developers’ inboxes.”
The posts include links to supposedly patched versions of the impacted VS Code extensions, hosted on external services such as Google Drive.
Example of the fake security alert (Source: Socket)
Although Google Drive is obviously not the official software distribution channel for a VS Code extension, it’s a trusted service, and users acting in haste may miss the red flag.
Clicking the Google link triggers a cookie-driven redirection chain that leads victims to drnatashachinn[.]com, which runs a JavaScript reconnaissance script.
This payload collects the victim’s timezone, locale, user agent, OS details, and indicators for automation. The data is packaged and sent to the command-and-control via a POST request.
Deobfuscated JS payload (Source: Socket)
This step serves as a traffic distribution system (TDS) filtering layer, profiling targets to weed out bots and researchers and delivering the second stage only to validated victims.
Socket did not capture the second-stage payload, but noted that the JS script does not deliver it directly, nor does it attempt to capture credentials.
This is not the first time threat actors have abused legitimate GitHub notification systems to distribute phishing and malware.
In March 2025, a widespread phishing campaign targeted 12,000 GitHub repositories with fake security alerts designed to trick developers into authorizing a malicious OAuth app that gave attackers access to their accounts.
In June 2024, threat actors triggered GitHub’s email system via spam comments and pull requests submitted on repositories, to direct targets to phishing pages.
When faced with security alerts, users are advised to verify vulnerability identifiers against authoritative sources, such as the National Vulnerability Database (NVD), CISA’s Known Exploited Vulnerabilities catalog, or MITRE’s website for the Common Vulnerabilities and Exposures program, to take a moment to consider their legitimacy before jumping into action, and to look for signs of fraud such as external download links, unverifiable CVEs, and mass tagging of unrelated users.
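As a quick sanity check along those lines, the sketch below queries the public NVD REST API to see whether a CVE identifier actually exists before anyone acts on an alert. The endpoint and response field shown are based on the NVD 2.0 API and the example IDs are illustrative; verify the details against current NVD documentation before relying on this.

# Minimal sketch: checking a CVE ID against the National Vulnerability Database.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cve_exists(cve_id: str) -> bool:
    """Return True if the NVD has a record for the given CVE identifier."""
    resp = requests.get(NVD_API, params={"cveId": cve_id}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("totalResults", 0) > 0

print(cve_exists("CVE-2021-44228"))   # a well-known, real CVE (Log4Shell)
print(cve_exists("CVE-2099-99999"))   # the sort of ID a spam post might invent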
Unless you’ve been in hibernation, the flurry of attention surrounding the latest AI models coming out of Silicon Valley has been hard to miss. AI has gone beyond a chatbot merely answering your questions to doing stuff that only human programmers used to be able to do.
But we’ve been through these cycles involving tech before. How can we tell what’s actually real and what’s mere hype?
To answer this question, I invited Kelsey Piper, one of the best reporters on AI out there. Kelsey is a former colleague here at Vox and is now doing great work for The Argument, a Substack-based magazine. Kelsey is an optimist about tech — but clear-eyed about the huge risks from AI. She’s very much a power user, but is realistic about what AI can’t do yet. And she’s been banging the drum about how consequential AI is for years, even before it became such a hot mainstream topic.
Kelsey and I discuss all the reasons why the hype this time is rooted in something real, how we got here, and where we might be headed. As always, there’s much more in the full podcast, which drops every Monday and Friday, so listen to and follow us on Apple Podcasts, Spotify, Pandora, or wherever you find podcasts. This interview has been edited for length and clarity.
What’s actually happening right now in AI?
If you look closely, AI is already a big deal. Not in some abstract future sense, but right now. The closest analogy is not a new app or a new platform. It’s more like discovering a new continent full of people who are very good at doing certain kinds of work.
These systems are not people, but they can do things that used to require people. They can write code, generate text, solve problems, and increasingly do so in ways that are very useful in the real world.
And the key point is that it’s not stopping here. Every year the systems get better. The progress from 2025 to 2026 alone is enough to make it clear that this isn’t a static technology.
Whatever AI can do today, it will be able to do more of it tomorrow and so on.
Why is the reaction so split between panic and dismissal?
The default move is to assume nothing ever really changes.
If you’re a pundit, you can get pretty far by always saying this is hype, this will pass, nothing fundamental is happening. That works most of the time. It worked with crypto. It works with a lot of overhyped technologies.
But sometimes it’s just catastrophically wrong. Think about the early days of the internet, or the Industrial Revolution. Or even something like Covid. There were moments where people said this will blow over, and they were completely wrong. So you can’t just default to cynicism. You have to actually look at the thing itself.
“We still have time. That’s the most optimistic thing I can say.”
What would you say has really changed recently? Why does this hype cycle feel different?
Part of it is just accumulation. For a while, you could look at progress in AI and say, maybe this is a short trend. Maybe it plateaus. There were only a handful of data points. Now there are many, many more. And the trend has continued.
Another part is that the systems are now doing things that feel qualitatively different. Not just answering questions, but acting. Planning. Taking steps toward goals.
And then there’s a social dynamic. Most people use the free versions of these tools. Those are much worse than the best models. So they underestimate what is possible.
I don’t really think of you as an AI optimist or a doomer, and you’re normally pretty level-headed about the state of things, but do you think we’re entering dangerous territory?
I’m generally pro technology. Technology has made human life better in profound ways. That’s just true.
But I also think the way AI is currently being developed is dangerous. And the reason is that we’re building systems that can act in the world, access information, and increasingly operate with a degree of independence. We’re giving them access to things like communication channels, financial tools, and potentially critical infrastructure.
And we don’t fully understand how they behave. In controlled settings, we have seen these systems lie, deceive, and do things that are misaligned with what we asked them to do. They’re not doing this because they’re evil. They’re doing it because of how they are trained and how goals are specified.
But the result is the same. You have systems that do not always do what you intend, and that can be hard to monitor or control.
What do you mean when you say these systems lie and deceive?
In experiments, researchers give AI systems goals and access to information, then observe how they try to achieve those goals.
In some cases, the systems have used information they have access to in ways that are clearly not what we would want. For example, threatening to reveal sensitive information about a person if that person does not cooperate.
These are controlled tests, not real-world deployments. But they show what the systems are capable of under certain conditions. And that’s pretty concerning.
Yeah. Alignment is about making sure that AI systems do what we want them to do. And not just superficially, but in a robust way.
The difficulty is that when you give a system a goal, it can pursue that goal in ways you did not anticipate. Like a child who learns to get out of eating dinner by making it look like they ate dinner.
The system is optimizing for something, but not necessarily in the way you planned. That gap between intent and behavior is really the core of the alignment problem.
How confident are you in the guardrails being built around these systems?
Not very. There are people working seriously on this problem. They’re testing models, trying to understand how they behave, trying to detect deception.
But they’re also finding that the models can recognize when they are being tested and adjust their behavior accordingly.
That’s definitely a serious issue. If your system behaves well when it knows it’s being evaluated, but differently otherwise, then your evaluations are not telling you what you need to know. To me, that’s the kind of finding that should slow things down. It suggests we don’t understand these systems well enough to safely scale them.
So why do the companies keep pushing forward anyway?
Because it’s a competition. Each company can say it would be better if everyone slowed down. But if we slow down and others don’t, we fall behind. So they keep moving.
There are also a lot of geopolitical concerns. If one country slows down and another doesn’t, that creates another layer of pressure.
The shift is from systems that respond to prompts to systems that can do things in the world.
An AI agent can be given a goal and then take steps to achieve it. That might involve interacting with websites, or sending messages, or hiring people through gig platforms, or coordinating tasks. Stuff like that. But even without physical bodies, they can affect the real world by directing humans or using digital infrastructure. That changes the nature of the technology. It’s no longer just a tool you use. It’s something that can operate on its own.
How scary could that become?
Potentially very. Even if you ignore the most extreme scenarios, these systems could be used for large-scale cyber attacks, misinformation campaigns, or other forms of disruption. The companies themselves acknowledge this. They understand. They test for these risks and implement safeguards. But safeguards can be bypassed, and the systems are getting more capable.
Are we even remotely prepared for what is coming?
No. We’re almost never prepared for major technological shifts. But the speed of this one makes it particularly challenging. If change happens slowly, we can catch up. If it happens too quickly, we can’t. And right now, the incentives are pushing almost entirely toward speed.
What’s the most realistic worst case and best case scenario?
The worst case is that we build increasingly powerful systems, hand over more and more control, and eventually create something that operates independently in ways we cannot control. Humans become less central to decision-making, and the systems pursue goals that don’t align with human well-being.
The best case is that we slow down enough to understand what we’re building, develop robust safeguards, and use these systems to create abundance and improve human life. That could mean less work, more resources, better access to knowledge, and more freedom. But getting there requires making good choices now.
Do you think we’ll make those choices?
We still have time. That’s the most optimistic thing I can say.
Winter testing has been completed for the VW ID.EVERY1, the first vehicle under a joint venture between Rivian and Volkswagen Group to be equipped with the EV maker’s software and electrical architecture. That’s not just progress toward getting this vehicle into customers’ hands; it also unlocks another $1 billion investment from Volkswagen Group into Rivian.
About $750 million is coming in the form of an equity investment. The other $250 million is either equity or convertible debt, depending on which prototypes Volkswagen Group provided to Rivian for testing. (The companies did not make this immediately clear.)
The German automotive giant has already invested a little more than $3 billion in Rivian as part of the joint venture. And there’s more to come. Rivian will be able to borrow up to $1 billion from Volkswagen Group starting in October. Rivian also gets another $460 million equity investment from Volkswagen after the first vehicle goes on sale using the joint venture’s tech. All told, the deal could be worth as much as $5.8 billion to Rivian.
The winter testing milestone payment has been delivered just months before Rivian starts selling the R2 SUV, which founder and CEO RJ Scaringe has said is “maybe the most important thing we’ve launched to date.” Rivian is banking on a very fast scaling of R2 production and sales.
Apple’s MacBook Neo brings the A18 Pro chip from the iPhone 16 to an entry level laptop priced to compete at the accessible end of the market. To keep it slim and completely silent, Apple ditched fans entirely in favor of a graphene thermal pad sandwiched between the processor and the chassis to dissipate heat. It is an elegant solution for everyday tasks, but it puts a ceiling on how hard the chip can push when the workload gets demanding.
ETA Prime saw room for improvement and immediately took the MacBook Neo apart to find out how much. He fashioned a custom copper sheet shaped to sit around the CPU, cleaned the chip with isopropyl alcohol, applied fresh thermal paste, and topped it with a thermal pad to help the copper pull heat away from the chip and into the chassis. No permanent modifications, no adhesive, just a few screws and careful hands.
The results were immediate: frame rates in No Man’s Sky climbed from around 30 frames per second to a smooth 58, and processor temperatures dropped from 105 degrees Celsius into the mid-eighties. Geekbench 6 scores followed suit, with multi-core performance up by around 10 percent and single-core gains exceeding 15 percent. With the chip staying cooler for longer, sustained performance improved noticeably across everyday tasks as well, and through all of it the MacBook Neo remained completely silent.
The first modification made it clear that the processor had significantly more headroom than Apple was allowing it to use. ETA Prime pushed things further by adding a small magnetic Peltier cooler powered through a USB-C cable drawing 50 watts. The device uses electricity to generate a cold side capable of dropping below freezing, cold enough to form ice on the surface during testing, while liquid channels carry the heat away on the other side. A simple adapter clamped the whole thing firmly against the copper plate already in place.
Temperatures dropped again, settling into the mid-seventies under the same gaming load and returning to just above room temperature at idle. The benchmarks told a compelling story. Geekbench 6 single-core scores were up 17.5 percent over stock and multi-core climbed 18.5 percent, while Cinebench showed similar gains of around 24 percent single-core and 19 percent multi-core. No Man’s Sky held a steady 80 frames per second over a 30-minute session, and Fallout 4 ran at a smooth 60 frames per second on just 8GB of RAM with the help of compatibility software and storage swap support.
The entire project remained reversible at every stage, with the copper sheet and external cooler leaving no permanent mark on the hardware. The only real cost was the extra power draw from the Peltier unit, and the performance gains made that a very easy trade to justify. A laptop that was never intended for gaming suddenly becomes a surprisingly capable one. [Source]