
Nvidia rival Cerebras raises $1bn at $23bn valuation


Cerebras raised $1.1bn in a previous round last September at an $8.1bn post-money valuation.

Cerebras Systems, the AI chipmaker aiming to rival Nvidia, has raised $1bn in a Series H round led by Tiger Global with participation from AMD. The raise values the company at around $23bn, nearly triple its valuation from a little over four months ago.

Other backers in this round include Benchmark, Fidelity Management & Research Company, Atreides Management, Alpha Wave Global, Altimeter, Coatue and 1789 Capital, among others.

The new round comes after Cerebras raised $1.1bn last September at an $8.1bn post-money valuation backed by several of the same investors.


Just days later, the company withdrew from a planned initial public offering (IPO) without providing an official reason. At the time of the IPO filing in 2024, the company drew criticism for its heavy reliance on a single United Arab Emirates-based customer, the Microsoft-backed G42.

Cerebras said it still intends to go public as soon as possible.

The recent raise better positions the company to compete with global AI chip leader Nvidia. Cerebras claims that it builds the “fastest AI infrastructure in the world”, and CEO Andrew Feldman has gone on record saying its hardware runs AI models several times faster than Nvidia’s.

Cerebras is behind WSE-3, touted as the “largest” AI chip ever built, with 19 times more transistors and 28 times more compute than the Nvidia B200, according to the company.


The company has a close connection with OpenAI, according to statements made by both Feldman and OpenAI chief Sam Altman – who happens to be an early investor in the chipmaker. Last month, the two announced a partnership to deploy 750MW of Cerebras’s wafer-scale systems to make OpenAI’s chatbots faster.

OpenAI – a voracious user of Nvidia’s AI technology – has been searching for alternatives, although that’s not to say it is backing down from using Nvidia technology in the future.

Last year, OpenAI drew up a 6GW agreement with AMD to power its AI infrastructure. The first 1GW deployment of AMD Instinct MI450 GPUs is set to begin in the second half of 2026.

At the time of the announcement, Altman said that the deal was “incremental” to OpenAI’s work with Nvidia. “We plan to increase our Nvidia purchasing over time,” he added.



How Automakers Are Using Blockchain Tech, And Why It’s So Useful


You don’t hear much about blockchain these days. Back in the late 2010s, when everyone was talking about NFTs and cryptocurrency, companies were keen to put “blockchain” front and center in their press releases. “Look at us,” they were saying, “we’re embracing modern technology.” But after cryptocurrency’s troubled evolution, brands seemed to decide they didn’t need the baggage that came with the word “blockchain.” That doesn’t mean the technology has gone away; companies are just likely to call it something different now, and you’re more likely to hear it referred to as distributed ledgers or “on-chain” tech.

According to the cryptocurrency exchange Coinbase, 60% of Fortune 500 companies are working on blockchain initiatives. The sectors that use blockchain the most are banking and finance, which account for around 20% of its use, but it’s used across all types of business, including the automotive industry.

Before we look at how carmakers use blockchain, it’s useful to understand what exactly blockchain is. At its most basic, blockchain is a shared digital record that isn’t controlled by any single company or authority. Instead, identical copies are stored across a network of computers, and new information is added in secure, time-stamped “blocks” that are linked together. Because each new entry is verified by the network and connected to what came before it, the record is very difficult to alter or tamper with. This immutability makes it useful for automakers who are looking to provide things like digital battery passports and vehicle provenance. However, some car manufacturers are planning to take the tech even further.
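To make that mechanism concrete, here is a minimal sketch in Python (purely illustrative, not any automaker’s actual system) of the hash-linking idea: each block stores a hash of the previous block, so altering any historical entry breaks every link that follows it.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, data: dict) -> None:
    """Append a new time-stamped block linked to the previous one."""
    prev = chain[-1]
    chain.append({
        "index": prev["index"] + 1,
        "timestamp": time.time(),
        "data": data,
        "prev_hash": block_hash(prev),  # the link that makes tampering evident
    })

def verify(chain: list) -> bool:
    """Re-derive every link; an altered block breaks the chain after it."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

# Usage: a toy battery-passport-style ledger (field names are made up).
chain = [{"index": 0, "timestamp": time.time(), "data": {"event": "genesis"}, "prev_hash": ""}]
add_block(chain, {"event": "cobalt sourced", "lot": "A-1"})
add_block(chain, {"event": "cell manufactured", "plant": "X"})
print(verify(chain))             # True
chain[1]["data"]["lot"] = "B-9"  # tamper with history...
print(verify(chain))             # False: the edit no longer matches the recorded hash
```

A real deployment adds the part this sketch omits: many parties hold identical copies and a consensus rule decides which chain is authoritative, which is what removes the single controlling authority.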


Blockchain is used to store records about supply chains and provenance

Blockchain is useful when it comes to storing digital battery passports. These are electronic records tracking the lifecycle of an EV battery and are going to be required in all countries in the European Union by 2027. This regulation affects all automakers who are selling into Europe — including those headquartered in the United States. Automakers need traceability data, and supply chains are international. A modern electric vehicle battery isn’t a single bill of materials so much as a web of upstream mining, refining, processing, cell manufacturing, pack assembly, recycling, and logistics. A blockchain-powered distributed ledger can serve as one definitive record of permissions and provenance that can be shared by different companies.

In June 2024, Volvo Cars launched what it claimed to be the world’s first EV battery passport for its EX90 SUV. The passport uses blockchain to record information such as the origins of raw materials, recycled content, and carbon footprint. Volvo plans to expand the scheme to more of its cars. Meanwhile, Tesla has implemented blockchain solutions to trace the provenance of cobalt in its supply chains. Hyundai and Kia developed an Integrated Greenhouse Gas Information System (IGIS), using blockchain to record emissions across the whole lifecycle of a vehicle.

Advertisement

Another use for blockchain is providing proof of provenance for collectible cars. Porsche is leveraging that immutability to launch a blockchain-based digital passport pilot for classic cars, as well as other collectibles like watches or paintings. Automakers aren’t the only ones using blockchain for car records. In July 2024, Reuters reported that the California Department of Motor Vehicles had digitized 42 million car titles using blockchain technology to detect fraud and streamline title transfers.


Other uses for blockchain in the automotive industry

One of the main uses of blockchain in the automotive industry is handling companies’ finances. For example, BMW uses a blockchain system from JPMorgan to handle international financial transactions automatically. However, some pilots and plans suggest that there may be more innovative uses in the future. The much-hyped — but still not yet available — Sony/Honda Afeela EV sedan promises an “on-chain mobility service platform leveraging a token-based incentive model.” Details are still pretty fuzzy, but it does indicate another use for blockchain in the automotive industry, even if it is just persuading people to share their data by giving them cryptocurrency. Nissan is proposing something similar with its Nissan Passport, which it describes as a “digital certificate that expands the range of experiences you can access based on your actions.”

Toyota, the world’s largest automaker, is betting big on blockchain tech and has its own “Blockchain Lab” exploring how blockchain could be used to give vehicles a secure digital identity, bundle fleets into investable portfolios, and make it easier to attract funding for things like electric vehicle fleets and new mobility services. It is proposing a new blockchain-based protocol called the Mobility Orchestration Network (MON), which would link vehicles with other agencies, like regulators, on one all-encompassing digital platform. Toyota’s interest in blockchain goes beyond car manufacturing. It created Woven City, a blockchain-integrated smart city, in September 2025. The goal here is to use blockchain as a trusted digital system that lets people safely share vehicles, electricity, and city services without needing middlemen or paperwork.





Hackers now exploit critical F5 BIG-IP flaw in attacks, patch now



Cybersecurity firm F5 Networks has reclassified a BIG-IP APM denial-of-service (DoS) vulnerability as a critical-severity remote code execution (RCE) flaw, warning that attackers are exploiting it to deploy webshells on unpatched devices.

BIG-IP APM (short for Access Policy Manager) is a centralized access management proxy solution that enables admins to secure and manage user access to their organizations’ networks, cloud, applications, and application programming interfaces (APIs).

Tracked as CVE-2025-53521, this security flaw can be exploited by attackers without privileges to achieve remote code execution on BIG-IP APM systems with access policies configured on a virtual server.

In addition to flagging the vulnerability as being exploited in the wild, F5 published indicators of compromise (IOCs) and advised defenders to check their BIG-IP systems’ disks, logs, and terminal history for signs of malicious activity.
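F5’s advisory contains the authoritative indicators; as a generic illustration of that kind of triage (the directory, look-back window, and patterns below are assumptions for the sketch, not taken from F5’s IOC list), a defender might sweep for recently written files and suspicious shell history along these lines:

```python
# Hypothetical triage sketch -- the real paths and IOCs come from F5's advisory.
import time
from pathlib import Path

WEBROOT = Path("/usr/local/www")  # assumed web-served directory, not a documented F5 path
LOOKBACK_DAYS = 30
cutoff = time.time() - LOOKBACK_DAYS * 86400

# Webshells are often newly written files: flag anything recent under the web root.
for path in WEBROOT.rglob("*"):
    try:
        if path.is_file() and path.stat().st_mtime > cutoff:
            print(f"recently modified: {path}")
    except OSError:
        continue  # skip unreadable entries rather than aborting the sweep

# Terminal history: look for download-and-drop patterns touching the web root.
history = Path.home() / ".bash_history"
if history.exists():
    for lineno, line in enumerate(history.read_text(errors="ignore").splitlines(), 1):
        if any(tool in line for tool in ("curl", "wget")) and str(WEBROOT) in line:
            print(f"history:{lineno}: {line}")
```

Anything such a sweep turns up should be preserved rather than cleaned up until forensics are complete, which is the point of F5’s guidance below.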


“This known vulnerability was previously categorized and remediated as a Denial-of-Service (DoS) vulnerability. Due to new information obtained in March 2026, the original vulnerability is being re-categorized to an RCE. The original CVE remediation has been validated to address the RCE in the fixed versions. We have learned that this vulnerability has been exploited in the vulnerable BIG-IP versions,” F5 warned in an advisory update published this Sunday.

“F5 strongly recommends that you consult your corporate security policy for guidelines about incident handling procedures including but not limited to forensic best practices, that are specific to your organization. More specifically, review the policies to ensure that they comply with evidence collection and forensics procedures for a security incident before you attempt to recover the system,” the company added.

Internet threat-monitoring non-profit organization Shadowserver now tracks over 240,000 BIG-IP instances exposed online; however, there is no information on how many have a vulnerable configuration or have already been secured against CVE-2025-53521 attacks.

[Image: F5 BIG-IP systems exposed online (BleepingComputer)]

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) also added the vulnerability to its list of actively exploited flaws on Friday and ordered federal agencies to secure their BIG-IP APM systems by midnight on Monday, March 30.

“This type of vulnerability is a frequent attack vector for malicious cyber actors and poses significant risks to the federal enterprise,” it warned.


“Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.”

In recent years, BIG-IP vulnerabilities have been exploited by nation-state and cybercrime threat groups to breach corporate networks, map internal servers, deploy data-wiping malware, hijack devices, and steal sensitive documents from victims’ networks.

F5 is a Fortune 500 technology giant that provides cybersecurity, application delivery networking (ADN), and various other services to more than 23,000 customers worldwide, including 48 of the Fortune 50 companies.



Starcloud raises $170 million Series A to build data centers in space


Starcloud’s latest funding round values the space compute company at $1.1 billion, making it one of the fastest startups to reach unicorn status after graduating from Y Combinator.

The company’s Series A, which closed 17 months after its demo day presentation, was led by Benchmark and EQT Ventures. It’s another sign of the interest in outsourcing data centers to orbit as resource and political obstacles slow their development on Earth, but the business model depends on unproven technology and significant capital expenditure.

Starcloud has now raised a total of $200 million, and launched its first satellite with an Nvidia H100 GPU in November 2025. The company will launch a more powerful version, Starcloud 2, later this year with multiple GPUs, including an Nvidia Blackwell chip and an AWS server blade, as well as a bitcoin mining computer.

The company will also begin developing a data center spacecraft designed to launch from Starship, the reusable heavy lift rocket being built by Elon Musk’s SpaceX. Starcloud 3, as the spacecraft is named, will be a 200-kilowatt, three-ton spacecraft that fits the “pez dispenser” system SpaceX designed to deploy its Starlink satellites from Starship.


CEO and founder Philip Johnston said he expects that will be the first orbital data center that is cost-competitive with terrestrial data centers, with costs on the order of $0.05 per kilowatt-hour — if commercial launch costs land around $500 per kilogram.
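A rough back-of-envelope using only those figures (with spacecraft lifetime assumed for illustration, and hardware, operations, and degradation ignored) shows why the target depends on both launch price and how long the hardware survives:

```python
def launch_cost_per_kwh(mass_kg: float, power_kw: float,
                        price_per_kg: float, lifetime_years: float) -> float:
    """Launch-cost contribution to each kWh delivered over the craft's life."""
    launch_cost = mass_kg * price_per_kg
    kwh_delivered = power_kw * 24 * 365 * lifetime_years  # sunlight is near-continuous in orbit
    return launch_cost / kwh_delivered

# Starcloud 3's reported figures: three tons, 200 kW, $500 per kilogram to orbit.
for years in (5, 10, 15):
    cost = launch_cost_per_kwh(3000, 200, 500, years)
    print(f"{years:>2}-year lifetime: ${cost:.3f}/kWh from launch costs alone")
# Longer lifetimes and cheaper launches are what pull the figure toward $0.05/kWh.
```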

The challenge is that Starship isn’t flying yet; Johnston says he expects commercial access to open up in 2028 and 2029. That’s the reality facing all the big space data center projects: powerful space computers will be cost-prohibitive until a new generation of rockets starts launching at a high operational cadence, something that might not happen until the 2030s.

“If it ends up being delayed, we’ll just carry on launching the smaller versions on Falcon 9,” Johnston said. “We’re not going to be competitive on energy costs until Starship is flying frequently.”

“There’s kind of two business models,” Johnston explains: One is selling processing power to other spacecraft on orbit; the company’s first satellite, for example, analyzes data collected by Capella Space’s radar spacecraft. Then, in the future when launch costs go down, more powerful distributed data centers could potentially pull work from their terrestrial counterparts.


That gets at how new this industry really is. When Nvidia CEO Jensen Huang unveiled the company’s Vera Rubin Space-1 chip modules at his company’s annual GPU Technology Conference last week, he didn’t note that none had been produced or shared with the company’s development partners. 

In fact, the number of advanced GPUs on orbit is in the dozens, while Nvidia is estimated to have sold nearly 4 million to terrestrial hyperscalers in 2025.

Or consider that SpaceX’s Starlink communications network, the largest satellite network in orbit with 10,000 spacecraft, generates something on the order of 200 megawatts of power, while data centers with more than 25 gigawatts of power are currently under construction in the U.S., according to Cushman & Wakefield.

Johnston argues that his company is well ahead of the competition, with the first terrestrial GPU deployed in orbit. It was used to train an AI model in orbit, a first, according to Starcloud, and run a version of Gemini. Beyond the performance, Johnston says Starcloud now has valuable data about what it takes to run a powerful chip in space.


“An H100 is probably not the best chip for space, to be honest, but the reason we did it is we wanted to prove that we could run state of the art terrestrial chips in space,” he told TechCrunch. That hard-won knowledge — another GPU, an Nvidia A6000, failed during launch — will influence future designs.

There is a laundry list of technical challenges to be solved, including efficient power generation and cooling the hot-running chips. Starcloud 2 will have the largest deployable radiator flown on a private satellite, and Johnston said he expects at least two additional versions of that spacecraft to head to orbit.

Then there is the challenge of synchronization. The largest data center workloads, often for training, require hundreds or thousands of GPUs to work in tandem. Doing that in space will either require fantastically large spacecraft, or powerful and reliable laser links between spacecraft flying in formation. Most companies working on this technology expect those workloads to come long after simpler inference tasks take place on orbit.

Besides Starcloud, Aetherflux, Google’s Project Suncatcher, and Aethero — which launched Nvidia’s first space-based Jetson GPU in 2025 — are all developing space data center businesses. 


The elephant in the room is SpaceX itself, which has asked the U.S. government for permission to build and operate a million satellites for distributed compute in space.

Going head-to-head with SpaceX is a daunting task for any entrepreneur, but Johnston sees room for coexistence.

“They are building for a slightly different use case than us,” he told TechCrunch. “They’re mainly planning on serving Grok and Tesla workloads. It may be at some point that they offer a third party cloud service, but what I think they are unlikely to do is what we’re doing [as] an energy and infrastructure player.”


The Hazards Of Charging USB-C Equipped Cells In-Situ


Can you charge those Li-ion based cells with USB-C charging ports without taking them out of the device? While this would seem to be answered with an unequivocal ‘yes’, recently [Colin] found out that this could easily have destroyed the device they were to be installed in.

After being tasked with finding a better way to keep the electronics of some exercise bikes powered than simply swapping the C cells all the time, [Colin] was led to consider using these Li-ion cells in such a manner. Fortunately, rather than just sticking the whole thing together and calling it a day, he decided to take some measurements to satisfy some burning safety questions.

As it turns out, at least the cells he tested – charged via a twin USB-C connector on a single USB-A cable – have all their negative terminals and USB-C grounds connected together. Since the cells are installed in a typical series configuration in the device, this would have made for an interesting outcome: tying the grounds together through a shared charger effectively short-circuits the lower cell in the string. Although you can of course use separate USB-C leads and chargers per cell, it’s still somewhat disconcerting to run it without any kind of electrical isolation.


In this regard, the suggestion by some commentators to use NiMH cells and trickle-charge them in-situ, similar to garden PV lights, might be one of the least crazy solutions.


ShinyHunters claim responsibility for European Commission breach


Reportedly, the crime group accessed more than 350GB of stolen data, including dumps from mail servers, databases, confidential documents, contracts and other sensitive material.

The extortion group ShinyHunters has been linked to the recent (24 March) breach of the European Commission’s Europa.eu platform, in which a reported 350GB of data, across multiple databases, was accessed and stolen. 

In a statement issued after the incident (27 March), the European Commission said its early findings suggest that private data has been accessed and that Union entities affected by the attack will be contacted. The Commission’s internal systems are not believed to have been affected.

The Commission explained it will continue to monitor the situation, taking the necessary precautions to ensure the security of its systems and data, as well as work to analyse what happened so it can use the results to improve its cybersecurity capabilities.


While the Commission has not shared further details on the incident, alleged data dumps uploaded to ShinyHunters’ Tor data leak site are said to include content from mail servers, internal communications systems, databases, confidential documents, contracts and additional sensitive material. 90GB of information allegedly stolen from the European Commission’s compromised cloud network has already been shared. 

ShinyHunters is an extortion group, active since around 2020, that has carried out a number of high-profile, financially motivated attacks on organisations such as Salesforce, Allianz Life, SoundCloud and Ticketmaster. The criminal organisation also claimed responsibility for an attack on Match Group, which owns Tinder, Hinge, Meetic, Match.com and OkCupid.

In July 2024, AT&T paid a member of the ShinyHunters hacking group $370,000 to delete the data of millions of customers following a massive data breach of its systems. Reportedly, the stolen data exposed the calls and texts of nearly all of the carrier’s 110m cellular customers after ShinyHunters stole the information from the cloud data giant Snowflake.


This Is How Trump Is Already Threatening the Midterms


The White House did not respond to a request for comment about the meetings, but an official who was not authorized to speak on the record told WIRED at the time: “The White House does not comment on mysterious meetings with unnamed staffers.”

Simultaneously, Trump has also sought to absolve officials of any wrongdoing in the wake of the 2020 election. Last year, Trump gave “full, complete and unconditional” pardons to a slate of people who had tried, and failed, to help him overturn the 2020 election results. In recent months, Trump has pressured Colorado governor Jared Polis to release Tina Peters, the former county clerk in Mesa County, Colorado, who became a hero for the right’s election deniers when she facilitated a security breach during a software update of her county’s election management system.

Peters was found guilty of four felonies, but Trump has been mounting a campaign in recent months to get her released, even going so far as to say he “pardoned” her, even though he has no power to do so given she was convicted on state charges.

Election Day Interference

While Trump has not announced specific plans to deploy troops to polling locations or seize voting machines, he and his administration have certainly been suggesting that such action is not off the table.


In January, Trump lamented not having the National Guard seize certain voting machines after the 2020 election. In early February, White House press secretary Karoline Leavitt told reporters that while she hasn’t specifically heard Trump discussing the possibility, she couldn’t “guarantee that an ICE agent won’t be around a polling location in November.” (The question was in response to former White House adviser Steve Bannon stating: “We’re going to have ICE surround the polls come November. We’re not going to sit here and allow you to steal the country again … We will never again allow an election to be stolen.”)

Earlier this month, during his confirmation hearing to head up the Department of Homeland Security, Senator Markwayne Mullin said he would be willing to deploy ICE to polling locations to address “a specific threat.”

The result of the Trump administration’s drip feed of threats and dog whistles is that those who are running elections in states across the country are already war-gaming what happens if ICE or the National Guard show up at their voting locations.

Michael McNulty, the policy director at Issue One, a nonprofit that tracks the impact of money in politics, also points to the fact that the Department of Justice sent monitors to oversee elections in November in New Jersey and California, despite no federal elections being held. “The concern is that this could become a massive deployment of, quote unquote, observers by the DOJ in 2026 who might do something more, whether it’s intimidation, whether it’s interfering with local election officials, to get data to confirm conspiracy theories,” McNulty tells WIRED.


FBI Raids

On January 28, the FBI raided the election office in Fulton County, Georgia, executing a search warrant that allowed it to seize ballots, ballot images, tabulator tapes, and the voter rolls related to the 2020 election. The search warrant affidavit, unsealed a few weeks ago, shows that the FBI relied on the work of Kurt Olsen, a lawyer who was appointed by the administration to investigate election security in October and who has a long history of working with some of the country’s biggest election deniers, including Patrick Byrne, Mike Lindell, and Kari Lake. Olsen’s claims are based on debunked and previously investigated conspiracy theories about the 2020 election.

The raid was also notable for the presence of Tulsi Gabbard, the director of national intelligence, who is, according to The Guardian, running a parallel investigation into the 2020 election with the apparent tacit approval of Trump.


Fiber HDMI cables enable full-bandwidth 8K over runs up to 990 feet

Published

on


The product is an active optical cable (AOC) for HDMI. Instead of relying solely on copper, it carries most of its signal over fiber-optic strands. Inside the cable, HDMI electrical signals are converted into optical signals for the journey between the two ends, then converted back to electrical signals at…

In with a bang, out in silence — the end of the Mac Pro


For almost two decades, the Mac Pro swung between coveted and beloved, and derided and forgotten. Now, it’s finally over.

[Image: Apple is reportedly pressing the off switch on the Mac Pro]

All political careers end in failure, and all devices fade out as they are eventually superseded. Yet this time it’s more that the Mac Pro has been usurped, and possibly even stabbed in the back.

If you’re a Mac Pro fan, you knew this day was coming, and you probably don’t want to believe it. It’s true that the Mac Pro long ago lost its crown as the most powerful Mac, but still, this is the legendary Mac Pro.


Is It Time For Open Source to Start Charging For Access?


“It’s time to charge for access,” argues a new opinion piece at The Register. Begging billion-dollar companies to fund open source projects just isn’t enough, writes long-time tech reporter Steven J. Vaughan-Nichols:


Screw fair. Screw asking for dimes. You can’t live off one-off charity donations… Depending on what people put in a tip jar is no way to fund anything of value… [A]ccording to a 2024 Tidelift maintainer report, 60 percent of open source maintainers are unpaid, and 60 percent have quit or considered quitting, largely due to burnout and lack of compensation. Oh, and of those getting paid, only 26 percent earn more than $1,000 a year for their work. They’d be better paid asking “Would you like fries with that?” at your local McDonald’s…

Some organizations do support maintainers. For example, there’s HeroDevs and its $20 million Open Source Sustainability Fund, whose mission is to pay maintainers of critical, often end-of-life open source components so they can keep shipping patches without burning out. Sentry’s Open Source Pledge/Fund has given hundreds of thousands of dollars per year directly to maintainers of the packages Sentry depends on. Sentry is one of the few vendors that systematically maps its dependency tree and then actually cuts checks to the people maintaining that stack, as opposed to just talking about “giving back.”

Sentry is on to something. We have the Linux Foundation to manage commercial open source projects, the Apache Foundation to oversee its various open source programs, the Open Source Initiative (OSI) to coordinate open source licenses, and many more for various specific projects. It’s time we had an organization with the mission of ensuring that the top programmers and maintainers of valuable open source projects get a cut of the tech billionaire pie.


We must realign how businesses work with open source so that payment is no longer an optional charitable gift but a cost of doing business. To do that, we need an organization to create a viable, supportable path from big business to individual programmer. It’s time for someone to step up and make this happen. Businesses, open source software, and maintainers will all be better off for it.

One possible future: Bruce Perens, who wrote the original Open Source definition in 1997, now proposes a not-for-profit corporation developing “the Post Open Collection” of software, distributing its licensing fees to developers while providing services like user support, documentation, hardware-based authentication for developers, and even help with government compliance and lobbying.


From “Hello, World!” to AI: What Skills Actually Prepare Students for the Future?


This article is part of the collection: Teaching Tech: Navigating Learning and AI in the Industrial Revolution.


A little over a decade ago, schools were swept into what many described as a movement to prepare students for the future of work. That work was coding — “Hello, world!”

Districts introduced new courses, nonprofits expanded access to computer science education and a growing ecosystem of programs promised to teach students the skills needed to enter the tech workforce. For many, it felt like a necessary correction to a rapidly digitizing world. But over time, a more complicated picture emerged.

While access to computer science education expanded, the relationship between early coding exposure and long-term workforce outcomes became uneven. The “learn to code” movement raised an important question that still lingers today: Which skills actually endure when technologies change? That question has resurfaced in a new form.


Today, generative AI is driving a similar wave of urgency. Schools are once again being encouraged to adapt quickly, often with the same underlying rationale that teachers must prepare students for a future shaped by emerging technologies.

But if the instructional role of AI remains unclear, and if the tools themselves are likely to evolve rapidly, the more persistent challenge may lie elsewhere.

After conducting a two-year research project alongside teachers who are adapting to and open to integrating AI, we found that uptake is still minimal. Most of our participants, including those who are engineering or computer science teachers, still struggle to identify a clear or universal instructional use case for widespread AI integration.

So, what should students learn to help them adapt to whatever comes next?


A growing body of research suggests that the answer may lie not in teaching students how to use a particular AI system, but in helping them understand the computational ideas that make those systems possible.

The Limits of Teaching the Tool

In recent years, many discussions about AI education have centered on teaching students how to use generative tools effectively. Prompt engineering, for example, has become a common topic in professional development workshops and online tutorials.

Yet focusing heavily on tool-specific skills creates a familiar educational problem: technology changes faster than curricula.

Teaching students how to interact with a specific interface risks becoming the equivalent of teaching to standardized tests, rather than teaching students important lessons that don’t appear on state exams.


The history of computing education offers a useful example. In the early 2010s, a wave of coding initiatives encouraged schools to teach programming skills broadly. While many of those programs expanded access to computer science education, subsequent analysis showed that workforce pipelines in technology remained uneven, and many students learned tool-specific skills without developing deeper computational reasoning abilities.

That experience offers a cautionary lesson for the current AI moment. If the goal of integrating AI into education is long-term preparation for technological change, focusing narrowly on how to use today’s tools may not be the most durable strategy.

The Skill That Outlasts the Tool

A growing body of research suggests that computational thinking is a more durable educational objective.

Computational thinking refers to a set of problem-solving practices used in computer science and other analytical disciplines. These include the following, shown in miniature in the sketch after the list:

  • breaking complex problems into smaller components
  • recognizing patterns
  • designing step-by-step processes
  • evaluating the outputs of automated systems
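
As a purely illustrative miniature (not drawn from the study), the sketch below applies all four practices: a rule is decomposed into sub-conditions, the pattern is encoded as explicit steps, and the automated output is checked against known cases rather than accepted as authoritative.

```python
def is_leap_year(year: int) -> bool:
    """The leap-year rule, decomposed into its three sub-conditions."""
    if year % 400 == 0:    # pattern: every 400th year is a leap year...
        return True
    if year % 100 == 0:    # ...other century years are not...
        return False
    return year % 4 == 0   # ...and otherwise every fourth year is

# Evaluate the 'automated system' against cases we already know the answer to.
for year, expected in [(2000, True), (1900, False), (2024, True), (2023, False)]:
    assert is_leap_year(year) == expected, f"unexpected result for {year}"
print("all checks passed")
```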

These skills apply not only to programming but also to fields ranging from engineering to public policy.

Importantly, they also help students understand how algorithmic systems operate.

When students learn computational thinking, they gain the ability to analyze how technologies like AI produce results rather than simply accepting those results as authoritative.

In this sense, computational thinking provides a conceptual bridge between traditional academic skills and emerging digital systems.

What Teachers Are Already Doing

Many teachers in our study were already moving in this direction, often without using the term computational thinking.


When teachers asked students to analyze chatbot errors, they were encouraging students to examine how algorithmic systems produce outputs. When they designed exercises comparing training data and algorithms to everyday processes, they were helping students reason about how automated systems work.

These approaches do not require students to rely heavily on AI tools themselves. Instead, they position AI as a case study for examining how technology shapes information.

That framing aligns with longstanding educational goals around critical thinking, media literacy and problem-solving.

Implications for Educators

If the instructional use case for generative AI remains uncertain, educators may benefit from focusing on skills that remain valuable regardless of which tools dominate in the future.


Several practical approaches are already emerging in classrooms. Teachers can use AI systems as objects of analysis, asking students to evaluate outputs, identify errors and investigate how models generate responses.

Lessons can connect AI to broader topics such as data quality, algorithmic bias and information reliability.

Assignments that emphasize reasoning, structured problem solving and evidence evaluation continue to support the kinds of cognitive work that remain central to learning.

These approaches allow students to engage with AI without allowing the technology to replace the thinking process itself.


Implications for EdTech Developers

The experiences teachers described also highlight an opportunity for edtech companies.

Many current AI tools were developed as general-purpose language systems and later introduced into education contexts. As a result, teachers are often left to determine whether and how those tools align with classroom learning goals. Future products may benefit from deeper collaboration with educators during the design process.

Teachers in our conversations were already experimenting with small classroom applications, designing AI literacy lessons and building course-specific chatbots.

These experiments resemble early-stage product development.


Partnerships between educators, edtech developers and product managers could help identify instructional problems that AI systems could realistically address.

The Next Phase of the Research

The conversations described in this series represent an early attempt to document how teachers are navigating the arrival of generative AI.

As schools continue experimenting with these tools, the next challenge will be to develop governance frameworks that help educators evaluate when and how AI should be used in learning environments.

Our research team is beginning the next phase of this work by partnering with school districts to develop guidance for AI governance and inviting edtech companies interested in exploring these questions collaboratively.


Rather than assuming that AI will inevitably transform classrooms, this phase of the project will focus on identifying the conditions under which AI tools actually support teaching and learning and how to reduce harm when they don’t.

The fourth-grade teacher’s question remains a useful guide: What can I actually use this for in math?

Until the answer becomes clearer, many teachers will likely continue doing what professionals in any field do when new technologies appear: experimenting cautiously, adopting what works and relying on their judgment to decide where or if the tool belongs.


If your school, district, organization, or edtech company is interested in learning more about joining our next project on AI governance, contact our research team at research@edsurge.com.
