
Tech

AI in higher education and the ‘erosion’ of learning

Prof Nir Eisikovits and Jacob Burley of the University of Massachusetts Boston discuss the ethics of AI in higher education and the technology’s role in ‘cognitive offloading’.

A version of this article was originally published by The Conversation (CC BY-ND 4.0)

Public debate about artificial intelligence in higher education has largely orbited a familiar worry: cheating. Will students use chatbots to write essays? Can instructors tell? Should universities ban the tech? Embrace it?

These concerns are understandable. But focusing so much on cheating misses the larger transformation already underway, one that extends far beyond student misconduct and even the classroom.

Universities are adopting AI across many areas of institutional life. Some uses are largely invisible, like systems that help allocate resources, flag ‘at-risk’ students, optimise course scheduling or automate routine administrative decisions. Other uses are more noticeable. Students use AI tools to summarise and study, instructors use them to build assignments and syllabuses, and researchers use them to write code, scan literature and compress hours of tedious work into minutes.

People may use AI to cheat or skip out on work assignments. But the many uses of AI in higher education, and the changes they portend, raise a much deeper question: As machines become more capable of doing the labour of research and learning, what happens to higher education? What purpose does the university serve?

Over the past eight years, we’ve been studying the moral implications of pervasive engagement with AI as part of a joint research project between the Applied Ethics Center at UMass Boston and the Institute for Ethics and Emerging Technologies. In a recent white paper, we argue that as AI systems become more autonomous, the ethical stakes of AI use in higher ed rise, as do its potential consequences.

As these technologies become better at producing knowledge work – designing classes, writing papers, suggesting experiments and summarising difficult texts – they don’t just make universities more productive. They risk hollowing out the ecosystem of learning and mentorship on which these institutions are built and depend.

Nonautonomous AI

Consider three kinds of AI systems and their respective impacts on university life.

AI-powered software is already being used throughout higher education in admissions review, purchasing, academic advising and institutional risk assessment. These are considered ‘nonautonomous’ systems: they automate tasks, but a person remains ‘in the loop’, using them as tools.

These technologies can pose a risk to students’ privacy and data security. They also can be biased. And they often lack sufficient transparency to determine the sources of these problems. Who has access to student data? How are ‘risk scores’ generated? How do we prevent systems from reproducing inequities or treating certain students as problems to be managed?

These questions are serious, but they are not conceptually new, at least within the field of computer science. Universities typically have compliance offices, institutional review boards and governance mechanisms that are designed to help address or mitigate these risks, even if they sometimes fall short of these objectives.

Hybrid AI

Hybrid systems encompass a range of tools, including AI-assisted tutoring chatbots, personalised feedback tools and automated writing support. They often rely on generative AI technologies, especially large language models. While human users set the overall goals, the intermediate steps the system takes to meet them are often not specified.

Hybrid systems are increasingly shaping day-to-day academic work. Students use them as writing companions, tutors, brainstorming partners and on-demand explainers. Faculty use them to generate rubrics, draft lectures and design syllabuses. Researchers use them to summarise papers, comment on drafts, design experiments and generate code.

This is where the ‘cheating’ conversation belongs. With students and faculty alike increasingly leaning on technology for help, it is reasonable to wonder what kinds of learning might get lost along the way. But hybrid systems also raise more complex ethical questions.

One has to do with transparency. AI chatbots offer natural-language interfaces that make it hard to tell when you’re interacting with a human and when you’re interacting with an automated agent. That can be alienating and distracting for those who interact with them. A student reviewing material for a test should be able to tell if they are talking with their teaching assistant or with a robot.

A student reading feedback on a term paper needs to know whether it was written by their instructor. Anything less than full transparency in such cases alienates everyone involved and shifts the focus of academic interactions from learning itself to the technology of learning. University of Pittsburgh researchers have shown that these dynamics provoke feelings of uncertainty, anxiety and distrust in students – problematic outcomes.

A second ethical question relates to accountability and intellectual credit. If an instructor uses AI to draft an assignment and a student uses AI to draft a response, who is doing the evaluating, and what exactly is being evaluated? If feedback is partly machine-generated, who is responsible when it misleads, discourages or embeds hidden assumptions? And when AI contributes substantially to research synthesis or writing, universities will need clearer norms around authorship and responsibility – not only for students, but also for faculty.

Finally, there is the critical question of cognitive offloading. AI can reduce drudgery, and that’s not inherently bad. But it can also shift users away from the parts of learning that build competence, such as generating ideas, struggling through confusion, revising a clumsy draft and learning to spot one’s own mistakes.

Autonomous agents

The most consequential changes may come with systems that look less like assistants and more like agents. While truly autonomous technologies remain aspirational, the dream of a researcher ‘in a box’ – an agentic AI system that can perform studies on its own – is becoming increasingly realistic.

Agentic tools are anticipated to ‘free up time’ for work that draws on more human capacities like empathy and problem-solving. In teaching, this may mean that faculty still teach in the headline sense, but more of the day-to-day labour of instruction is handed off to systems optimised for efficiency and scale. Similarly, in research, the trajectory points toward systems that can increasingly automate the research cycle. In some domains, that already looks like robotic laboratories that run continuously, automate large portions of experimentation and even select new tests based on prior results.

At first glance, this may sound like a welcome boost to productivity. But universities are not information factories; they are systems of practice. They rely on a pipeline of graduate students and early-career academics who learn to teach and research by participating in that same work. If autonomous agents absorb more of the ‘routine’ responsibilities that historically served as on-ramps into academic life, the university may keep producing courses and publications while quietly thinning the opportunity structures that sustain expertise over time.

The same dynamic applies to undergraduates, albeit in a different register. When AI systems can supply explanations, drafts, solutions and study plans on demand, the temptation is to offload the most challenging parts of learning. To the industry pushing AI into universities, this work may look ‘inefficient’, something students are better off letting a machine handle. But it is that very struggle that builds durable understanding. Cognitive psychology has shown that students grow intellectually by drafting, failing, trying again, grappling with confusion and revising weak arguments. This is the work of learning how to learn.

Taken together, these developments suggest that the greatest risk posed by automation in higher education is not simply the replacement of particular tasks by machines, but the erosion of the broader ecosystem of practice that has long sustained teaching, research and learning.

An uncomfortable inflection point

So what purpose do universities serve in a world in which knowledge work is increasingly automated?

One possible answer treats the university primarily as an engine for producing credentials and knowledge. On this view, the core question is output: Are students graduating with degrees? Are papers and discoveries being generated? If autonomous systems can deliver those outputs more efficiently, then the institution has every reason to adopt them.

But another answer treats the university as something more than an output machine, acknowledging that the value of higher education lies partly in the ecosystem itself. This model assigns intrinsic value to the pipeline of opportunities through which novices become experts, the mentorship structures through which judgement and responsibility are cultivated, and the educational design that encourages productive struggle rather than optimising it away. Here, what matters is not only whether knowledge and degrees are produced, but how they are produced and what kinds of people, capacities and communities are formed in the process. In this version, the university is meant to serve as no less than an ecosystem that reliably forms human expertise and judgement.

In a world where knowledge work itself is increasingly automated, we think universities must ask what higher education owes its students, its early-career scholars and the society it serves. The answers will determine not only how AI is adopted, but also what the modern university becomes.

By Prof Nir Eisikovits and Jacob Burley

Nir Eisikovits is a professor of philosophy and founding director of the Applied Ethics Center at the University of Massachusetts Boston. Eisikovits’s research focuses on the ethics of war and the ethics of technology, and he has written many books and articles on these topics.

Jacob Burley is a junior research fellow at the University of Massachusetts Boston, specialising in the ethics of emerging technologies. His work explores how artificial intelligence reshapes human decision-making, responsibility and knowledge practices, with particular attention to the normative and epistemic challenges posed by increasingly autonomous systems.

Samsung’s upcoming Galaxy foldables could get a charging speed boost

Samsung’s next generation of foldable phones could bring some changes to charging, though not all of them might be what fans are hoping for. According to recent certification listings spotted via SammyGuru, upcoming devices like the Galaxy Z Fold 8 and a new “Wide Fold” variant have appeared on China’s 3C database, hinting at potential updates to charging capabilities.

These listings typically reveal wired charging specs ahead of launch, making them an early indicator of what to expect. But here’s the catch: the “upgrade” might not be as big as it sounds.

What do the leaks actually reveal?

Two upcoming devices, SM-F9710 and SM-F9760, are believed to be the Chinese variants of the Galaxy Z Fold 8 and a new “Galaxy Z Wide Fold.” These listings show support for 15V at 3A charging, which translates to 45W wired charging. If accurate, that would mark a noticeable jump over previous Fold models, which have typically been limited to 25W wired charging.

However, a separate listing for what’s believed to be the Galaxy Z Flip 8 shows 9V at 2.77A (~25W) charging, essentially unchanged from its predecessor. So while the Fold lineup may finally see a boost, the Flip series appears to be sticking with the same charging speeds for now.
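The arithmetic behind those wattage figures is simply voltage times current. As a quick sanity check (a sketch only; the model numbers and listed values come from the unconfirmed 3C filings discussed above):

```python
def charging_watts(volts: float, amps: float) -> float:
    """Wired charging power in watts: P = V x I."""
    return volts * amps

# Listed for SM-F9710 / SM-F9760 (believed Fold 8 / Wide Fold): 15V at 3A
fold_watts = charging_watts(15, 3)    # 45.0, matching the rumoured 45W
# Listed for the device believed to be the Flip 8: 9V at 2.77A
flip_watts = charging_watts(9, 2.77)  # ~24.9, marketed as 25W
print(fold_watts, round(flip_watts, 1))
```

Note that 3C filings describe the maximum negotiated wired profile, so real-world charging speed can still vary with cable, charger and battery state.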

How big of an upgrade is this?

For the Fold lineup, this is actually a meaningful upgrade. Samsung has stuck with 25W charging for years, so moving to 45W would finally bring it closer to its Galaxy S Ultra devices and noticeably cut down charging times. That said, these numbers only apply to wired charging, as 3C listings don’t reveal wireless speeds.

For buyers, this is a welcome but uneven improvement. The Fold 8 and Wide Fold could see a solid boost, while the Flip 8 may remain unchanged, creating a clear divide in the lineup. It’s a step in the right direction, but not quite the full upgrade many were hoping for, especially when rivals like OnePlus and other Chinese brands already go well beyond 100W.

Tesla’s Terafab Brings Manufacturing Power to Match the Scale of Space

Elon Musk made a game-changing announcement hours ago when he revealed plans for Tesla’s Terafab during a live event, taking the company’s work on vehicles and robots literally out of this world. The initiative brings together SpaceX and xAI to create the world’s largest chip factory. The sheer scale of the operation is mind-boggling: Terafab will be capable of producing 1 trillion watts of finished chips every year, all under one gigantic roof housing logic circuits, memory storage and final packaging.



All of this is important because we desperately need a reliable mechanism for generating solar energy that can be beamed back from space, and Terafab is built to accomplish just that. We’re talking about launching an incredible 100 million tons of capture equipment into orbit every year. Once there, solar-powered satellites will do the AI heavy lifting, with millions of Tesla Optimus robots on hand to erect and maintain those structures well above the good old Earth.

Each of those Optimus robots is a significant undertaking, as they require between 100 and 200 billion watts of chips just to function. When you factor in the satellites, you can see the tremendous demand we’re talking about: trillions of watts of chips that no existing chip manufacturer can possibly offer, at least not yet. According to projections, we will have the same shortage until 2030.


That is where Terafab comes in: it is designed to bridge that gap, with the kind of huge capacity needed to overcome the hurdles holding back both ground-based robot fleets and processing power in orbit. To get it built, the construction team will use established launch techniques to move the enormous cargo into place, and robots already in development will take on assembly tasks that are simply too dangerous for humans to perform routinely. The result should be a consistent supply of chips to meet rising requirements on Earth and beyond.

The driving factor behind all of this is a strong desire to explore the universe, not just envision what’s out there, but to experience it firsthand. As one of the speakers put it, “understanding comes only from direct experience out there in the universe,” and Terafab is the first step in translating that idea into something concrete, something that anyone can track, from the start of creation to the end of delivery.

Reworked Apple Watch avoids ban, but Masimo battle escalates

The decision, made public on Thursday, concludes that Apple’s latest implementation of pulse-oximetry functionality falls outside the scope of Masimo’s asserted rights. The full ITC commission will now review the judge’s ruling and decide whether to adopt it – a step that will determine whether the redesigned watches remain protected…

Daily Deal: The 2026 C# Course Bundle

from the good-deals-on-cool-stuff dept

The 2026 C# Course Bundle offers 8 courses that cover everything C#. You’ll master the fundamentals, explore object-oriented programming, and start building your own apps in no time. It’s on sale for $40.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Filed Under: daily deal

‘We should regard it as a privilege to be stepping stones to higher things’: How Arthur C Clarke predicted the rise of AGI and the looming demise of humanity back in 1964

While debate over the timeline – or even the potential – for artificial general intelligence (AGI) rages on in 2026, one futurist may have predicted the breakthrough more than 60 years ago.

Noted British science fiction writer and futurist Arthur C. Clarke touted the arrival of AGI during an interview at the 1964 World’s Fair in New York City.

This monitor claims paper-like viewing and huge energy savings by using ambient light instead of relying entirely on traditional backlighting


  • Hannspree Hybri monitor uses ambient light to significantly reduce energy consumption
  • Reflective display design aims to mimic paper-like readability and comfort
  • Automatic switching enables backlight use in low ambient light conditions

The Hannspree Hybri monitor attempts to merge paper-like readability with modern display performance, claiming an 80% reduction in energy consumption through innovative use of ambient light.

At illumination levels above 1,000 lux, common in offices, classrooms and outdoor-adjacent spaces, the monitor reflects surrounding light instead of relying solely on a backlight.
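The automatic switching behaviour described above might be sketched like this. This is a hypothetical illustration only: Hannspree has not published its algorithm, the roughly 1,000 lux cut-over comes from the article, and the hysteresis margin is an assumption added here to avoid flicker when ambient light hovers near the threshold:

```python
REFLECTIVE_THRESHOLD_LUX = 1000  # cut-over point cited in the article
HYSTERESIS_LUX = 100             # assumed margin to prevent rapid toggling

def backlight_needed(ambient_lux: float, currently_on: bool) -> bool:
    """Return True if the backlight should be on at this ambient level."""
    if currently_on:
        # Stay on until ambient light is comfortably above the threshold.
        return ambient_lux < REFLECTIVE_THRESHOLD_LUX + HYSTERESIS_LUX
    # Stay off (reflective mode) until ambient light drops well below it.
    return ambient_lux < REFLECTIVE_THRESHOLD_LUX - HYSTERESIS_LUX

print(backlight_needed(1500, currently_on=True))   # False: bright office, run reflectively
print(backlight_needed(300, currently_on=False))   # True: dim room, backlight takes over
```

The hysteresis band means the panel doesn’t flick back and forth when a reading sits right at the boundary, a common pattern in any sensor-driven mode switch.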

Reddit wants to check if you’re using the iPhone’s Face ID camera

Reddit may soon ask users to prove they’re human, and it might involve your face. During a TBPN podcast, Reddit’s CEO, Steve Huffman, confirmed that the platform is exploring new identity verification methods, including using Face ID or Touch ID-style authentication, to tackle its growing bot problem.

RDDT requiring Face ID was not something I had on my bingo card but something has got to be done about all the fake / botted content — I just don’t know how to sell face-scanning to redditors or even lurkers. https://t.co/7e7K3Di4ip

— Alexis Ohanian 🗽 (@alexisohanian) March 21, 2026

The idea is simple: as AI-generated accounts become more convincing, Reddit wants stronger ways to confirm that users are real people, not bots pretending to be them.

Why is Reddit considering Face ID-style verification?

Unfortunately, bots are getting too good. Huffman has previously emphasized keeping the platform “human,” and this move fits right into that strategy. AI-generated content and automated accounts are becoming harder to detect, making moderation more challenging and threatening the authenticity of discussions.

As such, verification methods like Face ID or biometric checks could act as a quick way to confirm a real person is behind an account, without requiring traditional ID uploads. But of course, it’s not that simple.

So… are we really scanning faces now?

Reddit isn’t going full sci-fi just yet. The company is still “weighing” its options, which could mean optional verification for certain features, regions, or accounts rather than forcing everyone to scan their face. We’ve already seen a preview of this in places like the UK, where Reddit uses selfies or ID checks for age verification.

The next step could make things feel a lot more seamless and a bit more invasive. Instead of uploading IDs, Reddit may lean on device-level tools like Face ID to confirm you’re human, turning verification into something that happens in the background rather than a full process. Of course, that’s where things get messy.

Biometric checks raise big questions around privacy, data security and consent, and users aren’t exactly thrilled about handing over their face to prove they’re not a bot. Reddit may be solving one problem, but it opens up another: how much verification is too much, especially on a platform where anonymity is kind of the whole point?

Google isn't backing away from Pentagon AI work, it's doubling down


According to Business Insider, the issue came up during a January Google DeepMind town hall, where VP of Global Affairs Tom Lue said the company was “leaning more” into national security work.

Scientists find all five genetic building blocks for life in asteroid Ryugu


Researchers are still studying samples of Ryugu collected by the Japanese Aerospace Exploration Agency from its Hayabusa2 mission. After the first papers focused on the composition of the recovered material, a Japanese team has now found a “complete” set of genetic bases belonging to both DNA and RNA.

Today’s NYT Strands Hints, Answer and Help for March 22 #749

Looking for the most recent Strands answer? Click here for our daily Strands hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle, Connections and Connections: Sports Edition puzzles.


Today’s NYT Strands puzzle is an intriguing one. It helps if you know a little bit about famous products throughout history. Some of the answers are difficult to unscramble, so if you need hints and answers, read on.

I go into depth about the rules for Strands in this story.

If you’re looking for today’s Wordle, Connections and Mini Crossword answers, you can visit CNET’s NYT puzzle hints page.

Read more: NYT Connections Turns 1: These Are the 5 Toughest Puzzles So Far

Hint for today’s Strands puzzle

Today’s Strands theme is: Trademarked no more

If that doesn’t help you, here’s a clue: Brand names that became generic terms.

Clue words to unlock in-game hints

Your goal is to find hidden words that fit the puzzle’s theme. If you’re stuck, find any words you can. Every time you find three words of four letters or more, Strands will reveal one of the theme words. These are the words I used to get those hints, but any words of four or more letters that you find will work:

  • SPIT, SPITE, SPITES, SPITS, PIER, PIERS, GAME, SAME, POPE, POPES, GASP

Answers for today’s Strands puzzle

These are the answers that tie into the theme. The goal of the puzzle is to find them all, including the spangram, a theme word that reaches from one side of the puzzle to the other. When you have all of them (I originally thought there were always eight but learned that the number can vary), every letter on the board will be used. Here are the nonspangram answers:

  • ZIPPER, ASPIRIN, THERMOS, DUMPSTER, ESCALATOR

Today’s Strands spangram

The completed NYT Strands puzzle for March 22, 2026.

NYT/Screenshot by CNET

Today’s Strands spangram is GENERICTERM. To find it, start with the G that is three letters down on the far-left row, and wind across and then up again.

Copyright © 2025