We might finally get a smaller Dynamic Island with the iPhone 18 Pro


Apple may be preparing to shrink the Dynamic Island on next year’s iPhone 18 Pro models.

According to Bloomberg and several well-known leakers, the iPhone 18 Pro and Pro Max are expected to feature a smaller cut-out at the top of the display. The Dynamic Island itself, however, isn’t going anywhere just yet.

Rumours around the Island’s future have been circulating for over a year. At one point, reports suggested Apple could move fully to under-display Face ID, ditching the Dynamic Island entirely in favour of a simple hole-punch camera. That now looks unlikely for 2026. Instead, the more consistent chatter in late 2025 and early 2026 points to a refinement rather than a removal.

Apple is reportedly planning to move the Face ID dot illuminator under the display, which would allow the visible cut-out to shrink. At the same time, improvements in front-facing camera miniaturisation could further reduce the space required. The front camera, infrared camera and dot projector are still expected to sit within the Dynamic Island, meaning the interactive software element would remain intact.

It’s worth noting that similar rumours surfaced ahead of the iPhone 17 Pro launch, only for the design to stay the same. However, when multiple independent sources begin aligning this close to a launch cycle, it typically suggests something is in motion. Even if the final change ends up subtle, this kind of consensus is unusual.

Long term, Apple is widely believed to be working toward a completely uninterrupted display — essentially a slab of glass with no visible cut-outs. That milestone could align with the iPhone’s 20th anniversary in 2027. For now, though, the iPhone 18 Pro looks set to take a smaller step forward rather than a dramatic leap.

New Law Would Demand ‘Firearm Blocking’ Tech In Every 3D Printer

As 3D printers from a growing number of brands get better and cheaper, the question of 3D-printed guns keeps resurfacing. After all, 3D printers are already showing up in combat roles. To counter this, at least in California, Assemblymember Rebecca Bauer-Kahan introduced a bill that would mandate that every 3D printer sold in California ship with “firearm blocking features designed to prevent the printing of dangerous gun parts and ghost guns.”

The bill, AB 2047, states: “all 3D printers sold in California will be required to include firearm detection algorithms and software controls that identify files designed to produce guns and illegal gun parts, then block those printing requests.”

The definition of “ghost gun” varies, but it usually refers to firearms without serial numbers or easily traceable markings. 

According to the Bureau of Alcohol, Tobacco, Firearms and Explosives, it is federally legal to make your own firearm, and no serial number is required, as long as the firearm in question is not being sold for a profit and is “detectable” by metal detectors and X-rays.

You mostly can’t print an entire gun

Speaking from personal experience after well over a decade of shooting sports and gunsmithing: you cannot just find a file online and print a functioning gun like something out of a Tom Clancy novel. You can only print accessories and non-stressed parts of a gun, such as the frame of the popular Glock series of handguns or the lower receiver of an AR-15-style rifle.

In order to make a firearm that functions without exploding from the pressure of a fired bullet, you still need a lot of conventional gun parts like barrels, slides, and trigger mechanisms. While each gun is different, receivers are often the only part of a firearm that requires a background check and cannot be purchased online without violating federal law. Still, printing a receiver without a background check is a valid concern.

3D printer bills like the one introduced in California are obviously well-meaning in their intent. No lawmaker wants to see their constituents hurt by potentially dangerous technology. But without a clear understanding of which firearm components can actually be printed, and without a concrete way to program 3D printers to detect gun parts, the bill might not go very far.
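The bill quoted above demands “firearm detection algorithms” without saying how detection would work. As a purely hypothetical sketch (nothing here reflects any real product or the bill’s text beyond what is quoted), the simplest conceivable mechanism is a blocklist of known file hashes, and it also shows why critics doubt the mandate is enforceable: changing a single byte of the model file defeats the check.

```python
import hashlib

# Hypothetical blocklist of SHA-256 hashes of known firearm model files.
# (Illustrative only: the entry below hashes a made-up byte string.)
BLOCKED_HASHES = {
    hashlib.sha256(b"known-receiver-model-v1.stl").hexdigest(),
}

def is_blocked(model_bytes: bytes) -> bool:
    """Naive 'firearm blocking': refuse any file whose hash is on the blocklist."""
    return hashlib.sha256(model_bytes).hexdigest() in BLOCKED_HASHES

print(is_blocked(b"known-receiver-model-v1.stl"))    # exact file is caught
print(is_blocked(b"known-receiver-model-v1.stl\n"))  # one extra byte slips through
```

Catching re-meshed or slightly rescaled geometry would require shape recognition rather than hashing, which is exactly the kind of concrete mechanism the bill leaves unspecified.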

Aventon Soltera 3 Electric Bike Review: A Fun Hybrid Single-Speed


Belt-drive bikes offer some huge upsides. First, they usually require less maintenance, with many belts often lasting twice as long as a typical chain. Second, there’s no grease to speak of, and therefore, no black smudges on your work pants. Third, in the case of the Soltera 3, the belt comes from the Gates brand, whose drivetrain belts are as good as it gets. Belt-drive bikes are silent and often smoother than their chain-driven counterparts.

That said, pairing a low-maintenance element such as a belt drive with hydraulic disc brakes, which require bleeding roughly every year, struck me as an odd choice. If Aventon wanted to make the Soltera 3 as hands-off as possible, cable-actuated brakes would have been the more intuitive pick.

The other thing that immediately jumps out about the Soltera 3 is its relatively light weight. At 37 pounds, the Soltera 3 is heavy for an analog bike. But it’s certainly not heavy for an ebike, and it’s nearly as stiff, nimble, and navigable as a conventional bicycle. One issue I’ve always had with ebikes is their heft. Given that they’re often made to replace a car, they’re built with load bearing in mind. Also, ebike batteries are heavy.

Adding to that sense of “this is just like my other bikes,” the Soltera 3 simply looks cool, which is often not the case when it comes to ebikes. The matte black my tester bike arrived in looks cool because matte black almost never doesn’t look cool. (Additionally, the Soltera 3 is available in dark matte blue and a sleek silver.) But beyond the finish, the bike’s geometry; its wide, almost perfectly flat handlebars; and its narrow (by ebike standards) 700 x 36 tires make it feel closer in DNA to a road bike than a traditional ebike.

Button Press


Photograph: Michael Venutolo-Mantovani

I’m 6′4′′, and the extra-large Soltera 3 I tested was at its maximum saddle height. It suited me, but I couldn’t recommend the Soltera 3 to anyone taller. That said, with four sizes ranging from small to extra large, the line covers a wide swath of riders, from 5 feet tall all the way up to my height.

Here’s your first look at Kratos and Atreus in Amazon’s upcoming God of War TV adaptation


With its other big adaptations out of the way for a bit, Amazon has seized its opportunity to put the spotlight on the next big video game adaptation, its currently-in-production God of War show. Today we got our first look at Ryan Hurst and Callum Vinson as Kratos and Atreus.

The image released by Amazon shows the eponymous God of War standing next to a tree as he watches his son — who notably looks a bit younger than the video game version of 11-year-old Atreus we first met in 2018’s God of War — take aim with his bow. Exactly what they’re hunting is unclear, but we know that the developing relationship between father and son that was such a big part of the PS4 game is also going to be at the heart of the show.

Whether Sony Pictures Television and Amazon MGM Studios have nailed the look of the show’s central characters is a matter of opinion. Personally, I think Hurst’s Kratos in particular looks a little bit off here, but there’s every chance it all comes together later in production. Or when we first hear him angrily exclaim “boy!”

The Sons of Anarchy star was cast as Kratos back in January, and earlier this week we learned that a Deadpool alum will play Baldur in the Amazon show. The rest of the cast includes Mandy Patinkin as Odin, Max Parker as Heimdall, Ólafur Darri Ólafsson as Thor, Alastair Duncan as Mimir, Jeff Gulka as Sindri and Danny Woodburn as Brok, with Sif’s casting also announced.

No release date has been announced yet, but a second season of God of War has been confirmed.

Japan Introduces Buddharoid, an AI-Powered Humanoid Robot That Brings Buddhist Teachings into Physical Form


Japan has introduced Buddharoid, a humanoid robot that physically embodies Buddhist teachings, at a time when temples are having to cope with a monk shortage. Kyoto University researchers collaborated with the tech companies Teraverse and X NOVA to build the system around China’s capable Unitree G1 humanoid robot. It has been clad in a plain grey robe and kept faceless to avoid drawing attention to its mechanical nature, allowing its quiet movements and steady voice to speak for themselves.



Buddharoid glides through the monastery corridors at 5 a.m. almost as slowly as a monk. It bows courteously, joins its hands in the traditional gassho prayer gesture, and then withdraws at a steady, measured pace that is ideal for calm spaces. Its fundamental movement patterns were derived from training on the Unitree hardware and fine-tuned to reflect monastic behaviour rather than the mechanical efficiency of, say, a factory line.


The real work of Buddharoid is done by its AI system, BuddhaBot-Plus, a large language model that the developers trained on hundreds of Buddhist scriptures, from the main sutras to the specialist commentaries written over the centuries. When someone asks Buddharoid about their anxiety, their relationships, or larger issues like the meaning of life, it draws on those texts to offer measured responses. It once advised a visitor to take a serious look at their relationships and restore their inner balance.


Buddharoid made an appearance for journalists and visitors at Kyoto’s Shoren-in Temple, moving around the room, speaking in a calm, composed tone and engaging people in conversation. Unlike some earlier temple robots, which simply recited recorded sermons, Buddharoid is intelligent enough to respond to real-time interaction. People put a variety of questions to it, from everyday worries to larger social concerns, and it answers through the prism of Buddhist knowledge.

Professor Seiji Kumagai, the project’s leader and a monk himself, has been promoting Buddharoid, and his team views the robot as a way to preserve access to Buddhist teachings in rural areas and at temples struggling to find the staff they need. The machine helps bridge the gap between digital Buddhism and the real thing, and I believe that is where its true value lies: visitors come away with the sense that they are engaging with something more than a text-based chatbot.

Samsung’s reason for Galaxy S26 price hike is a sign of bad things to come


Every year, Samsung raises the bar on specs. This year, it raised something else instead — the price. The Galaxy S26 series landed this week, and all three models now start with 256GB of storage as the baseline. On paper that sounds like a win. In practice, the pricing tells a different story.

The Galaxy S26 starts at $899 for 256GB, versus $859.99 for the 256GB Galaxy S25 — but the more telling number is that the S25’s 128GB base was $799, meaning the cheaper entry point is simply gone now.

Revised introductory price for Galaxy S26

The S26 Plus comes in at $1,099 for 256GB, up from $999 on the S25 Plus. The Ultra holds at $1,299, matching last year’s price exactly — the one clean win in an otherwise uncomfortable lineup.
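Using only the figures quoted above, the year-on-year change at each tier works out as follows (a quick sketch; prices are the US list prices as stated):

```python
# Base-price changes across the Galaxy S26 lineup, from the figures quoted above.
lineup = [
    # (comparison, last year's price, this year's price)
    ("S26 vs. S25 at 256GB", 859.99, 899.00),
    ("S26 vs. the S25's old 128GB entry point", 799.00, 899.00),
    ("S26 Plus vs. S25 Plus", 999.00, 1099.00),
    ("S26 Ultra vs. S25 Ultra", 1299.00, 1299.00),
]

for label, old, new in lineup:
    change = (new - old) / old * 100
    print(f"{label}: {change:+.1f}%")
```

In other words, the like-for-like comparison hides the steepest jump: anyone who would have bought the 128GB model is now paying roughly 12.5% more to get in the door.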

Samsung’s Won-Joon Choi, COO of its mobile business, told The Verge that the memory shortage alone made a “significant contribution” to the price hike, with tariffs a secondary factor. It’s worth noting that Samsung manufactures its own memory. If Samsung can’t absorb the cost, nobody can.

AI data centers are consuming global memory supply faster than consumer electronics can compete for it. Samsung, SK Hynix, and Micron are all pivoting capacity toward high-bandwidth memory for AI servers — better margins, bigger contracts.

RAMageddon hits the whole industry

What’s left for phones, laptops, and other consumer-grade products is shrinking and getting more expensive. IDC is projecting a 13% drop in global smartphone shipments for 2026 (via Bloomberg) — potentially worse than the pandemic dip.

PC makers including Lenovo, Dell, and ASUS have flagged 15–20% price increases ahead.

Memory costs aren’t expected to stabilize until mid-2027. Until then, every new device carries that weight — in sticker price, frozen specs, or both. Samsung just showed us what that looks like in practice. The rest of the industry is next.

Get Apple's 15-inch MacBook Air M4 for $1,049 with this weekend deal


Apple’s current 15-inch MacBook Air equipped with the M4 chip has dropped to $1,049 as Amazon competes for your business this weekend.

Grab weekend deals on Apple’s M4 MacBook Air.

The 15-inch M4 MacBook Air features a 10-core GPU, with the standard model also equipped with 16GB of unified memory and a 256GB SSD. Amazon is discounting the standard spec to $1,049, representing a 13% markdown off MSRP.
Wall Street Has AI Psychosis


Before last week the name Alap Shah didn’t ring a bell for many people. The 45-year-old financial analyst and tech entrepreneur had spent the past two decades working in relative obscurity. Then last weekend he coauthored a blog post with the research firm Citrini titled “The 2028 Global Intelligence Crisis.” It was a “thought exercise” about the impacts of artificial intelligence, and it predicted that in June of that year, AI would jack up unemployment past 10 percent and force the Dow down, down, down. Writing in a confident, Nostradamic tone—as if auditioning for starring roles in the next Michael Lewis book—the authors painted a picture of a flywheel in reverse: AI agents take jobs from workers, people spend less, and struggling corporations conduct layoffs on top of layoffs.

There wasn’t much in it that hadn’t been previously heard, or speculated about. Tech leaders like Anthropic CEO Dario Amodei have already estimated that half the entry-level white-collar jobs will soon be gone, and earlier this year, Anthropic’s release of new agentic tools spurred a Wall Street selloff. Nonetheless the report hit with the force of the blizzard blowing through lower Manhattan. When the closing chimes sounded on the New York Stock Exchange, the Dow was down 800 points. The name Alap Shah was now ringing bells.

The achievement is less impressive than it seems. Wall Street, like the rest of us, is in a persistent state of anxiety about AI, and it doesn’t take much to trigger a mini-panic. Financial markets don’t necessarily map to reality, but the jitters reflect a wider disquiet. The AI future is in a William Gibson zone—it’s here, but unevenly distributed—and the news from those already living in the agent-packed, AI code-writing universe is both exciting and unsettling. Emphasis on unsettling.

No one—no one!—knows exactly how AI will impact the economy, but clearly it will be significant. Right now stocks are soaring, so it seems to make sense to keep the party going. But then along comes the latest doom manifesto, or a paper indicating that a traditional business sector might be threatened by AI, and suddenly money managers are reminded that the biggest issue of our time is totally unresolved. Case in point: earlier this month, a tiny company (valuation under $6 million) that had previously sold karaoke machines pivoted to AI-powered shipping logistics and put out a report saying that it had discovered some efficiencies in loading semi-trucks. That was enough to erase billions of dollars from the share prices of several major logistics companies, none of which had karaoke experience.

After it did its job on Wall Street, the Citrini report came under considerable fire. Critics climbed over each other to proclaim its flimsiness. For one thing, they pointed out, AI has had very little discernable impact on the economy so far. Others cited the long history of resilience after technological upheavals. A mocking response by the respected trading firm Citadel Securities read, “For AI to produce a sustained negative demand shock, the economy must see a material acceleration in adoption, experience near-total labor substitution, no fiscal response, negligible investment absorption, and unconstrained scaling of compute.”

The most withering critiques disputed the report’s contention that much of the economy involves non-productive “rent-seeking” by middlemen and market makers, taking advantage of the laziness of the general population. When everyone has a few dozen AI agents working on their behalf, writes Shah, consumers will be able to effortlessly find the best goods for the best prices. Apps will be rendered unnecessary—just type what you want into the LLM and an army of agents will do everything for you. The “poster child” for this phenomenon, Shah says, is DoorDash. Instead of being limited to the restaurants on the app, consumers will send out AI agents to find their ideal meal options, contracting directly with restaurants and delivery people—no apps needed. Zero friction! The DoorDashes of the world are avocado toast!

Honor teases its next-gen silicon-carbon battery that’s as thin as a playing card


Honor is preparing to take battery innovation to the next level with its upcoming Silicon-Carbon Blade Battery. The company today shared a teaser offering a first look at the ultra-thin power pack, and it’s anything but conventional.

In the short clip, Honor showcases a battery as thin as a playing card, hurled through the air by a Guinness World Record-holding card thrower. The dramatic demonstration not only highlights the battery’s razor-thin profile but also its durability, as it slices through pieces of fruit mid-flight.

While Honor has yet to disclose detailed specifications, it claims the Silicon-Carbon Blade Battery features higher silicon content and greater capacity than the fifth-generation silicon-carbon pack set to debut in the upcoming Honor Magic V6. That battery, developed in partnership with China-based battery manufacturer ATL, reportedly features 25% silicon content and a sizeable 6,600mAh capacity without increasing the device’s thickness.

A big battery leap in a slim foldable

In contrast, last year’s Magic V5, which held the title of the thinnest foldable, packed a 5,820mAh silicon-carbon battery. The jump to 6,600mAh would mark a substantial year-on-year increase. It could also give the Magic V6 a clear edge over rival book-style foldables such as the Galaxy Z Fold 7, which features a 4,400mAh battery, and the Google Pixel 10 Pro Fold, which houses a 5,015mAh cell.
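To put those figures in perspective, here is a quick comparison using only the capacities quoted above (the Magic V6 number is still a rumour, not a confirmed spec):

```python
# Rumoured Magic V6 capacity vs. the foldables named above.
magic_v6_mah = 6600  # rumoured
rivals = {
    "Magic V5": 5820,
    "Galaxy Z Fold 7": 4400,
    "Pixel 10 Pro Fold": 5015,
}

for name, mah in rivals.items():
    advantage = (magic_v6_mah - mah) / mah * 100
    print(f"vs. {name}: +{advantage:.0f}%")
```

A roughly 13% year-on-year jump, and a 50% advantage over the Z Fold 7, would be a striking gap if the chassis really stays as thin as claimed.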

If Honor delivers a 6,600mAh battery in a chassis that remains ultra-thin, the Magic V6 could set a new benchmark for endurance in the foldable segment without compromising on design. The device is set to debut during the company’s MWC keynote on March 1, where it will also share more details about the Silicon-Carbon Blade Battery.

The Huawei Watch GT Runner 2 has been created with a marathon legend


Huawei just dropped a new wearable, the Watch GT Runner 2. This isn’t your average fitness tracker; it’s a professional-grade running smartwatch that was co-created with none other than marathon legend and two-time Olympic champion, Eliud Kipchoge.

It’s built to help you train smarter, whether you’re gunning for a marathon PR or just trying to finish your first 5K.

The collaboration brings together Huawei’s top-tier wearable tech with insights from world-class athletes, ensuring precision tracking, science-driven training, and, most importantly, all-day comfort.

This watch is loaded with innovations perfect for serious marathon prep. One of the coolest features is the 3D floating antenna architecture, which should offer ultra-precise GPS, so losing your signal in a tunnel or a heavily shaded trail is practically a thing of the past. Huawei has also integrated its lactate threshold detection algorithm and running power metric. 

Basically, you get detailed data on your training intensity and muscle strength, allowing you to fine-tune your workouts and nail that perfect race strategy. Plus, the industry-first Intelligent Marathon Mode offers dynamic pace guidance, smart refuel reminders, and real-time race support, all right there on your wrist.

Crafted from lightweight nanomolded titanium alloy, the Watch GT Runner 2 is Huawei’s lightest running watch yet, tipping the scales at a mere 43.5 grams. It looks great, too, coming in three sharp colorways: Dawn Orange, Dusk Blue, and Midnight Black. It includes a breathable AirDry woven strap and a bonus fluoroelastomer strap in the box.


Oh, and for extra convenience, the watch debuts Curve Pay integration, meaning you can grab a post-run smoothie without fumbling for your wallet.

The Huawei Watch GT Runner 2 is available now for £349.99. But here’s the deal: there’s a launch promotion running until April 19, dropping the price to £319.99 and throwing in a free extra strap and partner benefits valued at over £109.

The Huawei Watch GT Runner 2 is a better running smartwatch than the GT Runner, offering great features and impressive tracking for less cash than the competition.

Pros:

  • Comfortable to wear, with two strap options

  • Useful new training and racing modes

  • Plenty of smartwatch features and other sports modes

Cons:

  • User interface is the same as other Huawei Watches

  • Some tracking inaccuracies

  • App is full of bloatware

AI in higher education and the ‘erosion’ of learning


Prof Nir Eisikovits and Jacob Burley of the University of Massachusetts Boston discuss the ethics of AI in higher education and the technology’s role in ‘cognitive offloading’.


A version of this article was originally published by The Conversation (CC BY-ND 4.0)

Public debate about artificial intelligence in higher education has largely orbited a familiar worry: cheating. Will students use chatbots to write essays? Can instructors tell? Should universities ban the tech? Embrace it?

These concerns are understandable. But focusing so much on cheating misses the larger transformation already underway, one that extends far beyond student misconduct and even the classroom.

Universities are adopting AI across many areas of institutional life. Some uses are largely invisible, like systems that help allocate resources, flag ‘at-risk’ students, optimise course scheduling or automate routine administrative decisions. Other uses are more noticeable. Students use AI tools to summarise and study, instructors use them to build assignments and syllabuses, and researchers use them to write code, scan literature and compress hours of tedious work into minutes.

People may use AI to cheat or skip out on work assignments. But the many uses of AI in higher education, and the changes they portend, raise a much deeper question: As machines become more capable of doing the labour of research and learning, what happens to higher education? What purpose does the university serve?

Over the past eight years, we’ve been studying the moral implications of pervasive engagement with AI as part of a joint research project between the Applied Ethics Center at UMass Boston and the Institute for Ethics and Emerging Technologies. In a recent white paper, we argue that as AI systems become more autonomous, the ethical stakes of AI use in higher ed rise, as do its potential consequences.

As these technologies become better at producing knowledge work – designing classes, writing papers, suggesting experiments and summarising difficult texts – they don’t just make universities more productive. They risk hollowing out the ecosystem of learning and mentorship upon which these institutions are built, and on which they depend.

Nonautonomous AI

Consider three kinds of AI systems and their respective impacts on university life.

AI-powered software is already being used throughout higher education in admissions review, purchasing, academic advising and institutional risk assessment. These are considered ‘nonautonomous’ systems because they automate tasks, but a person is ‘in the loop’ and using these systems as tools.

These technologies can pose a risk to students’ privacy and data security. They also can be biased. And they often lack sufficient transparency to determine the sources of these problems. Who has access to student data? How are ‘risk scores’ generated? How do we prevent systems from reproducing inequities or treating certain students as problems to be managed?

These questions are serious, but they are not conceptually new, at least within the field of computer science. Universities typically have compliance offices, institutional review boards and governance mechanisms that are designed to help address or mitigate these risks, even if they sometimes fall short of these objectives.

Hybrid AI

Hybrid systems encompass a range of tools, including AI-assisted tutoring chatbots, personalised feedback tools and automated writing support. They often rely on generative AI technologies, especially large language models. While human users set the overall goals, the intermediate steps the system takes to meet them are often not specified.

Hybrid systems are increasingly shaping day-to-day academic work. Students use them as writing companions, tutors, brainstorming partners and on-demand explainers. Faculty use them to generate rubrics, draft lectures and design syllabuses. Researchers use them to summarise papers, comment on drafts, design experiments and generate code.

This is where the ‘cheating’ conversation belongs. With students and faculty alike increasingly leaning on technology for help, it is reasonable to wonder what kinds of learning might get lost along the way. But hybrid systems also raise more complex ethical questions.

One has to do with transparency. AI chatbots offer natural-language interfaces that make it hard to tell when you’re interacting with a human and when you’re interacting with an automated agent. That can be alienating and distracting for those who interact with them. A student reviewing material for a test should be able to tell if they are talking with their teaching assistant or with a robot.

A student reading feedback on a term paper needs to know whether it was written by their instructor. Anything less than complete transparency in such cases will be alienating to everyone involved and will shift the focus of academic interactions from learning to the means or the technology of learning. University of Pittsburgh researchers have shown that these dynamics bring forth feelings of uncertainty, anxiety and distrust for students. These are problematic outcomes.

A second ethical question relates to accountability and intellectual credit. If an instructor uses AI to draft an assignment and a student uses AI to draft a response, who is doing the evaluating, and what exactly is being evaluated? If feedback is partly machine-generated, who is responsible when it misleads, discourages or embeds hidden assumptions? And when AI contributes substantially to research synthesis or writing, universities will need clearer norms around authorship and responsibility – not only for students, but also for faculty.

Finally, there is the critical question of cognitive offloading. AI can reduce drudgery, and that’s not inherently bad. But it can also shift users away from the parts of learning that build competence, such as generating ideas, struggling through confusion, revising a clumsy draft and learning to spot one’s own mistakes.

Autonomous agents

The most consequential changes may come with systems that look less like assistants and more like agents. While truly autonomous technologies remain aspirational, the dream of a researcher ‘in a box’ – an agentic AI system that can perform studies on its own – is becoming increasingly realistic.

Agentic tools are anticipated to ‘free up time’ for work that focuses on more human capacities like empathy and problem-solving. In teaching, this may mean that faculty may still teach in the headline sense, but more of the day-to-day labour of instruction can be handed off to systems optimised for efficiency and scale. Similarly, in research, the trajectory points toward systems that can increasingly automate the research cycle. In some domains, that already looks like robotic laboratories that run continuously, automate large portions of experimentation and even select new tests based on prior results.

At first glance, this may sound like a welcome boost to productivity. But universities are not information factories; they are systems of practice. They rely on a pipeline of graduate students and early-career academics who learn to teach and research by participating in that same work. If autonomous agents absorb more of the ‘routine’ responsibilities that historically served as on-ramps into academic life, the university may keep producing courses and publications while quietly thinning the opportunity structures that sustain expertise over time.

The same dynamic applies to undergraduates, albeit in a different register. When AI systems can supply explanations, drafts, solutions and study plans on demand, the temptation is to offload the most challenging parts of learning. To the industry that is pushing AI into universities, it may seem as if this type of work is ‘inefficient’ and that students will be better off letting a machine handle it. But it is the very nature of that struggle that builds durable understanding. Cognitive psychology has shown that students grow intellectually through doing the work of drafting, revising, failing, trying again, grappling with confusion and revising weak arguments. This is the work of learning how to learn.

Taken together, these developments suggest that the greatest risk posed by automation in higher education is not simply the replacement of particular tasks by machines, but the erosion of the broader ecosystem of practice that has long sustained teaching, research and learning.

An uncomfortable inflection point

So what purpose do universities serve in a world in which knowledge work is increasingly automated?

One possible answer treats the university primarily as an engine for producing credentials and knowledge. There, the core question is output: Are students graduating with degrees? Are papers and discoveries being generated? If autonomous systems can deliver those outputs more efficiently, then the institution has every reason to adopt them.

But another answer treats the university as something more than an output machine, acknowledging that the value of higher education lies partly in the ecosystem itself. This model assigns intrinsic value to the pipeline of opportunities through which novices become experts, the mentorship structures through which judgement and responsibility are cultivated, and the educational design that encourages productive struggle rather than optimising it away. Here, what matters is not only whether knowledge and degrees are produced, but how they are produced and what kinds of people, capacities and communities are formed in the process. In this version, the university is meant to serve as no less than an ecosystem that reliably forms human expertise and judgement.

In a world where knowledge work itself is increasingly automated, we think universities must ask what higher education owes its students, its early-career scholars and the society it serves. The answers will determine not only how AI is adopted, but also what the modern university becomes.

The Conversation

By Prof Nir Eisikovits and Jacob Burley

Nir Eisikovits is a professor of philosophy and founding director of the Applied Ethics Center at the University of Massachusetts Boston. Eisikovits’s research focuses on the ethics of war and the ethics of technology and he has written many books and articles on these topics.

Jacob Burley is a junior research fellow at the University of Massachusetts Boston, specialising in the ethics of emerging technologies. His work explores how artificial intelligence reshapes human decision-making, responsibility and knowledge practices, with particular attention to the normative and epistemic challenges posed by increasingly autonomous systems.

Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.
