Tech

AI in higher education and the ‘erosion’ of learning

Prof Nir Eisikovits and Jacob Burley of the University of Massachusetts Boston discuss the ethics of AI in higher education and the technology’s role in ‘cognitive offloading’.

A version of this article was originally published by The Conversation (CC BY-ND 4.0)

Public debate about artificial intelligence in higher education has largely orbited a familiar worry: cheating. Will students use chatbots to write essays? Can instructors tell? Should universities ban the tech? Embrace it?

These concerns are understandable. But focusing so much on cheating misses the larger transformation already underway, one that extends far beyond student misconduct and even the classroom.

Universities are adopting AI across many areas of institutional life. Some uses are largely invisible, like systems that help allocate resources, flag ‘at-risk’ students, optimise course scheduling or automate routine administrative decisions. Other uses are more noticeable. Students use AI tools to summarise and study, instructors use them to build assignments and syllabuses, and researchers use them to write code, scan literature and compress hours of tedious work into minutes.

People may use AI to cheat or skip out on work assignments. But the many uses of AI in higher education, and the changes they portend, raise a much deeper question: As machines become more capable of doing the labour of research and learning, what happens to higher education? What purpose does the university serve?

Over the past eight years, we’ve been studying the moral implications of pervasive engagement with AI as part of a joint research project between the Applied Ethics Center at UMass Boston and the Institute for Ethics and Emerging Technologies. In a recent white paper, we argue that as AI systems become more autonomous, the ethical stakes of AI use in higher ed rise, as do its potential consequences.

As these technologies become better at producing knowledge work – designing classes, writing papers, suggesting experiments and summarising difficult texts – they don’t just make universities more productive. They risk hollowing out the ecosystem of learning and mentorship upon which these institutions are built, and on which they depend.

Nonautonomous AI

Consider three kinds of AI systems and their respective impacts on university life.

AI-powered software is already being used throughout higher education in admissions review, purchasing, academic advising and institutional risk assessment. These are considered ‘nonautonomous’ systems because they automate tasks, but a person is ‘in the loop’ and using these systems as tools.

These technologies can pose a risk to students’ privacy and data security. They also can be biased. And they often lack sufficient transparency to determine the sources of these problems. Who has access to student data? How are ‘risk scores’ generated? How do we prevent systems from reproducing inequities or treating certain students as problems to be managed?

These questions are serious, but they are not conceptually new, at least within the field of computer science. Universities typically have compliance offices, institutional review boards and governance mechanisms that are designed to help address or mitigate these risks, even if they sometimes fall short of these objectives.

Hybrid AI

Hybrid systems encompass a range of tools, including AI-assisted tutoring chatbots, personalised feedback tools and automated writing support. They often rely on generative AI technologies, especially large language models. While human users set the overall goals, the intermediate steps the system takes to meet them are often not specified.

Hybrid systems are increasingly shaping day-to-day academic work. Students use them as writing companions, tutors, brainstorming partners and on-demand explainers. Faculty use them to generate rubrics, draft lectures and design syllabuses. Researchers use them to summarise papers, comment on drafts, design experiments and generate code.

This is where the ‘cheating’ conversation belongs. With students and faculty alike increasingly leaning on technology for help, it is reasonable to wonder what kinds of learning might get lost along the way. But hybrid systems also raise more complex ethical questions.

One has to do with transparency. AI chatbots offer natural-language interfaces that make it hard to tell when you’re interacting with a human and when you’re interacting with an automated agent. That can be alienating and distracting for those who interact with them. A student reviewing material for a test should be able to tell if they are talking with their teaching assistant or with a robot.

A student reading feedback on a term paper needs to know whether it was written by their instructor. Anything less than complete transparency in such cases will be alienating to everyone involved and will shift the focus of academic interactions from learning to the means or the technology of learning. University of Pittsburgh researchers have shown that these dynamics bring forth feelings of uncertainty, anxiety and distrust for students. These are problematic outcomes.

A second ethical question relates to accountability and intellectual credit. If an instructor uses AI to draft an assignment and a student uses AI to draft a response, who is doing the evaluating, and what exactly is being evaluated? If feedback is partly machine-generated, who is responsible when it misleads, discourages or embeds hidden assumptions? And when AI contributes substantially to research synthesis or writing, universities will need clearer norms around authorship and responsibility – not only for students, but also for faculty.

Finally, there is the critical question of cognitive offloading. AI can reduce drudgery, and that’s not inherently bad. But it can also shift users away from the parts of learning that build competence, such as generating ideas, struggling through confusion, revising a clumsy draft and learning to spot one’s own mistakes.

Autonomous agents

The most consequential changes may come with systems that look less like assistants and more like agents. While truly autonomous technologies remain aspirational, the dream of a researcher ‘in a box’ – an agentic AI system that can perform studies on its own – is becoming increasingly realistic.

Agentic tools are anticipated to ‘free up time’ for work that focuses on more human capacities like empathy and problem-solving. In teaching, this may mean that faculty still teach in the headline sense, but more of the day-to-day labour of instruction is handed off to systems optimised for efficiency and scale. Similarly, in research, the trajectory points toward systems that can increasingly automate the research cycle. In some domains, that already looks like robotic laboratories that run continuously, automate large portions of experimentation and even select new tests based on prior results.

At first glance, this may sound like a welcome boost to productivity. But universities are not information factories; they are systems of practice. They rely on a pipeline of graduate students and early-career academics who learn to teach and research by participating in that same work. If autonomous agents absorb more of the ‘routine’ responsibilities that historically served as on-ramps into academic life, the university may keep producing courses and publications while quietly thinning the opportunity structures that sustain expertise over time.

The same dynamic applies to undergraduates, albeit in a different register. When AI systems can supply explanations, drafts, solutions and study plans on demand, the temptation is to offload the most challenging parts of learning. To the industry that is pushing AI into universities, it may seem as if this type of work is ‘inefficient’ and that students will be better off letting a machine handle it. But it is the very nature of that struggle that builds durable understanding. Cognitive psychology has shown that students grow intellectually through doing the work of drafting, failing, trying again, grappling with confusion and revising weak arguments. This is the work of learning how to learn.

Taken together, these developments suggest that the greatest risk posed by automation in higher education is not simply the replacement of particular tasks by machines, but the erosion of the broader ecosystem of practice that has long sustained teaching, research and learning.

An uncomfortable inflection point

So what purpose do universities serve in a world in which knowledge work is increasingly automated?

One possible answer treats the university primarily as an engine for producing credentials and knowledge. There, the core question is output: Are students graduating with degrees? Are papers and discoveries being generated? If autonomous systems can deliver those outputs more efficiently, then the institution has every reason to adopt them.

But another answer treats the university as something more than an output machine, acknowledging that the value of higher education lies partly in the ecosystem itself. This model assigns intrinsic value to the pipeline of opportunities through which novices become experts, the mentorship structures through which judgement and responsibility are cultivated, and the educational design that encourages productive struggle rather than optimising it away. Here, what matters is not only whether knowledge and degrees are produced, but how they are produced and what kinds of people, capacities and communities are formed in the process. In this version, the university is meant to serve as no less than an ecosystem that reliably forms human expertise and judgement.

In a world where knowledge work itself is increasingly automated, we think universities must ask what higher education owes its students, its early-career scholars and the society it serves. The answers will determine not only how AI is adopted, but also what the modern university becomes.

By Prof Nir Eisikovits and Jacob Burley

Nir Eisikovits is a professor of philosophy and founding director of the Applied Ethics Center at the University of Massachusetts Boston. His research focuses on the ethics of war and the ethics of technology, and he has written many books and articles on these topics.

Jacob Burley is a junior research fellow at the University of Massachusetts Boston, specialising in the ethics of emerging technologies. His work explores how artificial intelligence reshapes human decision-making, responsibility and knowledge practices, with particular attention to the normative and epistemic challenges posed by increasingly autonomous systems.


Broadcom bets on 2nm stacked silicon to rival Nvidia in AI

The technology is based on a vertically integrated design that bonds two chips into a single stack. By tightly coupling these silicon layers, Broadcom’s engineers aim to increase data transfer speeds while reducing energy consumption – a critical advantage as AI workloads become more computationally intensive.

Smart TV apps are quietly scraping web data for AI training

Bright Data operates a global proxy network designed to collect publicly available web content, and customers are voluntarily joining the network so that they can save a few dollars on their TV viewing experience. According to a recent report, code associated with Bright Data has appeared in certain smart TV…

The global RAM and SSD shortage crisis, explained


A global shortage is responsible for every electronics and computer manufacturer in the world — including Apple — paying twice as much for RAM and flash storage as it did in 2025, and 10 times more than it paid in 2020. Here’s why there is little hope of that improving anytime soon.

Memory is in short supply globally — Image credit: SK Hynix

Apple has historically been able to closely control the cost of its components. Buying in huge volumes from multiple suppliers gave it economies of scale that made Apple a sought-after customer for everything from display makers to storage vendors.
But that dynamic has changed. A global shortage of key components like memory and storage has seen the price of both skyrocket, and Apple is far from the only company affected.

Galaxy S26 vs. iPhone 17: Which entry-level flagship is right for you?


For 2026, the comparison between baseline iPhone and Android flagships comes down to two phones that are closer than they’ve ever been — the Galaxy S26 at $899 and the iPhone 17 at $799. Same form factor, same screen size, very different philosophies.

We’ve broken down everything that actually moves the needle — design, display, performance, cameras, battery, and software — because the right phone isn’t the one with the longer spec sheet. It’s the one that fits how you actually use it.

Price and availability

The iPhone 17 kicks off at $799 with 256GB baked in from the start — no arguing with that. The Galaxy S26 lands at $899 for 256GB. Last year’s S25 was $859, so Samsung snuck in a $40 increase, and the ongoing memory shortage got the blame.

So there’s a $100 gap sitting between these two phones right out the gate. Whether the S26 justifies it over the iPhone 17 — or whether Apple’s just quietly winning on value before the comparison even starts — is what the rest of this piece is for.

Design

Pick up the S26 and the iPhone 17 back to back and the first thing you think is: did these two companies share a blueprint? Heights are dead even at 149.6mm. Width differs by 0.2mm — which doesn’t make a difference in real life.

Apple’s phone is thicker at 7.95 mm versus Samsung’s 7.2 mm, and heavier too, tipping the scales at 177 grams against the S26’s 167 grams. What gives away Samsung’s entry-level flagship is its boxy corners, which are immediately recognizable against the rounded corners on the iPhone 17.

Both phones use aluminum frames, so nobody’s winning a materials fight there. The glass is where they split — Gorilla Glass Victus 2 front and back on the S26, and Apple’s Ceramic Shield 2 on the iPhone 17’s front, which Apple says is three times more resistant to scratches than regular glass.

Dunking either one is fine; both carry IP68 ratings. The S26 comes in Black, Cobalt Violet, Sky Blue, and White — pick one and people will notice. The iPhone 17 gives you Black, White (my personal favorite), Mist Blue, Sage, and Lavender — tones quiet enough that your phone practically whispers.

Display

Both screens measure 6.3 inches, so that argument ends before it starts. Where things get interesting is everything underneath that number.

The iPhone 17 sports a 2622 x 1206 pixel OLED panel at 460 ppi, sharper than the Galaxy S26’s panel, which maxes out at FHD+ with 2340 x 1080 pixels (411 ppi). The S26’s display is fine, looks good, and frankly most people won’t lose sleep over it. Side-by-side though, the difference shows (I hope Samsung sees it as well).

The S26 peaks at 2,600 nits outdoors, which handles most sunny days well enough. The iPhone 17 pushes to 3,000 nits — and upon using it side by side with the Galaxy S25 (which shares its peak brightness with the S26), I found the iPhone to be noticeably brighter, especially under direct sunlight.

Both do 1-120Hz adaptive refresh rates, so scrolling feels equally fluid on either one. Then there’s always-on display — both phones keep your notifications visible without fully waking the screen, which sounds minor until you’ve used it for a week and then picked up a phone without it.

While I’ve grown accustomed to the Dynamic Island on the iPhone 17, you might not like it at first glance, especially if you’re upgrading from an Android phone with a punch-hole camera — that’s something to keep in mind as well.

Performance

Specs-wise, Samsung shows up with more — Snapdragon 8 Elite Gen 5, 3nm, 12GB RAM. Apple brings the A19 and 8GB. On a spec sheet, that reads as a clean Samsung win, but phones aren’t spec sheets.

Benchmarks tell a messier story. The S26 pulls ahead when multiple cores are working together, which is relevant for heavy multitasking. The scores are nearly identical in the single-core test, which is what your phone actually leans on for most things — launching apps, typing, switching between tasks. All in all, both phones offer similar (read: excellent) day-to-day performance.

The RAM gap is where it gets more practical. Twelve gigabytes means more apps stay open in the background without reloading. If your phone use involves juggling a lot at once, the S26 has more headroom. And yes, both are perfectly capable of handling the most demanding games at high frame rates; it’s just a matter of whether the developer has included support for it.

I’ve been using the iPhone 17 for about six months now, and I haven’t once felt that the phone doesn’t offer enough CPU or GPU performance when I need it. That’s the thing with top-tier mobile chipsets: they’ve got more horsepower than most people can use upfront, but it helps maintain performance over the long term.

Operating System

The S26 runs One UI 8.5 on Android 16 — the most put-together version of Samsung’s skin yet. Rounder, cleaner, and stuffed with settings you’ll spend a Sunday afternoon exploring.

Galaxy AI actually pulls weight now: Now Nudge suggests replies by reading your screen context, Call Screening stops unknown callers before your phone buzzes, and Audio Eraser finally works inside YouTube and Instagram, not just Samsung’s own apps. Bixby gets Perplexity as backup for the questions it used to fumble.

iOS 26 got a full face-lift with Liquid Glass — translucent menus and icons that split opinion pretty cleanly between “stunning” and “bit much.” Apple Intelligence handles real-time translation across calls, Messages, and FaceTime, though it’s not as useful as Galaxy AI. The ecosystem perks, however, are still superior.

Samsung commits to seven years of operating system and security updates, while Apple usually provides around five to six years of software support.

Cameras

The S26 has a 50MP main, 12MP ultrawide, and a dedicated 10MP 3x telephoto. The iPhone 17 runs a 48MP main at f/1.6, a 48MP ultrawide, and a 2x “zoom” that’s just the main sensor being cropped — not a real telephoto lens.

Daylight shots on both look great, full stop. Where they differ is taste. Samsung cranks up the saturation and contrast — your photos come out looking like they’ve already been edited, ready to post. Apple mostly shows you what was there, i.e., the camera reproduces natural, neutral colors.

After dark, the iPhone quietly holds its own. Apple’s Night Mode has been one of the best in the business for years (along with the f/1.6 aperture). Zoom goes the other way. A real 3x optical lens on the S26 versus Apple’s cropped 2x is a clear hardware win for Samsung.

The most unique thing about the iPhone 17’s camera system is its selfie shooter — an 18MP (f/1.9) square-shaped camera sensor that can capture super wide selfies in multiple aspect ratios. Apple surely needs to bump up the resolution for the visual area the sensor covers, but even so, Samsung’s 12MP sensor is no match for it.

Video on both is strong at 4K/60fps with good stabilization. Apple’s color science gives it a slight edge in footage quality, plus the sensor-shift stabilization works like a charm, but the S26 shoots 8K if that’s something you need. Most people don’t, but the option exists.

Battery

The S26 has a bigger tank — 4,300mAh versus the iPhone 17’s 3,692mAh — and Samsung claims 31 hours of video playback to Apple’s 30. That one-hour gap, achieved with a notably smaller cell on Apple’s side, says more about the A19’s efficiency than it does about the S26’s battery.

Charging is where iPhone pulls ahead. With 40W wired charging, the handset reaches 50% in roughly 25 minutes. The S26 still sits at 25W — same as its last two predecessors. Wireless is where the gap reopens. The iPhone 17 does 25W via MagSafe; the S26 base model caps at 15W standard wireless.

Conclusion

The S26 makes a stronger case on paper. More RAM, a bigger battery, a real telephoto lens, 8K video, and One UI 8.5 giving you enough customization to keep a hobbyist busy for weeks. It’s the better phone for power users, Android loyalists, and anyone who shoots a lot of zoom photos or wants their phone to last the full day.

The iPhone 17 wins on the things that are harder to put in a spec sheet. Faster charging, better low-light photography, smoother sustained performance under load, the refreshing iOS 26 experience, and an ecosystem so tightly integrated it borders on a lifestyle choice. If you own a Mac, iPad, or AirPods, the iPhone 17 doesn’t just work well — it works together in a way the S26 can’t replicate.


I’m thrilled by Wednesday’s star-studded third year: here’s everything we know about season 3


Netflix’s mystery series Wednesday reinvigorated the Addams Family for the modern age, becoming one of the streaming giant’s most-watched shows. It’s only natural that Netflix keep the hype running with Wednesday‘s upcoming third season.

There’s a lot we can expect to see from Wednesday after season 2. It’s unclear what Wednesday and her peers will encounter in the next season, but what makes the show so fun is watching the mystery unfold. In all fairness, we don’t like waiting years for more episodes. Don’t fret. We’ve got you covered with everything you need to know about Wednesday season 3.

What’s the story of Wednesday season 3?

Netflix Tudum wrote that in season 3, “a new wave of insidious interlopers will be darkening the doors of Nevermore Academy.” Wednesday showrunners Al Gough and Miles Millar said to Tudum that the third season will also “excavate some long-rotting Addams Family secrets.”

“Our goal for Season 3 is the same as it is for every season: to make it the best season of Wednesday we possibly can,” said Gough. “We want to continue digging deeper into our characters while expanding the world of Nevermore and Wednesday.”

These statements fit with what we saw as Wednesday left Nevermore with Uncle Fester and Thing in search of her alpha werewolf bestie, Enid. In this ending to season 2, Wednesday had a vision of her Aunt Ophelia, imprisoned by Grandmama Frump and writing in blood, “Wednesday Must Die,” suggesting some of the Addams family’s skeletons will come out of the closet.

When will Wednesday season 3 come out?

Since Wednesday season 3 is so early into production, there is no release date set at this time. We can’t see into the future like Wednesday Addams, but it is likely she will return in 2027. Though the second season premiered three years after the first, the 2023 writers’ and actors’ strike stalled production for several months. Barring any future delays, the wait for season 3 should last a total of two years rather than three.

When and where is Wednesday season 3 filming?

Production for Wednesday season 3 started in February 2026, according to Netflix Tudum. Like with season 2, filming will take place near Dublin.

Who will return in Wednesday season 3?

As usual, Wednesday will feature a vast, quirky cast of characters in season 3, including members of the Addams Family and Wednesday’s classmates at Nevermore Academy.

  • Jenna Ortega as Wednesday Addams
  • Catherine Zeta-Jones as Morticia Addams
  • Luis Guzman as Gomez Addams
  • Isaac Ordonez as Pugsley Addams
  • Joanna Lumley as Grandmama Hester Frump
  • Joy Sunday as Bianca Barclay
  • Georgie Farmer as Ajax Tanaka
  • Moosa Mostafa as Eugene Ottinger
  • Evie Templeton as Agnes DeMille
  • Victor Dorobantu as Thing
  • Winona Ryder as Tabitha
  • Emma Myers as Enid Sinclair
  • Hunter Doohan as Tyler Galpin
  • Fred Armisen as Uncle Fester
  • Billie Piper as Isadora Capri
  • Luyanda Unati Lewis-Nyawo as Santiago
  • Oscar Morgan as Atticus
  • Kennedy Moyer as Daisy
  • Noah Taylor as Cyrus
  • Chris Sarandon as Balthazar
  • Eva Green as Ophelia Frump

Who’s new to Wednesday season 3?

Just like season 2, Wednesday‘s third season will welcome plenty of new characters to Nevermore Academy. Actors joining the cast next season include Winona Ryder (Stranger Things), Chris Sarandon (Dog Day Afternoon, The Princess Bride), Noah Taylor (Peaky Blinders, Game of Thrones), Oscar Morgan (A Knight of the Seven Kingdoms), and Kennedy Moyer (Task, Roofman).

In an interview with Netflix Tudum, Gough and Millar shared a statement praising Eva Green and her performance as Wednesday’s Aunt Ophelia:

“Eva Green has always brought an exhilarating, singular presence to the screen — elegant, haunting, and beautifully unpredictable. Those qualities make her the perfect choice for Aunt Ophelia. We’re excited to see how she transforms the role and expands Wednesday’s world.”

Green also said to Tudum, “I’m thrilled to join the woefully twisted world of Wednesday as Aunt Ophelia. This show is such a deliciously dark and witty world, I can’t wait to bring my own touch of cuckoo-ness to the Addams family.”

Winona Ryder’s casting is also particularly noteworthy. The actor has frequently starred as a main player in producer Tim Burton’s films. Most recently, she starred alongside Jenna Ortega in Beetlejuice Beetlejuice. Whether or not Ryder’s new character supports Wednesday on her journey, it will be exciting to see her reignite her on-screen chemistry with Ortega.

Are there any trailers for Wednesday season 3?

On February 23, Netflix shared a fiendishly flamboyant video announcing that production for Wednesday season 3 had begun, all while revealing the cast. The trailer also featured a “?” to label one of the season’s cast members, suggesting this mystery character plays a role important enough that naming them would spoil the story.


Google and OpenAI employees sign open letter in ‘solidarity’ with Anthropic


Hundreds of employees at Google and OpenAI have signed an open letter urging their companies to stand with Anthropic in its standoff with the Pentagon over military applications for AI tools like Claude.

The letter, titled “We Will Not Be Divided,” calls on the leadership of both companies to “put aside their differences and stand together to continue to refuse the Department of War’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.” These are two lines that Anthropic CEO Dario Amodei has said should not be crossed by his or any other AI company.

As of publication, the letter has over 450 signatures, almost 400 of which come from Google employees and the rest from OpenAI. Currently, roughly 50 percent of all participants have chosen to attach their names to the cause, with the rest remaining anonymous. All are verified as current employees of these companies. The original organizers of the letter aren’t Google or OpenAI employees; they say they are unaffiliated with any AI company, political party or advocacy group.

The open letter is the latest development in the saga between Anthropic and US Defense Secretary Pete Hegseth, who threatened to label the company a ‘supply chain risk’ if it did not agree to withdraw certain guardrails for classified work. The Pentagon has also been in talks with Google and OpenAI about using their models for classified work. The letter argues the government is “trying to divide each company with fear that the other will give in.”

OpenAI CEO Sam Altman told his employees on Friday that the ChatGPT maker will draw the same red lines as Anthropic, according to an internal memo. Altman also said on Friday that he doesn’t “personally think the Pentagon should be threatening DPA against these companies.”


Smartphone Market To Decline 13% in 2026, Marking the Largest Drop Ever Due To the Memory Shortage Crisis


An anonymous reader shares a report: Worldwide smartphone shipments are forecast to decline 12.9% year-on-year (YoY) in 2026 to 1.1 billion units, according to the International Data Corporation (IDC) Worldwide Quarterly Mobile Phone Tracker. This decline will bring the smartphone market to its lowest annual shipment volume in more than a decade. The current forecast represents a sharp decline from our November forecast amid the intensifying memory shortage crisis.
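As a quick sanity check on IDC's figures, the forecast arithmetic holds up. This sketch assumes the 2025 baseline of roughly 1.26 billion units cited in the related report later in this digest:

```python
# Sanity-check IDC's forecast arithmetic.
# Assumption: 2025 shipments of ~1.26 billion units (figure cited in
# the related smartphone-shipments report in this digest).
shipments_2025 = 1.26e9   # units shipped in 2025
decline_yoy = 0.129       # forecast 12.9% year-on-year decline

shipments_2026 = shipments_2025 * (1 - decline_yoy)
print(f"Forecast 2026 shipments: {shipments_2026 / 1e9:.2f} billion units")
# 1.26 billion * 0.871 ≈ 1.10 billion, matching the ~1.1 billion in the report
```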


Global smartphone shipments expected to fall 13% amid memory supply crunch


According to a new report from market research firm International Data Corporation, global smartphone shipments are expected to total around 1.1 billion units this year, down from 1.26 billion in 2025. This marks a significant downward revision from the company’s November 2025 forecast, which projected a decline of between 0.9…

Perplexity launches Computer, wants AI to run tasks for months, not minutes


Rather than relying on a single model, Perplexity AI’s Computer system functions as an orchestrator across multiple models. Anthropic’s Claude Opus 4.6 serves as the primary reasoning engine, while Gemini handles deep research tasks. Nano Banana generates images, Veo 3.1 produces video, Grok executes lightweight, speed-optimized tasks, and OpenAI’s ChatGPT…

Loewe’s Vega TVs give you slick design in smaller sizes


Loewe has announced the Vega, a new range of compact 4K Ultra HD smart TVs available in 32 and 43-inch sizes.

The Vega sits below Loewe’s flagship Stellar OLED line, which spans 42 to 97 inches and starts at £1,699, but uses VA LCD panels with full-array Direct LED backlighting rather than OLED, a technology choice that allows Loewe to hit higher peak brightness figures across a smaller and more affordable chassis.

The 43-inch model carries 390 LED dimming zones and reaches a peak luminance of 880 cd/m², while the 32-inch version uses 260 dimming zones and reaches 550 cd/m², both figures above what most competing LCD televisions at these screen sizes typically deliver, even in bright daylight conditions.

Both models support the full range of HDR formats, including Dolby Vision IQ, HDR10, and HLG, with the Vega marking the first time Loewe has offered a 4K Ultra HD panel in a 32-inch format, a size that most manufacturers continue to supply only in Full HD resolution.

The integrated soundbar delivers 60 watts of Class-D amplification developed and tuned by Loewe’s in-house audio team, supporting Dolby Atmos and connecting to external sound systems through HDMI eARC, a configuration that competes more directly with premium soundbar bundles than with the basic speakers typically built into televisions at this screen size.

Smart features and connectivity

Loewe’s os9 smart platform, built on the VIDAA operating system, handles streaming access across Netflix, YouTube, Disney+, and Apple TV, with Apple AirPlay, Miracast, DLNA, and Matter connectivity expanding the Vega’s integration with both Apple and broader smart home ecosystems.

The 43-inch model carries two HDMI 2.1 ports supporting 4K at up to 120Hz alongside VRR and ALLM for low-latency gaming, while the 32-inch version supports 4K at up to 60Hz through its HDMI 2.1 ports, with both models also offering cloud gaming access through Blacknut and Boosteroid via the VIDAA platform.

A brushed aluminium frame, rotatable metal table stand with chrome finish, and integrated cable management with magnetic rear covers reflect the same design discipline Loewe applies across its higher-end OLED TV range, placing the Vega closer in aesthetic approach to Bang and Olufsen than to mass-market LCD televisions at comparable screen sizes.

The Loewe Vega 32-inch is priced at £1,650 and the 43-inch at £1,900, with both models available through selected Loewe retail partners from March 2026.

For a closer look at how the Vega’s LCD panel compares against the best screens on the market, our guide to the best OLED TVs rounds up the top picks from every major brand.
