Apple iPhone 17 vs iPhone 16: Should you upgrade?

Need a new iPhone but aren’t sure whether to opt for the latest iPhone 17, or to save a bit of money and get 2024’s iPhone 16? You’ve come to the right place.

While not all of us necessarily need the latest flagship smartphone, and opting for an older one is a great way to save money, many worry that there could be too much of a sacrifice. After all, smartphones are ingrained in our everyday lives so they need to be reliable.

With this in mind, we’ve compared our reviews of the iPhone 17 to the iPhone 16 so you can decide which handset to go for.

Otherwise, make sure you visit our list of the best smartphones and, if you aren’t yet sold on an iPhone, our best Android phones will offer our favourite alternatives.

Price and Availability

The iPhone 17 has a starting RRP of £799/$799, which is unsurprisingly more expensive than its predecessor. However, it’s worth noting that this price is for the 256GB model.

In comparison, while the iPhone 16 starts at a cheaper £699/$699, this is for the smaller 128GB model. In fact, if you upgrade to 256GB, its RRP rises above the iPhone 17’s, to £899/$899.

Design

  • Both the iPhone 17 and iPhone 16 share the same design
  • iPhone 17 is fitted with Ceramic Shield 2
  • Both include the Action and Camera Control buttons

Other than their colour selection, and the iPhone 17 being slightly bigger, there isn’t much difference between the two iPhones’ designs. Both sport the same flat-edged, rounded-corner design that was first introduced with the iPhone 12 – and this certainly isn’t a bad thing. Even so, there are a few tweaks to the iPhone 17 that, although not immediately visible, help make the handset feel more premium.

Firstly, the iPhone 17 sports Apple’s Ceramic Shield 2 protection on both the front and back, whereas the iPhone 16 is fitted with the older Ceramic Shield. Apple claims that Ceramic Shield 2 is more durable than its predecessor and should prevent micro-scratches from forming. Admittedly, we didn’t put the iPhone 17 through particularly wild tests to determine whether this is true, but we still found that the panels remained scratch-free after prolonged use.

Otherwise, both the iPhone 17 and 16 have an IP68 rating and include the reprogrammable Action and Camera Control buttons.

Winner: iPhone 17

Screen

  • iPhone 17 benefits from a 120Hz refresh rate while the iPhone 16 maxes out at 60Hz
  • The iPhone 17’s screen is slightly bigger at 6.3-inches
  • Both are OLED displays

Apple has finally followed the lead of the best Android phones (and even the majority of the best mid-range phones) and introduced a 120Hz refresh rate to the iPhone 17. Dubbed ProMotion, the LTPO-enabled technology was previously reserved for its Pro models, which was a huge bugbear for many. The iPhone 16, by contrast, sports just a 60Hz refresh rate.

iPhone 17. Image Credit (Trusted Reviews)

As expected, the inclusion of ProMotion makes the iPhone 17 feel impressively smooth in both everyday use and gaming, especially in comparison to the iPhone 16. In fact, we hailed the iPhone 17 as having “the best screen yet on an entry-level iPhone”.

Otherwise, the iPhone 17’s screen is slightly bigger than the iPhone 16’s, at 6.3 inches compared to 6.1 inches. Both panels are OLED and support HDR10 and Dolby Vision content.

Winner: iPhone 17

Camera

  • Neither handset has a dedicated zoom lens, but both include a 2x in-sensor zoom
  • Both have main and ultrawide rear lenses, but the iPhone 17’s are both 48MP
  • The iPhone 17 has an upgraded 18MP square selfie camera

Apple made many thoughtful improvements with the iPhone 17’s camera hardware. While we’d still recommend opting for the iPhone 17 Pro if you’re serious about photography, the iPhone 17 is a brilliant choice for most casual snappers.

While both the iPhone 16 and iPhone 17 are equipped with a 48MP main lens that delivers consistently sharp and detailed shots, the iPhone 17 benefits from a 48MP ultrawide whereas the iPhone 16’s is just 12MP. The difference, perhaps unsurprisingly, is enormous: we found the iPhone 17 delivers a big jump in overall resolution and better low-light shots too.

Captured on iPhone 17. Image Credit (Trusted Reviews)

One area which lets both the iPhone 17 and iPhone 16 down is the lack of a dedicated zoom lens, unlike their Pro alternatives. Even so, both handsets are fitted with an in-sensor 2x zoom instead, which allows you to get closer without sacrificing quality and detail.

While the iPhone 16’s 12MP front lens is undoubtedly decent, the iPhone 17 boasts a welcome upgrade. Not only is the front camera 18MP but it’s now a square sensor which allows you to shoot portrait and landscape shots without actually having to rotate your phone. It may sound small, but it’s a seriously brilliant tweak.

Winner: iPhone 17

Performance

  • A19 vs A18 chips
  • The iPhone 17’s 120Hz refresh rate makes gaming and scrolling feel smoother
  • Apple has ditched the original 128GB storage option for 256GB with the iPhone 17

Although neither the iPhone 17 nor iPhone 16 are quite as powerful as their respective Pro siblings, both offer brilliant performance that’s enough for most users. In fact, unless you’re playing high-res AAA titles or editing multiple 4K video streams in LumaFusion, you’re unlikely to notice a difference.

Powering the iPhone 17 is Apple’s A19 chip which, when paired with the 120Hz refresh rate, ensures apps open instantly, scrolling feels smooth and you can comfortably achieve high frame rates in games.

iPhone 16. Image Credit (Trusted Reviews)

The iPhone 16, meanwhile, runs on Apple’s A18 chip and remains a capable smartphone – even over a year on. In fact, our benchmarking tests found that it doesn’t fall far behind the iPhone 16 Pro Max. The biggest nuisance with the iPhone 16 is that it caps out at a 60Hz refresh rate. Even so, if you’re coming from an even older phone, you’re unlikely to notice this too much.

Winner: iPhone 17

Software

  • Both support iOS 26
  • New Liquid Glass interface is easy to use and, we think, looks great
  • Apple Intelligence remains an afterthought

When the iPhone 16 launched back in 2024, arguably one of the reasons to buy the phone was the promise of the vast Apple Intelligence toolkit. Unfortunately, nearly two years on, Apple Intelligence still hasn’t quite come into its own.

Siri on the iPhone 17. Image Credit (Trusted Reviews)

Sure, Writing Tools is somewhat useful and Image Playground is fun for a while, but generally the AI toolkit fails to impress – especially when Gemini really does help to enhance the best Android phones. Essentially, we wouldn’t recommend buying either the iPhone 17 or the iPhone 16 purely for Apple Intelligence.

Otherwise, both iPhones support iOS 26. Overall, we don’t have many qualms with iOS 26 and find the software polished, easy to use and familiar, even with the new Liquid Glass design.

Winner: Tie

Battery

  • Both offer all-day battery life
  • iPhone 17 benefits from faster 40W wired charging
  • Both support wireless charging at up to 25W

Apple has never boasted a strong reputation for battery life, especially when compared to many of the best Android phones, which sport seriously mighty cells. Even so, we found that both the iPhone 17 and iPhone 16 are solid all-day handsets, and we easily ended days with some charge remaining.

Plus, if you want to top up during the day then it’s good to know both support wireless charging too.

However, the iPhone 17 benefits from faster 40W wired charging, which we found took around 85 minutes to reach 100%. In comparison, the iPhone 16 supports slightly slower speeds of 30W which took around 100 minutes to fully recharge.

Winner: iPhone 17

Verdict

With a 120Hz refresh rate, powerful processor and improved camera hardware, the iPhone 17 is an easy recommendation for many – especially if you’re coming from an older iPhone.

Having said that, if you aren’t too fussed about having the absolute latest technologies and want to get a new-ish iPhone but without the high price tag, then the iPhone 16 remains a solid choice.

Why I Can’t Pretend Teacher Learning Doesn’t Matter Anymore

This story was published by a Voices of Change fellow. Learn more about the fellowship here.

I’ve attended my share of professional development sessions as an educator. Too often, I’ve walked away asking the same question: Is this really how we expect teachers to learn?

I still remember one session on trauma-informed teaching held in a school cafeteria. The tables and attached seats were too small for most of us, while the lights hummed overhead. For two and a half hours, the facilitator read from an endless slide deck about the importance of connection and empathy. There was minimal context building, limited discussion and no reflection. By the end, the facilitator smiled and said, “Now you are all trauma-informed teachers!” I think my eyes rolled so far back they almost stayed there.

Sitting and listening to someone talk for 45 to 60 minutes is not learning, let alone two and a half hours. My body knows it before my brain does. I get restless, my mind drifts, I check the time and take a walk to refill my water bottle. In that first hour, disappointment sets in fast.

Minimal conditions for adult learning have become the norm. I used to resent that; now I fear it. Because the longer I sat in those breakout rooms, the quieter I became. My curiosity dulled, the topic’s urgency faded and I started doing what was expected: showing up, signing in and leaving seemingly unchanged. That terrified me. I could feel myself becoming the kind of learner I never wanted my students to be. Even the most dedicated teachers can wilt in the wrong conditions.

And here’s what often goes unspoken: teachers already give so much of their time to these sessions, spending hours after school on professional days and during planning periods. That investment deserves more than compliance-based sessions that leave teachers unchanged or walking away with a checklist of “next steps” that never take root.

After experiences like that, I find myself returning to familiar questions: Why do we accept for ourselves what we would never accept for our students? What if we taught students the way we teach teachers? We’d call it ineffective, parents would complain and administrators would intervene. Yet, the same approach is accepted for teachers’ professional development: lecture-heavy, one-size-fits-all and compliance-driven. I knew better for my students and kept accepting less for myself. That contradiction began to haunt me.

As a high school English teacher, I built lessons around engagement, differentiation and relevance so students could connect learning to their lives. They deserved instruction that met them where they were and balanced support with challenge. When it comes to teaching teachers, we need the same shift — from professional development done to us, to professional learning created by, for and with us.

A Different Way Exists

I remember one of the first times I felt what real professional learning could be. Around 2013, when Edcamp was spreading across schools, my administrators used this format for one of our PD days. These grassroots “unconferences” turned the familiar model upside down. There were no pre-approved presenters or hour-by-hour agendas. Teachers built the schedule on the spot and moved freely between conversations. If one wasn’t helpful, you left and found another. The emphasis was on curiosity and choice.

I led two sessions that day: one on digital tools for learning and another on equitable grading. I didn’t stand in front of the group; I sat in the circle. We tested tools in real time, pried into long-held grading beliefs and argued about what being fair really means in high school grading. What I remember most wasn’t the content but the energy in the room and the buzz of teachers thinking, building, disagreeing and learning together.

It was the first time I realized how much trust professional learning requires: trust in teachers’ intelligence, instincts and creativity. We talk so much about empowering students, but rarely about empowering teachers. Edcamp, brief as it was, made me wonder what would happen if we trusted educators the way we expect them to trust their students.

That lesson deepened through the Rhode Island Writing Project at Rhode Island College, where “teachers teaching teachers” wasn’t a slogan but a practice. During the summer institute, I joined a community of educators from across the state. We wrote together, shared feedback and listened, really listened, to each other’s classroom stories and the complex and messy overlap between personal and professional life.

That summer changed me. I saw what it meant to honor teacher knowledge, and to treat professional learning as a dialogue, not delivery. It ruined me, in the best way. Once you’ve experienced learning that is alive, reciprocal and demanding, it’s hard to sit quietly again while someone reads from slides.

But here’s what I know: those moments were rare. Outliers.

Most professional development since that summer has looked more like paperwork than pedagogy. Neatly packaged, disconnected and efficient to a fault. For many educators, PD is still something that happens to them, not with them. I’ve seen what that does. It breeds cynicism and convinces brilliant teachers that their professional growth is optional, even disposable. Novice and veteran teachers alike found ways to get through or get by during especially disconnected sessions. Not out of defiance, but self-preservation.

As a district administrator, I find myself in a very different position where I receive more structured opportunities for professional learning than I ever did in the classroom. I attend multi-day workshops on leadership frameworks, statewide coaching institutes and even regional conferences focused on instructional design. They’re well-planned, reflective, energizing and respectful of participants’ needs. Nothing like the one-off PowerPoints teachers sit through during the school day or after school.

That contrast is hard to ignore. It reminds me just how uneven our systems can be. The higher up you go, the more development you’re offered; the closer you are to students, the less you get.

I carry that discomfort with me every day. I think about the teachers in sessions I once led or attended who expressed their skepticism and tiredness of being told what to value or what new requirement to add to their already stacked list of classroom responsibilities. My job now is to make sure the professional learning I help design never repeats that pattern — that it respects their time, their expertise and their humanity. I don’t want them to feel the quiet resignation I once did.

This problem runs deeper than any one district or leader. A recent report from the Annenberg Institute at Brown University affirmed Rhode Island’s commitment to investing in professional learning. The report highlights state-level efforts such as expanding instructional coaching, building in common planning time and fostering cross-district collaboration. These are the supports I wish I’d had years ago.

The report also reminded me of what I’ve seen firsthand: resources and structures only work when the design honors teachers’ time and trust.

How We Teach Teachers

How we design professional learning makes visible the value we place on teachers. When PD is treated as a formality, the message is that teacher growth is optional. But when it’s treated as authentic learning, the message is clear: adult learning matters, and investing in teachers is investing in students.

If we want professional learning that serves educators and the students they teach, we must move beyond seat time and toward structures that honor teacher expertise and foster continuous improvement. The elements of strong professional learning aren’t mysteries; they mirror the same principles of good teaching.

A few approaches that work include teacher-led inquiry cycles that invite educators to identify problems of practice and design solutions together; offering choice and voice in sessions that make learning relevant and personalized; building in time for application and reflection; and creating job-embedded opportunities where teachers can learn in context alongside their colleagues and students.

The future of our profession will be defined by what we choose right now and whether we can model the kind of learning we say we want for our students.

Claude Code is the Inflection Point

About 4% of all public commits on GitHub are now being authored by Anthropic’s Claude Code, a terminal-native AI coding agent that has quickly become the centerpiece of a broader argument that software engineering is being fundamentally reshaped by AI.

SemiAnalysis, a semiconductor and AI research firm, published a report on Friday projecting that figure will climb past 20% by the end of 2026. Claude Code is a command-line tool that reads codebases, plans multi-step tasks and executes them autonomously. Anthropic’s quarterly revenue additions have overtaken OpenAI’s, according to SemiAnalysis’s internal economic model, and the firm believes Anthropic’s growth is now constrained primarily by available compute.

Accenture has signed on to train 30,000 professionals on Claude, the largest enterprise deployment so far, targeting financial services, life sciences, healthcare and the public sector. On January 12, Anthropic launched Cowork, a desktop-oriented extension of the same agent architecture — four engineers built it in 10 days, and most of the code was written by Claude Code itself.

Week in Review: Most popular stories on GeekWire for the week of Jan. 25, 2026

Get caught up on the latest technology and startup news from the past week. Here are the most popular stories on GeekWire for the week of Jan. 25, 2026.

Sign up to receive these updates every Sunday in your inbox by subscribing to our GeekWire Weekly email newsletter.

Most popular stories on GeekWire

Payments platform BridgePay confirms ransomware attack behind outage

A major U.S. payment gateway and solutions provider says a ransomware attack has knocked key systems offline, triggering a widespread outage affecting multiple services.

The incident began on Friday and quickly escalated into a nationwide disruption across BridgePay’s platform.

Ransomware confirmed within hours of outage

BridgePay Network Solutions confirmed late Friday that the incident disrupting its payment gateway was caused by ransomware.

In an update posted Feb. 6, the company said it has engaged federal law enforcement, including the FBI and U.S. Secret Service, along with external forensic and recovery teams.

“Initial forensic findings indicate that no payment card data has been compromised,” the company said, adding that any accessed files were encrypted and that there is currently “no evidence of usable data exposure.”

BleepingComputer has contacted BridgePay with questions about the ransomware group involved, which BridgePay has not yet named.

Merchants report cash-only payments

Around the same time BridgePay disclosed the incident, some U.S. merchants and organizations began telling customers they could only accept cash due to a nationwide card-processing outage.

One restaurant said its “credit card processing company had a cyber security breach” and that card payments were unavailable nationwide.

Restaurant says it can only take cash during a point-of-sale outage

The city government of Palm Bay, Florida, announced:

“BridgePay Network Solutions, our third-party credit card processing vendor, is experiencing a nationwide service disruption. As a result, the City’s online billing payment portal is currently unavailable. We do not have an estimated restoration time.”

As such, the city government suggests that customers may make utility payments by cash, card, or check by appearing in person or, in limited cases, by calling the office.

Other organizations, including Lightspeed Commerce, ThriftTrac, and the City of Frisco, Texas, have reported service impacts from the BridgePay incident.

Payment gateway services hit hard

BridgePay’s status page showed major outages across core production systems, including:

  • BridgePay Gateway API (BridgeComm)
  • PayGuardian Cloud API
  • MyBridgePay virtual terminal and reporting
  • Hosted payment pages
  • PathwayLink gateway and boarding portals

Early warning signs appeared around 3:29 a.m., when monitoring detected degraded performance across multiple services, beginning with the “Gateway.Itstgate.com – virtual terminal, reporting, API” systems.

The intermittent service degradation eventually cascaded into a full system outage.

Within hours, the company disclosed the incident was cybersecurity-related and later confirmed it was ransomware.

The breadth of affected systems suggests widespread disruption for merchants and payment integrators relying on the platform for card processing.

As of the latest update, BridgePay said recovery could take time and is being handled “in a secure and responsible manner,” while the company continues its forensic investigation.

The incident adds to a growing wave of ransomware attacks targeting payment infrastructure, where outages can quickly ripple through real-world commerce when transaction pipelines go down.

T-Mobile layoffs: Telecom giant cuts 393 jobs across Washington state, including VP roles

(Photo by Mika Baumeister on Unsplash)

T-Mobile is laying off 393 workers in Washington as part of a new round of cuts, according to a filing with the state Employment Security Department released Monday morning.

More than 200 different job titles are impacted, according to the filing, including analysts, engineers and technicians, as well as directors and managers.

The cuts targeted nearly 210 senior- and director-level employees, plus seven employees with vice president or senior vice president titles. They include a senior VP of talent and four VP of legal affairs roles.

Affected employees worked at the company’s Bellevue headquarters; data centers in Bellevue and East Wenatchee; and at stores and other facilities in Bothell, Bellingham, Woodinville, Spokane Valley and elsewhere.

“As the next step in our evolution, we’re making some changes while continuing to hire to ensure we have the right focus, structure and momentum to keep changing the industry through innovation and a long-standing focus on customers,” the company said in an emailed statement. It added that it was responding to “a dynamic market.”

Affected employees were given 60 days’ notice, and the departures are expected to take effect April 2.

A WARN filing submitted to the state and signed by Monica Frohock, senior director of the Magenta Service Center, attributed the cuts to “changing business needs.”

“These facilities are not being closed,” the notice stated. “The layoffs are not due to relocation or contracting out employer operations or employee positions, but it is possible that some work currently done by these employees may at some point be done by others.”

T-Mobile employed about 70,000 people as of Dec. 31, 2024. The company has nearly 8,000 workers in the Seattle region, according to LinkedIn.

The cuts come as the Seattle area is being hit by thousands of tech-related layoffs, including job losses at Amazon, Expedia, Meta, Zillow and other companies.

T-Mobile, the largest U.S. telecom company by market capitalization, laid off 121 workers in August 2025. In November, former Chief Operating Officer Srini Gopalan replaced longtime leader Mike Sievert as CEO.

T-Mobile’s stock is down nearly 20% over the past 12 months. The company reported revenue of $18.2 billion in the third quarter, up 9% year-over-year, and added 1 million new postpaid phone customers.

Verizon, another telecom giant, laid off approximately 165 employees in Washington in November.

Editor’s note: Story updated to include emailed comments from T-Mobile.

Teaching Machines to Spot Human Errors in Math Assignments

When completing math problems, students often have to show their work. It’s a method teachers use to catch errors in thinking, to make sure students are grasping mathematical concepts correctly.

New AI projects in development aim to automate that process. The idea is to train machines to catch and predict the errors students make when studying math, to better enable teachers to correct student misconceptions in real time.

For the first time, developers can build fascinating algorithms into products that will help teachers without requiring them to understand machine learning, says Sarah Johnson, CEO at Teaching Lab, which provides professional development to teachers.

Some of these efforts trace back to the U.K.-based edtech platform Eedi Labs, which has held a series of coding competitions since 2020 intended to explore ways to use AI to boost math performance. The latest was held earlier this year, and it tried to use AI to capture misconceptions from multiple choice questions and accompanying student explanations. It relied upon Eedi Labs’ data but was run by The Learning Agency, an education consultancy firm in the U.S. A joint project with Vanderbilt University — and using Kaggle, a data science platform — the competition received support from the Gates Foundation and the Walton Family Foundation, and coding teams competed for $55,000 in awards.

The latest competition achieved “impressive” accuracy in predicting student misconceptions in math, according to Eedi Labs.

Researchers and edtech developers hope this kind of breakthrough can help bring useful AI applications into math classrooms — which have lagged behind in AI adoption, even as English instructors have had to rethink their writing assignments to account for student AI use. Some people have argued that, so far, there has been a conceptual problem with “mathbots.”

Perhaps training algorithms to identify common student math misconceptions could lead to the development of sophisticated tools to help teachers target instruction.

But is that enough to improve students’ declining math scores?

Solving the (Math) Problem

So far, the deluge of money pouring into artificial intelligence is unrelenting. Despite fears that the economy is in an “AI bubble”, edtech leaders hope that smart, research-backed uses of the technology will deliver gains for students.

In the early days of generative AI, people thought you could get good results by just hooking up an education platform to a large language model, says Johnson, of Teaching Lab. All these chatbot wrappers popped up, promising that teachers could create the best lesson plans using ChatGPT in their learning management systems.

But that’s not true, she says. You need to focus on applications of the technology that are trained on education-specific data to actually help classroom teachers, she adds.

That’s where Eedi Labs is trying to make a difference.

Currently, Eedi Labs sells an AI tutoring service for math. The model, which the company calls “human in the loop,” has human tutors check messages automatically generated by its platform before they are sent to students, and make edits when necessary.
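
To picture that workflow, here is a tiny, purely illustrative sketch of a human-in-the-loop gate: the model drafts a hint, a tutor approves or edits it, and only approved text reaches the student. The function names and the review step are our own assumptions, not Eedi's actual system.

```python
# Illustrative human-in-the-loop flow: a drafted hint is only delivered once a
# human tutor has reviewed it. Names and behaviour are assumed, not Eedi's API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftMessage:
    student_id: str
    text: str

def generate_draft(student_id: str, topic: str) -> DraftMessage:
    # Stand-in for the tutoring model's automatically generated hint.
    return DraftMessage(student_id, f"Hint: try rewriting both fractions with a common denominator ({topic}).")

def tutor_review(draft: DraftMessage) -> Optional[DraftMessage]:
    # A human tutor would approve, edit, or reject here; we trivially approve.
    return draft

def send_to_student(message: DraftMessage) -> None:
    print(f"-> {message.student_id}: {message.text}")

draft = generate_draft("student-42", "adding fractions")
approved = tutor_review(draft)
if approved is not None:  # nothing is sent without tutor sign-off
    send_to_student(approved)
```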

Plus, through efforts like its recent competition, leaders of the platform think they can train machines to catch and predict the errors students make when studying math, further expediting learning.

But training machine learning algorithms to identify common math misconceptions a student holds isn’t all that easy.

Cutting Edge?

Whether these attempts to use AI to map student misconceptions prove useful depends on what computer scientists call “ground truth,” the quality of the data used to train the algorithms in the first place. That means it depends on the quality of the multiple choice math problem questions, and also of the misconceptions that those questions reveal, says Jim Malamut, a postdoctoral researcher at Stanford Graduate School of Education. Malamut is not affiliated with Eedi Labs or with The Learning Agency’s competition.

The approach in the latest competition is not groundbreaking, he argues.

The dataset used in this year’s misconceptions contest had teams sorting through student answers from multiple choice questions with brief rationales from students. For the company, it’s an advancement, since previous versions of the technology relied on multiple choice questions alone.
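
To make the prediction task concrete, here is a minimal, purely illustrative sketch of a baseline a contest entry might start from: map a student's chosen answer plus their short rationale to a misconception label. The example data, labels and pipeline are our own assumptions, not Eedi Labs' schema or any entrant's method.

```python
# Illustrative only: predict a misconception label from a chosen answer plus a
# short written rationale. Data and labels are invented for the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

examples = [
    ("0.25, because 25 is bigger than 7", "longer_decimal_is_larger"),
    ("0.7, because tenths are worth more than hundredths", "no_misconception"),
    ("1/2 + 1/3 = 2/5, I added the tops and the bottoms", "adds_numerators_and_denominators"),
    ("1/2 + 1/3 = 5/6, using a common denominator of 6", "no_misconception"),
]
texts = [response for response, _ in examples]
labels = [misconception for _, misconception in examples]

# Bag-of-words baseline: TF-IDF features over the free text, then a classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

print(model.predict(["0.125 is bigger than 0.3 because it has more digits"]))
```

A real entry would train on far more labelled responses and, given the article's point about free-text rationales, would more likely lean on a large language model than on a bag-of-words baseline.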

Still, Malamut describes the use of multiple choice questions as “curious” because he believes the competition chose to work with a “simplistic format” when the tools they are testing are better-suited to discern patterns in more complex and open-ended answers from students. That is, after all, an advantage of large language models, Malamut says. In education, psychometricians and other researchers relied on multiple choice questions for a long time because they are easier to scale, but with AI that shouldn’t be as much of a barrier, Malamut argues.

Pushed by declining U.S. scores on international assessments, in the last decade-plus the country has shifted toward “Next-Generation Assessments” which aim to test conceptual skills. It’s part of a larger shift by researchers to the idea of “assessment for learning,” which holds that assessment tools place emphasis on getting information that’s useful for teaching rather than what’s convenient for researchers to measure, according to Malamut.

Yet the competition relies on questions that clearly predate that trend, Malamut says, in a way that might not meet the moment.

For example, some questions asked students to figure out which decimal was the largest, which sheds very little light on conceptual understanding. Instead, current research suggests that it’s better to have students write a decimal number using base 10 blocks or to point to missing decimals on a marked number line. Historically, these sorts of questions couldn’t be used in a large-scale assessment because they are too open-ended, Malamut says. But applying AI to current thinking around education research is precisely where AI could add the most value, Malamut adds.

But for the company developing these technologies, “holistic solutions” are important.

Eedi Labs blends multiple choice questions, adaptive assessments and open responses for a comprehensive diagnosis, says cofounder Simon Woodhead. This latest competition was the first to incorporate student responses, enabling deeper analysis, he adds.

But there’s a trade-off between the time it takes to give students these assessments and the insights they give teachers, Woodhead says. So the Eedi team thinks that a system that uses multiple choice questions is useful for scanning student comprehension inside a classroom. With just a device at the front of the class, a teacher can home in on misconceptions quickly, Woodhead says. Student explanations and adaptive assessments, in contrast, help with deeper analysis of misconceptions. Blending these gives teachers the most benefit, Woodhead argues. And the success of this latest competition convinced the company to further explore using student responses, Woodhead adds.

Still, some think the questions used in the competition were not fine-tuned enough.

Woodhead notes that the competition relied on broader definitions of what counts as a “misconception” than Eedi Labs usually does. Nonetheless, the company was impressed by the accuracy of the AI predictions in the competition, he says.

Others are less sure that it really captures student misunderstandings.

Education researchers now know a lot more about the kinds of questions that can get to the core of student thinking and reveal misconceptions that students may have than they used to, Malamut says. But many of the questions in the contest’s dataset don’t accomplish this well, he says. Even though the questions included multiple choice options and short answers, it could have used better-formed questions, Malamut thinks. There are ways to ask the questions that can bring out student ideas. Rather than asking students to answer a question about fractions, you could ask students to critique others’ reasoning processes. For example: “Jim added these fractions in this way, showing his work like this. Do you agree with him? Why or why not? Where did he make a mistake?”

Whether it’s found its final form, there is growing interest in these attempts to use AI, and that comes with money for exploring new tools.

From Computer Back to Human

The Trump administration is betting big on AI as a strategy for education, making federal dollars available. Some education researchers are enthusiastic, too, boosted by $26 million in funding from Digital Promise intended to help narrow the distance between best practices in education and AI.

These approaches are early, and the tools still need to be built and tested. Nevertheless, some argue it’s already paying off.

A randomized controlled trial conducted by Eedi Labs and Google DeepMind found that math tutoring that incorporated Eedi’s AI platform boosted student learning in 11- and 12-year-olds in the U.K. The study focused on the company’s “human in the loop” approach — using human-supervised AI tutoring — currently used in some classrooms. Within the U.S., the platform is used by 4,955 students across 39 K-12 schools, colleges and tutoring networks. Eedi Labs says it is conducting another randomized controlled trial in 2026 with Imagine Learning in the U.S.

Others have embraced a similar approach. For example, Teaching Lab is actively involved in work about AI for use in classrooms, with Johnson telling EdSurge that they are testing a model also based on data borrowed from Eedi and a company called Anet. That data model project is currently being tested with students, according to Johnson.

Several of these efforts require sharing tech insights and data. That runs counter to many companies’ typical practices for protecting intellectual property, according to the Eedi Labs CEO. But he thinks the practice will pay off. “We are very keen to be at the cutting edge, that means engaging with researchers, and we see sharing some data as a really great way to do this,” he wrote in an email.

Still, once the algorithms are trained, everyone seems to agree that turning them into classroom success is another challenge.

What might that look like?

The data infrastructure can be built into products that let teachers modify curriculum based on the context of the classroom, Johnson says. If you can connect the infrastructure to student data and allow it to make inferences, it could provide teachers with useful advice, she adds.

Meg Benner, managing director of The Learning Agency, the organization that ran the misconceptions contest, suggests that this could be used to feed teachers information about which misconceptions their students are making, or to even trigger a chatbot-style lesson helping them to overcome those misconceptions.

It’s an interesting research project, says Johnson, of Teaching Lab. But once this model is fully built, it will still need to be tested to see if refined diagnosis actually leads to better interventions in front of teachers and students, she adds.

Some are skeptical, suspecting that the products companies build from these models may not enhance learning all that much. After all, having a chatbot-style tutoring system conclude that students are conducting additive reasoning when multiplicative reasoning is required may not transform math instruction. Indeed, some research has shown that students don’t respond well to chatbots. For instance, the famous 5 percent problem revealed that only the top students usually see results from most digital math programs. Instead, teachers have to handle misconceptions as they come up, some argue. That means students having an experience or conversation that exposes the limits of old ideas and the power of clear thinking. The challenge, then, is figuring out how to get the insights from the computer and machine analysis back out to the students.

But others think that the moment is exciting, even if there’s some hype.

“I’m cautiously optimistic,” says Malamut, the postdoctoral researcher at Stanford. Formative assessments and diagnostic tools exist now, but they are not automated, he says. True, the assessment data that’s easy to collect isn’t always the most helpful to teachers. But if used correctly, AI tools could possibly close that gap.

Atomically Thin Materials Significantly Shrink Qubits

Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges two critical issues stand out: miniaturization and qubit quality.

IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.

Now researchers at MIT have been able both to reduce the size of the qubits and to do so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.

“We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”

The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.

Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).

Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. (Nathan Fiske/MIT)

In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another.

As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.
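
A rough parallel-plate relation helps explain why the geometry matters so much here. The coplanar layout is more complicated than this, and the reasoning below is our own back-of-the-envelope illustration rather than analysis from the MIT paper.

```latex
% Parallel-plate capacitance, and the plate area A needed for a target C
% (illustrative reasoning only, not figures from the MIT work):
C = \frac{\varepsilon_0 \varepsilon_r A}{d}
\qquad\Longrightarrow\qquad
A = \frac{C\, d}{\varepsilon_0 \varepsilon_r}
% For a fixed target capacitance C, the required plate area A scales linearly
% with the dielectric thickness d. A nanometre-scale hBN sandwich therefore
% needs a plate area orders of magnitude smaller than the roughly
% 100-by-100-micrometre plates of the coplanar design described above.
```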

In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.

“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics.

On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.

While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.

“What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.”

This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.

“The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.

Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.

How AI Will Change Chip Design

The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.

Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version.

But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.

How is AI currently being used to design the next generation of chips?

Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.

Heather Gorr (MathWorks)

Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.

What are the benefits of using AI for chip design?

Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.
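
As a generic illustration of that workflow (not MathWorks tooling, and with an invented stand-in for the expensive physics-based model), a cheap surrogate can be fitted to a handful of costly simulation runs and then swept at scale:

```python
# Toy surrogate-model workflow: fit a cheap polynomial to a few "expensive"
# simulation samples, then run the parameter sweep / Monte Carlo step on the
# surrogate. expensive_simulation() is an invented stand-in for a real solver.
import numpy as np

def expensive_simulation(x):
    # Placeholder for a slow physics-based model of some figure of merit.
    return np.sin(3 * x) + 0.5 * x**2

# 1. Evaluate the expensive model at a small number of design points.
design_points = np.linspace(0.0, 2.0, 15)
observations = expensive_simulation(design_points)

# 2. Fit a cheap surrogate (here, a degree-6 polynomial).
surrogate = np.poly1d(np.polyfit(design_points, observations, deg=6))

# 3. Sweep the surrogate: a hundred thousand evaluations are now cheap.
rng = np.random.default_rng(0)
candidates = rng.uniform(0.0, 2.0, size=100_000)
best = candidates[np.argmin(surrogate(candidates))]

print(f"surrogate-predicted optimum near x = {best:.3f}")
print(f"expensive model at that point: {expensive_simulation(best):.3f}")
```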

So it’s like having a digital twin in a sense?

Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you can tweak and tune, trying different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.

So, it’s going to be more efficient and, as you said, cheaper?

Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.

We’ve talked about the benefits. How about the drawbacks?

Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years.

Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together.

One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.

How can engineers use AI to better prepare and extract insights from hardware or sensor data?

Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.
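
For a flavour of that kind of pre-processing (the signal, rates and names below are assumed, not tied to any particular product), one stream can be resampled onto another's rate and inspected in the frequency domain:

```python
# Minimal sketch: resample one sensor stream to another's rate, then look at
# the frequency domain. Signal content and sample rates are invented.
import numpy as np
from scipy.signal import resample

fs_fast, fs_slow, duration = 1000, 250, 2.0  # Hz, Hz, seconds
t_fast = np.arange(0, duration, 1 / fs_fast)

# A 50 Hz component buried in noise, as seen by the faster sensor.
rng = np.random.default_rng(0)
fast_sensor = np.sin(2 * np.pi * 50 * t_fast) + 0.3 * rng.standard_normal(t_fast.size)

# Downsample onto the slower sensor's grid so the two streams line up.
fast_on_slow_grid = resample(fast_sensor, int(duration * fs_slow))

# Frequency-domain view: the dominant peak should sit near 50 Hz.
spectrum = np.abs(np.fft.rfft(fast_on_slow_grid))
freqs = np.fft.rfftfreq(fast_on_slow_grid.size, d=1 / fs_slow)
print(f"dominant frequency: {freqs[np.argmax(spectrum[1:]) + 1]:.1f} Hz")
```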

One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.

What should engineers and designers consider when using AI for chip design?

Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.

How do you think AI will affect chip designers’ jobs?

Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.

How do you envision the future of AI and chip design?

Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.

Nvidia rival Cerebras raises $1bn at $23bn valuation

Cerebras raised $1.1bn in a previous round last September at an $8.1bn post-money valuation.

Cerebras Systems, the AI chipmaker aiming to rival Nvidia, has raised $1bn in a Series H round led by Tiger Global with participation from AMD. The raise values the company at around $23bn, nearly triple its valuation from a little over four months ago.

Other backers in this round include Benchmark; Fidelity Management & Research Company; Atreides Management; Alpha Wave Global; Altimeter; Coatue; and 1789 Capital, among others.

The new round comes after Cerebras raised $1.1bn last September at an $8.1bn post-money valuation backed by several of the same investors.

Just days later, the company withdrew from a planned initial public offering (IPO) without providing an official reason. At the time of the IPO filing in 2024, there was criticism around its heavy reliance on a single United Arab Emirates-based customer, the Microsoft-backed G42.

Cerebras said it still intends to go public as soon as possible.

The recent raise better positions the company to compete with global AI chip leader Nvidia. Cerebras claims that it builds the “fastest AI infrastructure in the world” and company CEO Andrew Feldman has also gone on record to say that his hardware runs AI models multiple times faster than Nvidia’s.

Cerebras is behind WSE-3, touted as the “largest” AI chip ever built, with 19 times more transistors and 28 times more compute than the Nvidia B200, according to the company.

The company has a close connection with OpenAI, according to statements made by both Feldman and OpenAI chief Sam Altman – who happens to be an early investor in the chipmaker. Last month, the two announced a partnership to deploy 750MW of Cerebras’s wafer-scale systems to make OpenAI’s chatbots faster.

OpenAI – a voracious user of Nvidia’s AI technology – has been in search of alternatives, although that’s not to say that OpenAI is backing down from using Nvidia technology in the future.

Last year, OpenAI drew up a 6GW agreement with AMD to power its AI infrastructure. The first 1GW deployment of AMD Instinct MI450 GPUs is set to begin in the second half of 2026.

At the time of the announcement, Altman said that the deal was “incremental” to OpenAI’s work with Nvidia. “We plan to increase our Nvidia purchasing over time”, he added.

Medtronic and University of Galway open device prototype hub

The facility is part of a five-year, €5m signature innovation partnership between Medtronic and the university.

US and Irish medical device company Medtronic and the University of Galway have launched their Medical Device Prototype Hub, a specialist facility designed to support the medtech ecosystem, STEM engagement and research.

Development of the hub, which belongs to the university’s new Technology Services Directorate, is part of a five-year, €5m signature innovation partnership between Medtronic and the university. 

Professor David Burn, the president of the university, said: “The launch of the Medical Device Prototype Hub at University of Galway marks a hugely significant milestone in our signature partnership with Medtronic, but it also sends a strong message to all those in the sector and all those who are driving innovation.

“University of Galway is creating the ecosystem in which our partners in research and innovation can thrive. We look forward to celebrating the breakthroughs and successes that this initiative enables.”

The Medical Device Prototype Hub forms part of the Institute for Health Discovery and Innovation, which was established at the university in 2024.

It will be further supported via collaborations with government agencies and industry leaders, aiming to create a collaborative environment that promotes innovation and regional growth in life sciences and medical technologies. 

The university said that the hub has a range of expert staff to facilitate concept creation, development and manufacturing of innovative medical device prototypes.

It offers a suite of services to support early-stage medical device innovation – for example, virtual and physical prototyping – that enables rapid design iteration through computer-aided design, modelling and simulation.

“The Technology Services Directorate brings together key research facilities that support fundamental research at University of Galway,” said Aoife Duffy, the head of the directorate. 

“It aims to advance our research excellence by bringing together state-of-the-art core facilities and making strategic decisions on infrastructure and investment. The new prototype hub significantly enhances the innovation pathway available for the university research community and wider, and we look forward to working with Medtronic on this partnership.” 

Ronan Rogers, senior R&D director at Medtronic, added: “Today’s launch of the Medical Device Prototype Hub represents an exciting next step in our long‑standing partnership with University of Galway. Medtronic has deep roots in the west of Ireland, and this facility strengthens a shared commitment to advancing research, accelerating innovation and developing the next generation of medical technologies. 

“We are proud to invest in an ecosystem that not only drives technological progress but also supports talent development. This hub will unlock new avenues for discovery and accelerate the path from promising ideas to real‑world medical solutions for patients.”

Just last week (27 January), two University of Galway projects won proof-of-concept grants from the European Research Council. One of the winning Galway projects is called Concept-AM and is being led by Prof Ted Vaughan, who is also involved with the new hub.

Concept-AM aims to advance software that enables engineers to design lighter, stronger and more efficient components optimised for 3D printing across biomedical, automotive and aerospace applications, creating complex and lightweight parts with less material waste.
