
Apple iPhone 17 vs iPhone 16: Should you upgrade?

Need a new iPhone but aren’t sure whether to opt for the latest iPhone 17, or to save a bit of money and get 2024’s iPhone 16? You’ve come to the right place.

Not everyone needs the latest flagship smartphone, and opting for an older one is a great way to save money, but many worry that doing so means too much of a sacrifice. After all, smartphones are ingrained in our everyday lives, so they need to be reliable.

With this in mind, we’ve compared our reviews of the iPhone 17 to the iPhone 16 so you can decide which handset to go for.

Otherwise, make sure you visit our list of the best smartphones and, if you aren’t yet sold on an iPhone, our best Android phones will offer our favourite alternatives.

Price and Availability

The iPhone 17 has a starting RRP of £799/$799, which is unsurprisingly more expensive than its predecessor. However, it’s worth noting that this price is for the 256GB model.

In comparison, while the iPhone 16 starts at a cheaper £699/$699, this is for a much smaller 128GB model. In fact, if you want to upgrade to 256GB, its RRP rises above the iPhone 17’s, to £899/$899.

Design

  • Both the iPhone 17 and iPhone 16 share the same design
  • iPhone 17 is fitted with Ceramic Shield 2
  • Both include the Action and Camera Control buttons

Other than their colour selection, and the iPhone 17 being slightly bigger, there isn’t much difference between the two iPhones’ designs. Both sport the same flat-edged, rounded-corner design that was first introduced with the iPhone 12 – and this certainly isn’t a bad thing. Even so, there are a few tweaks with the iPhone 17 that, although not immediately visible, help make the handset feel more premium.

Firstly, the iPhone 17 sports Apple’s Ceramic Shield 2 protection on both the front and back, whereas the iPhone 16 is fitted with the older Ceramic Shield. Apple claims that Ceramic Shield 2 is more durable than its predecessor and should prevent micro-scratches from forming. Admittedly, we didn’t put the iPhone 17 through particularly wild tests to determine whether this is true, but we still found that the panels remained scratch-free after prolonged use.

Otherwise, both the iPhone 17 and 16 have an IP68 rating and include the reprogrammable Action and Camera Control buttons.

Winner: iPhone 17

Screen

  • iPhone 17 benefits from a 120Hz refresh rate while the iPhone 16 maxes out at 60Hz
  • The iPhone 17’s screen is slightly bigger at 6.3-inches
  • Both are OLED displays

Apple has finally followed the lead of the best Android phones (and even the majority of the best mid-range phones) and introduced a 120Hz refresh rate to the iPhone 17. Dubbed ProMotion, the LTPO-enabled technology was previously reserved for the Pro models, which was a huge bugbear for many. The iPhone 16, meanwhile, sports just a 60Hz refresh rate.

iPhone 17. Image Credit (Trusted Reviews)

As expected, the inclusion of ProMotion makes the iPhone 17 feel impressively smooth in both everyday use and gaming, especially in comparison to the iPhone 16. In fact, we hailed the iPhone 17 as having “the best screen yet on an entry-level iPhone”.

Otherwise, the iPhone 17’s screen is slightly bigger than the iPhone 16’s, at 6.3 inches compared to 6.1 inches. Even so, both panels are OLED and support HDR10 and Dolby Vision content.

Winner: iPhone 17

Camera

  • Neither handset has a dedicated zoom lens, but both include a 2x in-sensor zoom instead
  • Both have main and ultrawide rear lenses, but the iPhone 17’s are both 48MP
  • The iPhone 17 has an upgraded 18MP square selfie camera

Apple made many thoughtful improvements with the iPhone 17’s camera hardware. While we’d still recommend opting for the iPhone 17 Pro if you’re serious about photography, the iPhone 17 is a brilliant choice for most casual snappers.

While both the iPhone 16 and iPhone 17 are equipped with a 48MP main lens that delivers consistently sharp and detailed shots, the iPhone 17 benefits from a 48MP ultrawide whereas the iPhone 16’s is just 12MP. The difference, perhaps unsurprisingly, is enormous: we found the iPhone 17 delivers a big jump in overall resolution and better low-light shots too.

Captured on iPhone 17. Image Credit (Trusted Reviews)

One area which lets both the iPhone 17 and iPhone 16 down is the lack of a dedicated zoom lens like the one found on their Pro alternatives. Even so, both handsets are fitted with a 2x in-sensor zoom instead, which allows you to get closer without sacrificing quality and detail.

While the iPhone 16’s 12MP front lens is undoubtedly decent, the iPhone 17 boasts a welcome upgrade. Not only is the front camera 18MP, but it’s now a square sensor, which allows you to shoot portrait and landscape shots without actually having to rotate your phone. It may sound small, but it’s a seriously brilliant tweak.

Winner: iPhone 17

Performance

  • A19 vs A18 chips
  • The iPhone 17’s 120Hz refresh rate makes gaming and scrolling feel smoother
  • Apple has ditched the original 128GB storage option for 256GB with the iPhone 17

Although neither the iPhone 17 nor the iPhone 16 is quite as powerful as its Pro sibling, both offer brilliant performance that’s enough for most users. In fact, unless you’re playing high-res AAA titles or editing multiple 4K video streams in LumaFusion, you’re unlikely to notice a difference.

Powering the iPhone 17 is Apple’s A19 chip which, when paired with the 120Hz refresh rate, ensures apps open instantly, scrolling feels smooth and you can comfortably achieve high frame rates in games too.

iPhone 16. Image Credit (Trusted Reviews)

The iPhone 16, meanwhile, runs on Apple’s A18 chip and remains a capable smartphone – even over a year on. In fact, we found in our benchmarking tests that it doesn’t fall that far behind the iPhone 16 Pro Max. The biggest nuisance with the iPhone 16 is that it caps out at a 60Hz refresh rate. Even so, if you’re coming from an even older phone, you’re unlikely to notice this too much.

Winner: iPhone 17

Software

  • Both support iOS 26
  • New Liquid Glass interface is easy to use and, we think, looks great
  • Apple Intelligence remains an afterthought

When the iPhone 16 launched back in 2024, arguably one of the reasons to buy the phone was the promise of the vast Apple Intelligence toolkit. Unfortunately, nearly two years on, Apple Intelligence still hasn’t quite come into its own.

iPhone 17 Siri. Image Credit (Trusted Reviews)

Sure, Writing Tools is somewhat useful and Image Playground is fun for a while, but generally the AI toolkit fails to impress – especially when Gemini really does help to enhance the best Android phones. Essentially, with both the iPhone 17 and iPhone 16, we wouldn’t recommend buying either purely for Apple Intelligence. 

Otherwise, both iPhones support iOS 26. Overall we don’t have many qualms with iOS 26 and find the software is polished, easy to use and feels familiar, even with the new Liquid Glass design.

Winner: Tie

Battery

  • Both offer all-day battery life
  • iPhone 17 benefits from faster 40W wired charging
  • Both support a max 25W wireless charging

Apple has never boasted a strong reputation for battery life, especially when compared to many of the best Android phones, which sport seriously mighty cells. Even so, we found that both the iPhone 17 and iPhone 16 are solid all-day handsets, and we easily ended days with some charge remaining.

Plus, if you want to top up during the day then it’s good to know both support wireless charging too.

However, the iPhone 17 benefits from faster 40W wired charging, which we found took around 85 minutes to reach 100%. In comparison, the iPhone 16 supports slightly slower speeds of 30W which took around 100 minutes to fully recharge.

Winner: iPhone 17

Verdict

With a 120Hz refresh rate, powerful processor and improved camera hardware, the iPhone 17 is an easy recommendation for many – especially if you’re coming from an older iPhone.

Having said that, if you aren’t too fussed about having the absolute latest technologies and want to get a new-ish iPhone but without the high price tag, then the iPhone 16 remains a solid choice.

Atomically Thin Materials Significantly Shrink Qubits

Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges two critical issues stand out: miniaturization and qubit quality.

IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.

Now researchers at MIT have been able to reduce the size of the qubits, and to do so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.

“We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”

The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.

Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).

Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. Image: Nathan Fiske/MIT

In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another.

As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.
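
To get a rough sense of why a thin hBN dielectric shrinks the footprint so dramatically, here is a back-of-the-envelope sketch using the parallel-plate capacitance formula C = ε₀εᵣA/d. The target capacitance (~100 fF), hBN permittivity (εᵣ ≈ 3) and dielectric thickness (~10 nm) are illustrative assumptions for the sake of the arithmetic, not figures reported by the MIT team.

```python
import math

# Rough sketch (illustrative values only, not measurements from the MIT work):
# estimate how small a parallel-plate capacitor with a thin hBN dielectric could be
# compared with the ~100 x 100 micrometer coplanar design described above.

EPS_0 = 8.854e-12          # vacuum permittivity, F/m
EPS_R_HBN = 3.0            # assumed out-of-plane relative permittivity of hBN
THICKNESS = 10e-9          # assumed hBN stack thickness (a few dozen monolayers), m
TARGET_C = 100e-15         # assumed qubit shunt capacitance target, ~100 fF

# Parallel-plate capacitance: C = eps0 * eps_r * A / d  =>  A = C * d / (eps0 * eps_r)
area = TARGET_C * THICKNESS / (EPS_0 * EPS_R_HBN)   # m^2
side = math.sqrt(area) * 1e6                        # square-plate side, micrometers

coplanar_area_um2 = 100.0 * 100.0                   # coplanar plate footprint, um^2
ratio = coplanar_area_um2 / (area * 1e12)

print(f"Required plate area: {area * 1e12:.1f} um^2 (~{side:.1f} um per side)")
print(f"Footprint reduction vs. 100 x 100 um coplanar plates: ~{ratio:.0f}x")
```

With these assumed numbers the plates shrink to a few micrometers per side, which is consistent in spirit with the factor-of-100 density improvement described above.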

In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.

“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics.

On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.

While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.

“What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.”

This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.

“The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.

Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.

How AI Will Change Chip Design

The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.

Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version.

But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.

How is AI currently being used to design the next generation of chips?

Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.

Heather Gorr. Image: MathWorks

Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.

What are the benefits of using AI for chip design?

Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.
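
As a concrete illustration of that workflow, the sketch below fits a cheap polynomial surrogate to a handful of evaluations of a deliberately toy “physics” function and then runs a Monte Carlo sweep on the surrogate. The function, sample counts and polynomial degree are arbitrary stand-ins, not anything MathWorks or Gorr specified.

```python
import numpy as np

# Minimal sketch of the surrogate-model workflow described above: run the expensive
# physics-based model a handful of times, fit a cheap surrogate to those results,
# then do the Monte Carlo parameter sweep on the surrogate. The "physics" function
# here is a made-up stand-in, not a real chip or device model.

def expensive_physics_model(x):
    """Pretend this takes minutes per call (e.g. a field solver or SPICE deck)."""
    return np.sin(3 * x) + 0.3 * x**2

# 1. A small number of expensive evaluations across the design parameter range.
x_train = np.linspace(0.0, 2.0, 8)
y_train = expensive_physics_model(x_train)

# 2. Fit a cheap surrogate (a degree-5 polynomial; a Gaussian process would also work).
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=5))

# 3. Monte Carlo sweep on the surrogate: thousands of samples are now essentially free.
rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 2.0, size=100_000)
predictions = surrogate(samples)

print(f"Surrogate mean response: {predictions.mean():.3f}")
print(f"Surrogate 95th percentile: {np.percentile(predictions, 95):.3f}")
```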

So it’s like having a digital twin in a sense?

Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let [you] sweep through all of those different situations and come up with a better design in the end.

So, it’s going to be more efficient and, as you said, cheaper?

Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.

We’ve talked about the benefits. How about the drawbacks?

Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years.

Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together.

One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.

How can engineers use AI to better prepare and extract insights from hardware or sensor data?

Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.

One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.
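
To make the synchronization and frequency-domain steps concrete, here is a minimal sketch using synthetic data: two sensor streams sampled at different rates are interpolated onto a shared time base, and an FFT finds the dominant frequency in one channel. The sample rates and signal content are invented purely for illustration.

```python
import numpy as np

# Small sketch of the sensor-data prep discussed above: synchronize two streams sampled
# at different rates onto a common time base, then look at the frequency domain.
# The signals are synthetic; in practice they would come from test or fab equipment.

fs_a, fs_b = 1000.0, 640.0                      # two sensors with different sample rates (Hz)
t_a = np.arange(0, 1.0, 1 / fs_a)
t_b = np.arange(0, 1.0, 1 / fs_b)
sig_a = np.sin(2 * np.pi * 50 * t_a)            # 50 Hz component
sig_b = np.sin(2 * np.pi * 120 * t_b) + 0.1 * np.random.default_rng(1).normal(size=t_b.size)

# Resample both channels onto one shared time base (simple linear interpolation).
fs_common = 2000.0
t_common = np.arange(0, 1.0, 1 / fs_common)
a_sync = np.interp(t_common, t_a, sig_a)
b_sync = np.interp(t_common, t_b, sig_b)

# Inspect the frequency domain of one synchronized channel.
spectrum = np.abs(np.fft.rfft(b_sync))
freqs = np.fft.rfftfreq(b_sync.size, d=1 / fs_common)
print(f"Dominant frequency in sensor B: {freqs[np.argmax(spectrum)]:.1f} Hz")
```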

What should engineers and designers consider when using AI for chip design?

Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.

How do you think AI will affect chip designers’ jobs?

Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.

How do you envision the future of AI and chip design?

Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.

Nvidia rival Cerebras raises $1bn at $23bn valuation

Cerebras raised $1.1bn in a previous round last September at an $8.1bn post-money valuation.

Cerebras Systems, the AI chipmaker aiming to rival Nvidia, has raised $1bn in a Series H round led by Tiger Global with participation from AMD. The raise values the company at around $23bn, nearly triple its valuation from a little over four months ago.

Other backers in this round include Benchmark; Fidelity Management & Research Company; Atreides Management; Alpha Wave Global; Altimeter; Coatue; and 1789 Capital, among others.

The new round comes after Cerebras raised $1.1bn last September at an $8.1bn post-money valuation backed by several of the same investors.

Just days later, the company withdrew from a planned initial public offering (IPO) without providing an official reason. At the time of the IPO filing in 2024, there was criticism around its heavy reliance on a single United Arab Emirates-based customer, the Microsoft-backed G42.

Cerebras said it still intends to go public via an IPO as soon as possible.

The recent raise better positions the company to compete with global AI chip leader Nvidia. Cerebras claims that it builds the “fastest AI infrastructure in the world”, and company CEO Andrew Feldman has also gone on record to say that his hardware runs AI models multiple times faster than Nvidia’s.

Cerebras is behind WSE-3, touted to be the “largest” AI chip ever built, with 19 times more transistors and 28 times more compute than the Nvidia B200, according to the company.

The company has a close connection with OpenAI, according to statements made by both Feldman and OpenAI chief Sam Altman – who happens to be an early investor in the chipmaker. Last month, the two announced a partnership to deploy 750MW of Cerebras’s wafer-scale systems to make OpenAI’s chatbots faster.

OpenAI – a voracious user of Nvidia’s AI technology – has been in search of alternatives, although that’s not to say that OpenAI is backing down from using Nvidia technology in the future.

Last year, OpenAI drew up a 6GW agreement with AMD to power its AI infrastructure. The first 1GW deployment of AMD Instinct MI450 GPUs is set to begin in the second half of 2026.

At the time of the announcement, Altman said that the deal was “incremental” to OpenAI’s work with Nvidia. “We plan to increase our Nvidia purchasing over time”, he added.

Medtronic and University of Galway open device prototype hub

The facility is part of a five-year, €5m signature innovation partnership between Medtronic and the university.

US and Irish medical device company Medtronic and the University of Galway have launched their Medical Device Prototype Hub, a specialist facility designed to support the medtech ecosystem, STEM engagement and research.

Development of the hub, which belongs to the university’s new Technology Services Directorate, is part of a five-year, €5m signature innovation partnership between Medtronic and the university. 

Professor David Burn, the president of the university, said: “The launch of the Medical Device Prototype Hub at University of Galway marks a hugely significant milestone in our signature partnership with Medtronic, but it also sends a strong message to all those in the sector and all those who are driving innovation.

“University of Galway is creating the ecosystem in which our partners in research and innovation can thrive. We look forward to celebrating the breakthroughs and successes that this initiative enables.”

The Medical Device Prototype Hub forms part of the Institute for Health Discovery and Innovation, which was established at the university in 2024.

It will be further supported via collaborations with government agencies and industry leaders, aiming to create a collaborative environment that promotes innovation and regional growth in life sciences and medical technologies. 

The university said that the hub has a range of expert staff to facilitate concept creation, development and manufacturing of innovative medical device prototypes.

It offers a suite of services to support early-stage medical device innovation – for example, virtual and physical prototyping – that enables rapid design iteration through computer aided design, modelling and simulation.  

“The Technology Services Directorate brings together key research facilities that support fundamental research at University of Galway,” said Aoife Duffy, the head of the directorate. 

“It aims to advance our research excellence by bringing together state-of-the-art core facilities and making strategic decisions on infrastructure and investment. The new prototype hub significantly enhances the innovation pathway available for the university research community and wider, and we look forward to working with Medtronic on this partnership.” 

Ronan Rogers, senior R&D director at Medtronic, added: “Today’s launch of the Medical Device Prototype Hub represents an exciting next step in our long‑standing partnership with University of Galway. Medtronic has deep roots in the west of Ireland, and this facility strengthens a shared commitment to advancing research, accelerating innovation and developing the next generation of medical technologies. 

“We are proud to invest in an ecosystem that not only drives technological progress but also supports talent development. This hub will unlock new avenues for discovery and accelerate the path from promising ideas to real‑world medical solutions for patients.”

Just last week (27 January), two University of Galway projects won proof-of-concept grants from the European Research Council. One of the winning Galway projects is called Concept-AM and is being led by Prof Ted Vaughan, who is also involved with the new hub.

Concept-AM aims to advance software that enables engineers to design lighter, stronger and more efficient components optimised for 3D printing across biomedical, automotive and aerospace applications, creating complex and lightweight parts with less material waste.

Shokz OpenFit Pro, Nex Playground, Sony A7 V and more

We’re starting to hit our stride in 2026. Now that February is here, our reviews team is flush with new devices to test, which means you’ve got a lot to catch up on if you haven’t been following along. Read on for a roundup of the most compelling new gear we’ve tested recently from gaming, PCs, cameras and more.

Nex Playground

The Nex Playground brings motion-tracked games to the entire family. Consider it the best of the Xbox Kinect in a tiny box.

Pros
  • Fun core titles
  • Solid motion-tracking
  • Well-designed hardware and UI
  • Large library of games
  • Works offline
Cons
  • Requires an ongoing subscription to access most games
  • Needs large open space for play

If you still have a fondness for the Xbox Kinect, the Nex Playground might be right up your alley. Senior reporter Devindra Hardawar recently put the tiny box through its paces and found an active gaming experience that’s fun for the whole family. “While I have some concerns about the company’s subscription model, Nex has accomplished a rare feat: It developed a simple box that makes it easy for your entire family to jump into genuinely innovative games and experiences,” he wrote.

MSI’s Prestige 14 Flip AI+

MSI’s Prestige 14 Flip AI+ is a remarkably powerful ultraportable, thanks to Intel’s Panther Lake chips. But it’s held back by a clunky trackpad and weak keyboard.

Pros
  • Excellent CPU performance
  • Solid gaming support
  • Bold OLED screen
  • Tons of ports
  • Relatively affordable
Cons
  • Awful mechanical trackpad
  • Dull-feeling keyboard
  • Display is limited to 60Hz

Devindra also tested MSI’s latest laptop, the powerful Prestige 14 Flip AI+. While the machine got high marks for its performance, display and connectivity, he noted that the overall experience is hindered by a subpar keyboard and a truly awful trackpad. “As one of the earliest Panther Lake laptops on the market, the $1,299 Prestige 14 Flip AI+ is a solid machine, if you’re willing to overlook its touchpad flaws,” he explained. “More than anything though, the Prestige 14 makes me excited to see what other PC makers offer with Intel’s new chips.”

Shokz OpenFit Pro

Finally, a set of open earbuds that actually sound good and provide noticeable ambient noise reduction.

Pros
  • Effective noise reduction
  • Comfy fit
  • Great sound for open earbuds
  • Dolby Atmos support
Cons
  • Sound quality varies with ear shape
  • Over-ear hook isn’t for everyone
  • Noise reduction isn’t as effective as ANC

Fresh off of its Best of CES selection, I conducted a full review of the OpenFit Pro earbuds from Shokz. I continue to be impressed by the earbuds’ ability to reduce ambient noise while keeping your ears open. And the overall sound quality is excellent for a product that sits outside of your ears.

Sony A7 V

With a new partially-stacked 33MP sensor, Sony’s A7 V offers speed, autofocus accuracy and the best image quality in its class.

Pros
  • Fast shooting speeds
  • Quick and accurate autofocus
  • Outstanding photo quality
  • Good video stabilization
Cons
  • Video lags behind rivals
  • Uncomfortable to hold for long periods

Contributing reporter Steve Dent has been busy testing cameras to start the year. This week he added the Sony A7 V to the list, noting the excellent photo quality and accurate autofocus. “The A7 V is an incredible camera for photography, with speeds, autofocus accuracy and image quality ahead of rivals, including the Canon R6 III, Panasonic S1 II and Nikon Z6 III,” he said. “However, Sony isn’t keeping up with those models for video.”

Apple AirTag (2026)

Apple has improved its Bluetooth tracker in practically every way, making it louder and extending its detection range.

Pros
  • Precise Finding is far more useful
  • Louder and easier to hear
  • Same price as the original AirTag
Cons
  • Still lacks a keyring hole
  • Apple’s AirTag accessories are too expensive

Our first Editors’ Choice device of 2026 is Apple’s updated AirTag. All of the upgrades lead to a better overall item tracker, according to UK bureau chief Mat Smith. “There’s no doubt the second-gen AirTags are improved, and thankfully, upgrading to the new capabilities doesn’t come at too steep a cost,” he concluded.

Daily Deal: The Ultimate AWS Data Master Class Bundle

from the good-deals-on-cool-stuff dept

The Ultimate AWS Data Master Class Bundle has 9 courses to get you up to speed on Amazon Web Services. The courses cover AWS, DevOps, Kubernetes, Mesosphere DC/OS, AWS Redshift, and more. It’s on sale for $40.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Filed Under: daily deal

OpenAI launches centralized agent platform as enterprises push for multi-vendor flexibility

OpenAI launched Frontier, a platform for building and governing enterprise AI agents, as companies increasingly question whether to commit to single-vendor systems or maintain multi-model flexibility.

The platform offers integrated tools for agent execution, evaluation, and governance in one place. But Frontier also reflects OpenAI’s push into enterprise AI at a moment when organizations are actively moving toward multi-vendor architectures — creating tension between OpenAI’s centralized approach and what enterprises say they want.

Tatyana Mamut, CEO of the agent observability company Wayfound, told VentureBeat that enterprises don’t want to be locked into a single vendor or platform because AI strategies are ever-evolving. 

“They’re not ready to fully commit. Everybody I talk to knows that eventually they’ll move to a one-size-fits-all solution, but right now, things are moving too fast for us to commit,” Mamut said. “This is the reason why most AI contracts are not traditional SaaS contracts; nobody is signing multi-year contracts anymore because if something great comes out next month, I need to be able to pivot, and I can’t be locked in.”

How Frontier compares to AWS Bedrock

OpenAI is not the first to offer an end-to-end platform for building, prototyping, testing, deploying, and monitoring agents. AWS launched Bedrock AgentCore with the idea that there will be enterprise customers who don’t want to assemble an extensive collection of tools and platforms for their agentic AI projects. 

However, AWS offers a significant advantage: access to multiple LLMs for building agents. Enterprises can choose a hybrid system in which an agent selects the best LLM for each task. OpenAI has not made it clear if it will open Frontier to models and tools from other vendors.
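
As a sketch of what such a hybrid setup can look like in practice, the snippet below routes each task type to a different model before any provider SDK is called. The task categories, model names and routing rules are hypothetical placeholders, not the actual API of Bedrock, Frontier or any vendor.

```python
# Hypothetical sketch of the "hybrid" routing idea: a thin layer that picks a model
# per task before calling whatever SDK the platform exposes. All names below are
# placeholders invented for illustration.

TASK_MODEL_MAP = {
    "code_generation": "vendor-a/coding-model",
    "document_summarization": "vendor-b/long-context-model",
    "customer_support": "vendor-c/fast-cheap-model",
}

DEFAULT_MODEL = "vendor-a/general-model"

def pick_model(task_type: str) -> str:
    """Return the model an agent should use for this task type."""
    return TASK_MODEL_MAP.get(task_type, DEFAULT_MODEL)

def run_task(task_type: str, prompt: str) -> str:
    model = pick_model(task_type)
    # In a real system this would call the chosen provider's SDK; here we just report it.
    return f"[would send prompt to {model}] {prompt[:40]}..."

if __name__ == "__main__":
    print(run_task("code_generation", "Write a unit test for the billing module"))
    print(run_task("market_research", "Summarize competitor pricing changes"))
```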

OpenAI did not say whether Frontier users can bring any third-party tools they already use to the platform, and it didn’t comment on why it chose to release Frontier now when enterprises are considering more hybrid systems.

But OpenAI is working with companies including Clay, Abridge, Harvey, Decagon, Ambience, and Sierra to design solutions within Frontier.

What is Frontier

Frontier is a single platform that offers access to different enterprise-grade tools from OpenAI. The company told VentureBeat that Frontier will not replace offerings such as the Agents SDK, AgentKit, or its suite of APIs. 

OpenAI said Frontier helps bring context, agent execution, and evaluation into a single platform rather than multiple systems and tools.

“Frontier gives agents the same skills people need to succeed at work: shared context, onboarding, hands-on learning with feedback, and clear permissions and boundaries. That’s how teams move beyond isolated use cases to AI co-workers that work across the business,” OpenAI said in a blog post.

Users can connect their data sources, CRM tools, and other internal applications directly to Frontier, effectively creating a semantic layer that normalizes permissions and retrieval logic for agents built on the platform to pull information from. Frontier has an agent execution environment, which can run on local environments, cloud infrastructures, or “OpenAI-hosted runtimes without forcing teams to reinvent how work gets done.”
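
Frontier’s exact interfaces aren’t public, but the general idea of a semantic layer that normalizes permissions and retrieval can be sketched generically: every agent query passes through one gate that checks which sources the agent may read before anything is retrieved. All names in the sketch below are hypothetical and purely illustrative.

```python
from dataclasses import dataclass

# Illustrative sketch (not OpenAI's actual Frontier API) of a permission-aware
# retrieval layer: one gate checks an agent's permissions before touching any source.

@dataclass
class Document:
    source: str          # e.g. "crm", "wiki", "finance_db"
    text: str

class PermissionedRetriever:
    def __init__(self, documents, agent_permissions):
        self.documents = documents
        # Maps an agent name to the set of sources it may read from.
        self.agent_permissions = agent_permissions

    def retrieve(self, agent_name: str, query: str, limit: int = 3):
        allowed = self.agent_permissions.get(agent_name, set())
        # Naive keyword match stands in for real semantic search / embeddings.
        hits = [
            doc for doc in self.documents
            if doc.source in allowed and query.lower() in doc.text.lower()
        ]
        return hits[:limit]

docs = [
    Document("crm", "Acme renewal is due in March; contact is J. Rivera."),
    Document("finance_db", "Acme annual contract value: $120k."),
]
retriever = PermissionedRetriever(docs, {"support_agent": {"crm"}})

# The support agent sees the CRM note but never the finance record.
print(retriever.retrieve("support_agent", "acme"))
```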

Built-in evaluation structures, security, and governance dashboards allow teams to monitor agent behavior and performance. These give organizations visibility into their agents’ success rates, accuracy, and latency. OpenAI said Frontier incorporates its enterprise-grade data security layer, including the option for companies to choose where to store their data at rest.

Frontier launched with a small group of initial customers, including HP, Intuit, Oracle, State Farm, Thermo Fisher, and Uber.

Security and governance concerns

Frontier is available only to a select group of customers with wider availability coming soon. Enterprise providers are already weighing what the platform needs to address.

Ellen Boehm, senior vice president for IoT and AI Identity Innovation at Keyfactor, told VentureBeat that companies will still need to focus their agents on security and identity. 

“Agent platforms like OpenAI’s Frontier model are critical for democratizing AI adoption beyond the enterprise,” she said. “This levels the playing field — startups get enterprise-grade capabilities without enterprise-scale infrastructure, which means more innovation and healthier competition across the market. But accessible doesn’t mean you skip the fundamentals.” 

Salesforce AI executive vice president and GM Madhav Thattai, who is overseeing an agent builder and library platform at his company, noted that no matter the platform, enterprises need to focus agents on value.

“What we’re finding is that to build an agent that actually does something at scale that creates real ROI is pretty challenging,” Thattai said. “The true business value for enterprises doesn’t reside in the AI model alone — it’s in the ‘last mile.’”

“That is the software layer that translates raw technology into trusted, autonomous execution. To traverse this last mile, agents must be able to reason through complexity and operate on trusted business data, which is exactly where we are focusing.” 

TechEx Global returns to London with enterprise technology and AI execution


London: TechEx Global 2026, one of Europe’s biggest enterprise technology conferences, brought thousands of technology professionals together at Olympia London on 4 and 5 February. The event went beyond buzzwords, focusing on how emerging technologies, especially AI, are being applied in real business contexts.

TechEx Global combines several co-located expos, including AI & Big Data, Cyber Security & Cloud, IoT Tech, Intelligent Automation, and Digital Transformation. Over 200 expert speakers and 150 exhibitors offered insights into how organisations are using digital tools to solve real problems and make decisions, not just generate answers.

From talk to execution

One recurring theme…
This story continues at The Next Web

IEEE Online Mini-MBA Helps Fill AI Skills Gaps

Boardroom priorities are shifting from financial metrics toward technical oversight. Although market share and operational efficiency remain business bedrocks, executives also must now manage the complexities of machine learning, the integrity of their data systems, and the risks of algorithmic bias.

The change represents more than just a tech update; it marks a fundamental redefinition of the skills required for business leadership.

Research from the McKinsey Global Institute on the economic impact of artificial intelligence shows that companies integrating it effectively have boosted profit margins by up to 15 percent. Yet the same study revealed a sobering reality: 87 percent of organizations acknowledge significant AI skill gaps in their leadership ranks.

That disconnect between AI’s business potential and executive readiness has created a need for a new type of professional education.

The leadership skills gap in the AI era

Traditional business education, with its focus on finance, marketing, and operations, wasn’t designed for an AI-driven economy. Today’s leaders need to understand not just what AI can do but also how to evaluate investments in the technology, manage algorithmic risks, and lead teams through digital transformations.

The challenges extend beyond the executive suite. Middle managers, project leaders, and department heads across industries are discovering that AI fluency has become essential for career advancement. In 2020 the World Economic Forum predicted that 50 percent of all employees would need reskilling by 2025, with AI-related competencies topping the list of required skills.

IEEE | Rutgers Online Mini-MBA: Artificial Intelligence

Recognizing the skills gap, IEEE partnered with the Rutgers Business School to offer a comprehensive business education program designed for the new era of AI. The IEEE | Rutgers Online Mini-MBA: Artificial Intelligence program combines rigorous business strategy with deep AI literacy.

Rather than treating AI as a separate technical subject, the program incorporates it into each aspect of business strategy. Students learn to evaluate AI opportunities through financial modeling, assess algorithmic risks through governance frameworks, and use change-management principles to implement new technologies.

A curriculum built for real-world impact

The program’s modular structure lets professionals focus on areas relevant to their immediate needs while building toward comprehensive AI business literacy. Each of the 10 modules includes practical exercises and case study analyses that participants can immediately apply in their organization.

The Introduction to AI module provides a comprehensive overview of the technology’s capabilities, benefits, and challenges. Other technologies are covered as well, including how they can be applied across diverse business contexts, laying the groundwork for informed decision‑making and strategic adoption.

Building on that foundation, the Data Analytics module highlights how AI projects differ from traditional programming, how to assess data readiness, and how to optimize data to improve accuracy and outcomes. The module can equip leaders to evaluate whether their organization is prepared to launch successful AI initiatives.

The Process Optimization module focuses on reimagining core organizational workflows using AI. Students learn how machine learning and automation are already transforming industries such as manufacturing, distribution, transportation, and health care. They also learn how to identify critical processes, create AI road maps, establish pilot programs, and prepare their organization for change.

Industry-specific applications

The core modules are designed for all participants, and the program highlights how AI is applied across industries. By analyzing case studies in fraud detection, medical diagnostics, and predictive maintenance, participants see underlying principles in action.

Participants gain a broader perspective on how AI can be adapted to different contexts so they can draw connections to the opportunities and challenges in their organization. The approach ensures everyone comes away with a strong foundation and the ability to apply learned lessons to their environment.

Flexible learning for busy professionals

With the understanding that senior professionals have demanding schedules, the mini-MBA program offers flexibility. The online format lets participants engage with content in their own time frame, while live virtual office hours with faculty provide opportunities for real-time interaction.

The program, which offers discounts to IEEE members and flexible payment options, qualifies for many tuition reimbursement programs.

Graduates report that implementing AI strategies developed during the program has helped drive tangible business results. This success often translates into career advancement, including promotions and expanded leadership roles. Furthermore, the curriculum empowers graduates to confidently vet AI vendor proposals, lead AI project teams, and navigate high-stakes investment decisions.

Beyond curriculum content, the mini-MBA can create valuable professional networks among AI-forward business leaders. Participants collaborate on projects, share implementation experiences, and build relationships that extend beyond the program’s 12 weeks.

Specialized training from IEEE

To complement the mini-MBA program, IEEE offers targeted courses addressing specific AI applications in critical industries. The Artificial Intelligence and Machine Learning in Chip Design course explores how the technology is revolutionizing semiconductor development. Integrating Edge AI and Advanced Nanotechnology in Semiconductor Applications delves into cutting-edge hardware implementations. The Mastering AI Integration in Semiconductor Manufacturing course examines how AI enhances production efficiency and quality control in one of the world’s most complex manufacturing processes. AI in Semiconductor Packaging equips professionals to apply machine learning and neural networks to modernize semiconductor packaging reliability and performance.

The programs grant professional development credits including PDHs and CEUs, ensuring participants receive formal recognition for their educational investments. Digital badges provide shareable credentials that professionals can showcase across professional networks, demonstrating their AI competencies to current and prospective employers.

Learn more about IEEE Educational Activities’ corporate solutions and professional development programs at innovationatwork.ieee.org.

GPT 5.3 Codex, OpenAI's new agentic coding model, helped create itself


GPT-5.3 Codex merges the advanced coding capabilities of GPT-5.2 Codex with the reasoning and professional knowledge of GPT-5.2 into a single, unified model that is 25 percent faster than its predecessors. According to OpenAI, the model even contributed to its own development, as early versions were used to debug training processes,…