Apple’s iPhone 16 lineup is here. Though the base model iPhone 16 has taken a lot of the spotlight this year with its redesigned camera layout and amazing colors, the iPhone 16 Pro is a more iterative upgrade.
I personally bought an iPhone 16 Pro to upgrade from my iPhone 15 Pro. To be honest, though, if the iPhone 16 had a 1TB storage option, I would have gone that route. Since it doesn’t, I had to go with the iPhone 16 Pro, as much as I wanted a pink phone. But storage wasn’t the only reason — I also wanted the improved telephoto camera that I missed out on last year.
So, was going to the iPhone 16 Pro from the iPhone 15 Pro worth it? Let’s find out.
For one, the iPhone 16 Pro now has the 5x optical zoom that was previously exclusive to last year’s iPhone 15 Pro Max, as Apple made the camera systems on both Pro models equal this time.
Apple also improved the ultrawide camera on the iPhone 16 Pro, going to 48MP from the previous 12MP. This should mean better detail and resolution in your ultrawide shots, and macros can now be in full 48MP, too.
Though the main camera on both phones remains at 48MP, Apple rebranded the main camera on the iPhone 16 Pro to a “Fusion” camera rather than just “main” like it did before. Does this actually mean anything? We’ll see.
Since Apple made no improvements to the TrueDepth front camera (still 12MP and f/1.9 aperture on both), we’ll just be looking at the triple-lens camera system in this comparison. Ready? Let’s get started.
Again, the main camera on both the iPhone 16 Pro and iPhone 15 Pro is 48MP. But it’s now called the Fusion camera on the iPhone 16 Pro, rather than just the “main” camera like on the iPhone 15 Pro.
Let’s look at this image of a cute pumpkin carriage display at the Anaheim Majestic Garden Hotel. The most obvious difference between the two is the blue lights on the pumpkin carriage. With the iPhone 15 Pro image, the blue light bleeds into the orange of the pumpkin, making it look more blue than it actually is. The iPhone 16 Pro handles the light better, as the blue light doesn’t bleed out to the orange, and there’s more contrast.
The leaves at the bottom of the carriage and the green stem on top are also more vibrant in the iPhone 16 Pro, but the glittery leaf is more textured with the iPhone 15 Pro.
[Photos: 1. iPhone 16 Pro, 2. iPhone 15 Pro]
Here’s a cute little Halloween tree display at the hotel. Honestly, there isn’t a big difference between these two images. The iPhone 16 Pro may have captured a bit more of the tree detail in the shadows at the bottom (the silver specks) and shows less bleeding from the lights, but the iPhone 15 Pro made the colors at the top of the tree a tad more vibrant. Otherwise, they’re both pretty equal.
[Photos: 1. iPhone 16 Pro, 2. iPhone 15 Pro]
This is a fancy chef’s omakase plate I got for my wedding anniversary dinner at Hanagi Japanese Restaurant (highly recommend!) in Anaheim. Both images are very similar, but when you look closer, the iPhone 16 Pro is better. More of the nigiri sushi pieces are sharper and in focus, making it easier to see the texture. The color is also better with the iPhone 16 Pro, as evident with the tuna and salmon roe. But again, the differences are minimal unless you really scrutinize them.
[Photos: 1. iPhone 16 Pro, 2. iPhone 15 Pro]
This is a low-light shot featuring a serene little koi pond in the garden area of the Anaheim Majestic Garden Hotel. I took the photo around 8 p.m., and there were only a few of those lamps outside. Both images look good, but the iPhone 16 Pro is a bit more vivid with the color, especially the greenery in the background. Overall, they’re pretty equal.
This year, Apple made big improvements to the ultrawide camera, bumping it up to 48MP, which is what many flagship Android phones have nowadays. Apple also improved the sensors, which means it should be able to capture better ultrawide shots in low light. But does it really?
[Photos: 1. iPhone 16 Pro, 2. iPhone 15 Pro]
I used the ultrawide camera to capture the full spooky armor and dress display. Both photos look the same on the surface, aside from a slight difference in overall tone. The details are similar even when you zoom in to examine them more closely. I was expecting more from the iPhone 16 Pro here, but that didn’t turn out to be the case.
[Photos: 1. iPhone 16 Pro, 2. iPhone 15 Pro]
This is a better case for the improvements to the ultrawide camera on the iPhone 16 Pro. I snapped this ultrawide shot of the koi pond at the hotel at night, and the iPhone 16 Pro version captured more light. The iPhone 16 Pro also handles the light better, as it doesn’t appear blown out like the iPhone 15 Pro image.
[Photos: 1. iPhone 16 Pro, 2. iPhone 15 Pro]
Here’s an ultrawide shot of Monstro on the Storybook Canals ride at Disneyland. Both iPhones handled the scene similarly, but the colors are a bit more vibrant in the iPhone 15 Pro image than the iPhone 16 Pro. It’s evident in the trees, the water, and Monstro himself. In terms of detail, both are about equal.
[Photos: 1. iPhone 16 Pro, 2. iPhone 15 Pro]
Now, let’s try some macro photos. Here’s a closeup of a flower’s pistil. The iPhone 16 Pro image is much clearer and brighter with the color. However, the iPhone 15 Pro version handled the contrast better, which I prefer a bit more. But as far as how everything is in focus, the iPhone 16 Pro takes the cake.
[Photos: 1. iPhone 16 Pro, 2. iPhone 15 Pro]
Let’s try another macro shot of a different flower. The difference between these two images is much more apparent. Once again, the iPhone 16 Pro version is crystal clear and in focus, while the iPhone 15 Pro image has a lot of distortion and blurriness. You can even see a bug on the bottom petal much more easily with the iPhone 16 Pro.
Last year, only the iPhone 15 Pro Max got the 5x optical zoom telephoto camera, as the iPhone 15 Pro had just up to 3x. But this year, Apple made the two Pro models equal in terms of camera features, so does that 5x optical zoom really make that much of a difference?
[Photos: 1. iPhone 16 Pro, 2. iPhone 15 Pro]
Here’s a 5x zoomed-in shot of some buildings I can see across the street from the park. Since the iPhone 15 Pro uses a digital crop for its 5x zoom, the loss of detail is pretty clear when you look closely at it. For example, the texture in the wall of the beige townhomes is barely visible, whereas you can clearly see it with the iPhone 16 Pro’s 5x zoom. Other details, like the tree, also appear soft in the iPhone 15 Pro, while they’re clear with the iPhone 16 Pro.
[Photos: 1. iPhone 16 Pro, 2. iPhone 15 Pro]
I snapped another quick 5x zoom shot of some palm trees in my neighborhood. Both images look similar, but if you look a little closer, you’ll be able to see the sharpness of the leaves in the iPhone 16 Pro version, whereas they appear softer in the other. It looks like the iPhone 15 Pro also made the sky appear a more vibrant blue, which you may or may not prefer.
[Photos: 1. iPhone 16 Pro, 2. iPhone 15 Pro]
A half-moon was still out this morning, so I decided to see how well the zoom on both iPhones would handle it. Of course, the 5x zoom doesn’t give you a ton of detail of the moon, but you can at least make out the moon’s surface on the iPhone 16 Pro. With the iPhone 15 Pro, it’s much fuzzier and harder to make out the different surface shades.
[Photos: 1. iPhone 16 Pro, 2. iPhone 15 Pro]
This is an interesting one. I decided to try a 3x zoom image since that’s the maximum optical zoom range for the iPhone 15 Pro. Since the iPhone 16 Pro only has 2x or 5x optical zoom (but up to 25x digital zoom), it uses digital zoom for 3x. The iPhone 15 Pro, on the other hand, has 3x optical zoom but not 5x. So this time, the tables have turned — the iPhone 16 Pro’s digital 3x zoom is not great compared to the iPhone 15 Pro’s 3x optical zoom. The left side of the rose garden looks dull and lifeless on the iPhone 16 Pro but is vibrant and crisp on the iPhone 15 Pro.
If you’re still using an iPhone 15 Pro, this isn’t a recommendation to replace it right now with an iPhone 16 Pro. While the cameras are an improvement over last year, it’s still a pretty iterative upgrade, and unless you really care about the tiny details, it’s probably not worth it (for most people).
However, if you really want the 5x optical zoom that was missing last year and you enjoy taking ultrawide and macro shots, then the iPhone 16 Pro is worth considering. But for the main camera, which is likely to be the one that most people use the most, there’s very little difference, and not enough to justify the money to upgrade.
So, what’s the conclusion? If the telephoto and ultrawide cameras are your top priority, there’s a case for upgrading. But if you can do without those upgrades, and the main camera is your main concern, you can safely sit this one out.
While many existing risks and controls can apply to generative AI, the groundbreaking technology has many nuances that require new tactics, as well.
Models are susceptible to hallucinations, or the production of inaccurate content. Other risks include the leaking of sensitive data via a model’s output, the tainting of models in ways that allow prompt manipulation, and biases introduced by poor training data selection or insufficiently controlled fine-tuning and training.
Ultimately, conventional cyber detection and response needs to be expanded to monitor for AI abuses — and AI should conversely be used for defensive advantage, said Phil Venables, CISO of Google Cloud.
“The secure, safe and trusted use of AI encompasses a set of techniques that many teams have not historically brought together,” Venables noted in a virtual session at the recent Cloud Security Alliance Global AI Symposium.
Lessons learned at Google Cloud
Venables argued for the importance of delivering controls and common frameworks so that every AI instance or deployment does not start all over again from scratch.
“Remember that the problem is an end-to-end business process or mission objective, not just a technical problem in the environment,” he said.
Nearly everyone by now is familiar with many of the risks associated with the potential abuse of training data and fine-tuned data. “Mitigating the risks of data poisoning is vital, as is ensuring the appropriateness of the data for other risks,” said Venables.
Importantly, enterprises should ensure that data used for training and tuning is sanitized and protected and that the lineage or provenance of that data is maintained with “strong integrity.”
“Now, obviously, you can’t just wish this were true,” Venables acknowledged. “You have to actually do the work to curate and track the use of data.”
This requires implementing specific controls and tools with security built in that act together to deliver model training, fine-tuning and testing. This is particularly important to assure that models are not tampered with, either in the software, the weights or any of their other parameters, Venables noted.
“If we don’t take care of this, we expose ourselves to multiple different flavors of backdoor risks that can compromise the security and safety of the deployed business or mission process,” he said.
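A minimal sketch of what such a tamper check can look like in practice (illustrative only, not Google Cloud’s actual tooling): record a cryptographic digest of each model artifact when it is produced, then verify that digest before the artifact is loaded or deployed.

```python
import hashlib


def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so multi-gigabyte weight files never sit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artifact(path: str, expected_digest: str) -> bool:
    """Compare an artifact (weights, code, config) against the digest in its provenance record."""
    return sha256_of_file(path) == expected_digest
```

A deployment gate would refuse to load any artifact for which `verify_artifact` returns False; signing the recorded digests closes the loop against tampering with the provenance record itself.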
Filtering to fight against prompt injection
Another big issue is model abuse from outsiders. Models may be tainted through training data or other parameters that get them to behave against broader controls, said Venables. This could include adversarial tactics such as prompt manipulation and subversion.
Venables pointed out that there are plenty of examples of people manipulating prompts both directly and indirectly to cause unintended outcomes in the face of “naively defended, or flat-out unprotected models.”
This could be text embedded in images or other inputs in single or multimodal models, with problematic prompts “perturbing the output.”
“Much of the headline-grabbing attention is triggering on unsafe content generation; some of this can be quite amusing,” said Venables.
It’s important to ensure that inputs are filtered for a range of trust, safety and security goals, he said. This should include “pervasive logging” and observability, as well as strong access controls maintained on models, code, data and test data.
“The test data can influence model behavior in interesting and potentially risky ways,” said Venables.
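As a rough illustration of input filtering paired with pervasive logging (the deny patterns below are made-up placeholders; production systems lean on trained classifiers rather than regexes):

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("prompt-filter")

# Illustrative deny patterns only; real filters use classifiers, not hand-written regexes.
DENY_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal .*system prompt", re.IGNORECASE),
]


def screen_prompt(prompt: str) -> bool:
    """Log every inbound prompt and reject ones matching known injection patterns."""
    log.info("prompt received: %r", prompt[:200])
    for pattern in DENY_PATTERNS:
        if pattern.search(prompt):
            log.warning("prompt rejected by pattern %s", pattern.pattern)
            return False
    return True
```

The point of the logging calls is the observability Venables describes: every prompt, accepted or rejected, leaves an audit trail that detection and response teams can monitor.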
Controlling the output, as well
Users getting models to misbehave is indicative of the need to manage not just the input, but the output, as well, Venables pointed out. Enterprises can create filters and outbound controls — or “circuit breakers” — around how a model can manipulate data, or actuate physical processes.
“It’s not just adversarial-driven behavior, but also accidental model behavior,” said Venables.
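An outbound circuit breaker can be as simple as an allowlist gate between a model’s proposed action and the code that executes it. The tool names here are hypothetical, not from any particular product:

```python
# Hypothetical tool names; a real allowlist would come from the application's action registry.
ALLOWED_ACTIONS = {"search_docs", "summarize"}


def circuit_breaker(action: str) -> bool:
    """Permit only allow-listed actions proposed in a model's output."""
    return action in ALLOWED_ACTIONS


def dispatch(action: str) -> str:
    """Execute an action only if the circuit breaker allows it; block everything else."""
    if not circuit_breaker(action):
        return f"blocked: {action}"
    return f"executed: {action}"
```

Because the gate sits outside the model, it catches accidental misbehavior the same way it catches adversarial prompts, which is exactly the dual risk Venables describes.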
Organizations should monitor for and address software vulnerabilities in the supporting infrastructure itself, Venables advised. End-to-end platforms can control the data and the software lifecycle and help manage the operational risk of AI integration into business and mission-critical processes and applications.
“Ultimately here it’s about mitigating the operational risks of the actions of the model’s output, in essence, to control the agent behavior, to provide defensive depth of unintended actions,” said Venables.
He recommended sandboxing and enforcing least privilege for all AI applications. Models should be governed, protected and tightly shielded through independent monitoring, API filters or other constructs that validate and regulate behavior. Applications should also be run in locked-down workloads, and enterprises need to focus on observability and logging of actions.
In the end, “it’s all about sanitizing, protecting, governing your training, tuning and test data. It’s about enforcing strong access controls on the models, the data, the software and the deployed infrastructure. It’s about filtering inputs and outputs to and from those models, then finally making sure you’re sandboxing model use and applications in some risk and control framework that provides defense in depth.”
Buying a home has always been complicated. You have to figure out how much money to put down and how that down payment will affect a monthly mortgage bill. Then there are the closing costs and fees. Kevin Bennett launched Further to try to help make the financial process easier to navigate — especially for first-time buyers.
Further is a fintech platform that walks users through the financial side of home buying. The company’s first product, which goes live Friday, is a calculator that shows what people can afford and what their monthly mortgage payments and closing costs could look like, among other metrics based on real-time interest rates.
Unlike other mortgage calculators you can find on Zillow and LendingTree, Further looks to give users more than the numbers. It tells users how easy it will be for them to find a loan based on their financial status, whether they should wait to buy, and whether they should pursue specific types of loans based on their financial profile.
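The number at the center of any such calculator is the standard fixed-rate amortization formula. Further hasn’t published its models, so the figures below are made-up inputs, not the company’s own:

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Fixed monthly payment on a fixed-rate loan: P * r / (1 - (1 + r)^-n)."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # total number of payments
    if r == 0:
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)


# Example: $400,000 home, 20% down, 6.5% annual rate, 30-year term
loan = 400_000 * 0.8
payment = monthly_payment(loan, 0.065, 30)
```

With those sample inputs the payment lands a little above $2,000 a month before taxes and insurance, which is the kind of baseline number any affordability tool starts from.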
The platform is currently free to use. The company plans to monetize once it releases more product developments but declined to share details.
“A generation ago, our parents bought a $200,000 home with a 20% mortgage, and it was very straightforward,” Bennett said. “There was one kind of mortgage, and that’s what you did and it’s just more complicated. There are lots of kinds of mortgages. There are lots of implications. Homes are much more expensive now, so there’s just a lot more complexity, and it’s a much bigger financial decision.”
Last year Bennett found himself looking for something new to work on after stepping back from Caribou, the auto loan refinancing startup he co-founded in 2016 and where he served as CEO. He knew he wanted to do something else mission-oriented but wasn’t sure where.
He started looking into real estate, a category he said he’s always been fascinated with. The fact that his whole family works in real estate helped, too. He started talking to folks who had purchased their home within the last two years and found a lot of common pain points: People didn’t understand the process and were relying on homemade spreadsheets to try to figure out what they could afford.
Bennett also had a personal experience: He bought and sold a townhouse in his 20s and was surprised to find out he endured a $30,000 loss, despite selling the home for the original purchase price. That’s because he missed out on certain home improvements that could’ve increased the house’s value.
“You can’t hit the undo button once you buy that house,” Bennett said. “It felt like there was a gap in the market. It felt like it was a lot more complicated than it was a generation ago.”
He reached out to his friend Chris Baker, a real estate expert and former head of product at EasyKnock, about his idea last year. The pair got to work fast: their first conversation was November 3, 2023. They decided to work together in January, launched the product in April, and raised an undisclosed pre-seed round in June. Now, they are coming out of stealth.
“Our goal is to take care of the complicated jargon and stuff and really help you understand as easily as possible what it is you need to know, with transparency, obviously, but also putting you in the driver’s seat and in control,” he said.
The company’s previously undisclosed pre-seed round raised $4.1 million from investors including Link Ventures, Vesta Ventures, and Fidi Ventures, among others. Bennett said that fundraising wasn’t too challenging, as half of the capital the company raised was from investors who backed him while he was at Caribou. Bennett thinks his track record as a founder made a big difference. The company built its cap table intentionally to include angel investors who have experience in the real estate market, he said.
This kind of financial information and guidance seems like something a Zillow or Redfin would be ripe to copy, especially considering Zillow already offers a mortgage calculator and some advice of its own. But Bennett said he wasn’t overly concerned about the competition. He thinks that many companies fall on either the proptech side or the fintech side, and rarely in the middle as Further does, which gives it more of a moat.
But Further is definitely not the only company that sits between proptech and fintech that is aimed at consumers. Online mortgage startup Better.com, which allows consumers to browse for mortgage options or refinance an existing one, is a good example.
It will likely depend on what Further unveils in its planned Q1 product release that will include more features and capabilities, but Bennett didn’t share too many details just yet. For now, users can use Further to get an idea of what they can afford and what they can expect to pay when buying a house.
“My hope is that we can enable people with the right insights and information to make good decisions and plan for this really big part of their life in a way that gives them confidence, puts them at ease and lets them focus on, you know, what they really want to focus on, which is kind of that dream of being a homeowner,” Bennett said.
A newly proposed cosmic speed limit may constrain how fast anything in the universe can grow. Its existence follows from Alan Turing’s pioneering work on theoretical computer science, which opens the intriguing possibility that the structure of the universe is fundamentally linked to the nature of computation.
Cosmic limits are not a new idea. While studying the relationship between space and time, Albert Einstein showed that nothing in the universe can exceed the speed of light, as part of his special theory of relativity. Now, Toby Ord at…
The addition of a 4.3-inch color TFT screen makes the new Wyze Scale Ultra one of the brand’s most expensive smart scales to date, but at $43.99, it’s still considerably cheaper than offerings from companies like Withings. It’s available from Wyze directly or from Amazon in white or black.
The Wyze Scale Ultra says it can track 13 different health metrics, including your heart rate, your metabolic age (a comparison of how your body burns calories at rest to others your age), and measurements of fat, muscle, and water.
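Wyze doesn’t publish its formulas, but metabolic-age estimates on smart scales are typically built on a resting-energy baseline such as the Mifflin-St Jeor equation. This sketch shows that common baseline, not the Scale Ultra’s actual method:

```python
def bmr_mifflin_st_jeor(weight_kg: float, height_cm: float, age: int, male: bool) -> float:
    """Estimated resting calories burned per day (Mifflin-St Jeor equation)."""
    sex_offset = 5 if male else -161
    return 10 * weight_kg + 6.25 * height_cm - 5 * age + sex_offset
```

A "metabolic age" then comes from comparing this estimate against typical values for each age group, which is where vendor-specific (and unpublished) reference data comes in.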
Previous versions of Wyze’s smart scales featured simple segmented LED displays to display basic information like weight, BMI, and muscle mass, leaving more detailed breakdowns of your health metrics for an accompanying mobile app. The Wyze Scale Ultra can display more data, including how measurements like weight or body fat have fluctuated over time, and it’s customizable, so it only displays what you want it to.
The information displayed on the Wyze Scale Ultra’s full color screen can be customized by each user. Image: Wyze
Like the Wyze Scale X introduced in 2022, the Scale Ultra offers modes for easily weighing pets, babies, or luggage and a pregnancy mode that turns off the weak electrical current used for bioelectric impedance analysis (BIA) as an added safety precaution.
The Wyze Scale Ultra can also be used to weigh pets, children, and luggage. Image: Wyze
Connectivity includes both Bluetooth and Wi-Fi, and the Wyze Scale Ultra can automatically recognize and sync measured health metrics for up to eight different users — either to its mobile app or to the Apple Health, Google Fit, and Fitbit platforms. It’s not rechargeable, however. It runs on four AA batteries, which Wyze says will keep the scale powered for up to nine months.
The outcome of the U.S. presidential election on Nov. 5 won’t affect oil production levels in the short to medium term, Exxon CEO Darren Woods told CNBC on Friday.
Former President Donald Trump has called for unconstrained oil and gas production to lower energy prices and fight inflation, boiling his energy policy down to three words on the campaign trail: “Drill, baby, drill.”
“I’m not sure how drill, baby, drill translates into policy,” Woods told CNBC’s “Squawk Box” Friday after the largest U.S. oil and gas company reported third-quarter results.
Woods said U.S. shale production does not face constraints from “external restrictions.” The U.S. has produced record amounts of oil and gas during the Biden administration.
Over the past six years, the U.S. has produced more crude oil than any other nation in history, including Saudi Arabia and Russia, according to the Energy Information Administration.
Output in the U.S. is driven by the oil and gas industry deploying technology and investment to generate shareholder returns based on the break-even cost of production, the CEO said.
“Certainly we wouldn’t see a change based on a political change but more on an economic environment,” Woods said. “I don’t think there’s anybody out there that’s developing a business strategy to respond to a political agenda,” he said.
While shale production has not faced constraints on developing new acreage, there are resources in areas like the Gulf of Mexico that have not opened up due to federal permitting, the CEO said.
“That could, for the longer term, open up potential sources of supply,” Woods said. In the short to medium term, however, unconventional shale resources are available and it’s just a matter of developing them based on market dynamics, he said.
[Chart: Exxon Mobil shares in 2024]
The vast majority of shale resources in the U.S. are on private land and regulated at the state level, according to an August note from Morgan Stanley. About 25% of oil and 10% of natural gas is produced on federal land and waters subject to permitting, according to Morgan Stanley.
Vice President Kamala Harris opposed fracking during her bid for the 2020 Democratic presidential nomination. She has since reversed that position in an effort to shore up support in the crucial swing state of Pennsylvania, where the natural gas industry is important for the state’s economy.
LiteSpeed Cache, an immensely popular WordPress plugin for site performance optimization, suffered from a vulnerability which allowed threat actors to gain admin status.
With such elevated privileges, they would be able to perform all sorts of malicious activities on the compromised websites.
According to researchers from Patchstack, the vulnerability was discovered in the is_role_simulation function, and it is relatively similar to a different vulnerability that was discovered last summer. The function apparently used a weak security hash check that could be broken with brute force, granting the attackers the ability to abuse the crawler feature and simulate a logged-in administrator.
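Patchstack hasn’t published exploit code, but the underlying weakness is easy to illustrate: when a security token is derived from a small, guessable seed and then truncated, enumerating the entire seed space is cheap. The token scheme below is hypothetical, not LiteSpeed Cache’s actual code:

```python
import hashlib
from typing import Optional


def weak_token(seed: int) -> str:
    """Hypothetical weak token: an MD5 of a small integer seed, truncated to 6 hex characters."""
    return hashlib.md5(str(seed).encode()).hexdigest()[:6]


def brute_force(target: str, max_seed: int = 1_000_000) -> Optional[int]:
    """Enumerate every candidate seed; a million MD5 hashes takes seconds on commodity hardware."""
    for seed in range(max_seed):
        if weak_token(seed) == target:
            return seed
    return None
```

Six hex characters give only about 16.7 million possible tokens, so even without knowing the seed an attacker can simply try values until one produces a token the server accepts.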
Who is vulnerable?
There are a few factors that need to align before the vulnerability can be abused, though.
That includes having the crawler turned on, with the run duration set between 2500 and 4000 and the interval between runs set to 2500–4000. Furthermore, the Server Load Limit should be set to 9, Role Simulation to 1 (the ID of a user with the admin role), and every row should be toggled OFF except Administrator.
The vulnerability is now tracked as CVE-2024-50550 and carries a severity score of 8.1 (high). It has already been patched, with version 6.5.2 of the plugin being the earliest clean release. LiteSpeed Cache is one of the most popular plugins of its kind, with more than six million active installations.
There is no evidence of in-the-wild abuse so far, which suggests cybercriminals have not yet picked up on the vulnerability.
However, now that the patch is public, it’s only a matter of time before they start scanning for vulnerable websites. Currently, almost three-quarters (72.1%) of all LiteSpeed Cache websites are running the latest version, 6.5, with 6.7% running 6.4 and a notable 21.2% running “other” versions. Therefore, at least 27.9% of sites could be targeted, which is more than 1.6 million.