Octobers excite us at Halide HQ. Apple releases new iPhones, and they’re certain to upgrade the cameras. As the makers of a camera app, we tend to take a longer look at these upgrades. Where other reviews might come out immediately and offer a quick impression, we spend a lot of time testing it before coming to our verdict.
This takes weeks (or this year, months) after initial reviews, because I believe in taking time to understand all the quirks and features. In the age of smart cameras, there are more quirks than ever. This year’s deep dive into Apple’s latest and greatest — the iPhone 13 Pro — took extra time. I had to research a particular set of quirks.
“Quirk”? That might be a startling word to read, given what most reviews say. Most smartphone reviews and technology sites rank the new iPhone 13 Pro’s camera system among the best on the market right now.
I don’t disagree.
But I must admit I don’t take photos like most people. An average iPhone user snaps a picture in Apple’s Camera app, and… I work on my own camera app. I take photos in both Apple’s app and our own — and that lets me do something that Apple’s can’t: take native RAW photos. These shots let me poke and prod at the unprocessed photo that comes straight out of the hardware. Looking at the raw data, I’ve concluded that while Apple has taken more than one leap forward in hardware, they’re in a tricky position in software.
The Importance of Processing
When you take a photo on a modern iPhone — or any smartphone for that matter — you might like to think that what you saw was what you captured, but nothing could be further from the truth.
The zeros and ones that your sensor sees would mean nothing to the human eye. They require interpretation. For example, some of the colors your camera sees can’t be represented on your screen, so it needs to find something close. Some bits of processing are creative, like adding contrast to make things “pop,” while other decisions compensate for the weaknesses of the hardware, like noise.
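To make that concrete, here is a minimal sketch of one such interpretive step: a simple contrast curve applied to linear sensor values. The function and numbers are purely illustrative; Apple’s actual pipeline is proprietary and far more elaborate.

```python
import numpy as np

def s_curve(raw, strength=0.2):
    """Apply a simple S-curve to linear sensor values in [0, 1] to add 'pop'.

    This is one toy interpretation step, not Apple's processing.
    """
    centered = raw - 0.5
    # Push values away from mid-gray: shadows get darker, highlights brighter.
    return np.clip(0.5 + (1 + strength) * centered - strength * centered ** 3, 0.0, 1.0)

pixels = np.array([0.10, 0.25, 0.50, 0.75, 0.90])  # made-up linear values
print(s_curve(pixels))  # mid-tones stay put, contrast increases at the ends
```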
Consider this underprocessed iPhone photo:
This noisy shot didn’t come from an iPhone 5. It’s from an iPhone 13 Pro. This image, a ‘raw’ capture, is much noisier than what you’d get from a dedicated, full-size camera. Why? Physics.
Consider this series showing the evolution of Canon’s cameras over more than half a century:
You’ll notice that while technologies come and go, and even the medium changes (this camera, while externally similar, moved from 35mm film to digital), the camera stayed a similar shape, and most importantly, size.
Technology always strives for miniaturization. Your iPhone is enabled by smaller and denser chips with more power than the large desktop computers of decades ago; your iPhone screen is a higher resolution than most TVs, packed into a tiny 5 inch size, and your camera, too, is only a fraction of the size of a digital camera from years past.
Unfortunately, cameras are limited by the laws of physics. A larger lens can collect more light and produce a ‘depth of field’ that we find appealing in portraiture. A larger sensor means less noise and more detail. An inconvenient truth of phone photography is that it’s impossible to make the camera smaller without losing quality. But smartphones have a powerful advantage over their big brothers: the magic of processing. Today, the most cutting edge research (including our own at Halide) in photography is in an area called Computational Photography.
Putting the ‘Smart’ into Smartphone Photography
Around the time of iOS 4 (yes, over a decade ago), Apple introduced an ‘HDR’ option to their camera app to address one of the most common technical challenges in photography: capturing really bright and really dark stuff at the same time.
When taking a photo, clouds in the sky can get so bright that the camera sees only a white shape. If you turn down that brightness, you’ll see the shadows turn black, losing detail. While the human eye can see both the clouds and shadows at the same time, an iPhone 4’s sensor has less “dynamic range.”
In fact, this “high dynamic range” problem has existed since the early days of photography. Experienced photographers dealt with it by taking multiple photos of different exposures and patching them together. iOS 4 solved it with an HDR mode you could toggle on and off. This toggle was important because…
Automatic edits on photos can go wrong. When there are objects in motion, the ‘merging’ of photos creates artifacts, or ‘ghosting.’ This was eventually worked out with smarter algorithms, more powerful chips, faster memory, and an iPhone that could simply take photos so fast that there were fewer gaps between frames.
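For a sense of how bracketed exposures get merged, here is a minimal sketch using OpenCV’s alignment and exposure-fusion tools. The file names are hypothetical, and this stands in for the general technique rather than Apple’s HDR implementation.

```python
import cv2
import numpy as np

# Hypothetical bracketed exposures: underexposed, normal, overexposed.
paths = ["under.jpg", "normal.jpg", "over.jpg"]
images = [cv2.imread(p) for p in paths]

# Align the frames first; hand-held brackets rarely line up perfectly,
# and misalignment is what produces the 'ghosting' described above.
cv2.createAlignMTB().process(images, images)

# Mertens exposure fusion keeps the well-exposed parts of each frame.
fused = cv2.createMergeMertens().process(images)
cv2.imwrite("hdr_fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```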
Fast forward to today, and your iPhone goes way above and beyond HDR. It has not been a setting you can toggle for a while. When you take a photo now, the camera on the iPhone will merge many shots to get your final result. Today, your camera essentially always ‘edits’ your photos for you. And exactly how it edits them… is a bit of a mystery.
Apple, famous for its secrecy, doesn’t divulge their secret editing sauce. We know it brightens faces while retaining texture in them, it smoothes the sky and brings out color and clarity in the landscape, and in low light, it can smooth over noise while keeping the details of a sweater intact. It’s a miracle of engineering, pulled off in the blink of an eye, thanks to in-house chips optimized for these processes.
It’s safe to say that most camera users benefit from this. While these ‘edits’ can be accomplished by experienced photographers, experienced photographers make up less than 1% of iPhone users. In practice, that means these edits are part of the camera. To review an iPhone camera for most people, the computational processing is as important to assess as the hardware, if not more so.
What’s A Camera?
And with that, you can see why it is becoming increasingly important to define what we mean when we talk about a ‘camera’. If I talk about the camera you’re holding, I could be talking about the physical hardware (the lens, the sensor, and its basic operating software in the case of a digital camera), or I could be talking about the package: the hardware together with its advanced, image-merging, hyper-processing software.
Smartphone cameras really should be judged by that package. The software has become such a disproportionate part of the image quality that we can no longer separate the two; if we do, the resulting image is often less than useful. But it isn’t purely a question of quality: choosing a lens or a type of film stock can be a creative choice. With smartphones, choosing whether or not to trust the computational magic is rapidly becoming one as well.
While all the shots in this review come from an iPhone 13 Pro (and the results look even better than those of all previous generations), only some of these photos omit the processing of Apple’s Camera software.
As this software makes more and more creative decisions for us, and as long as we are able to opt out of it, we should judge it as critically as we would any other component of the camera. And that’s exactly what I will be doing in this review.
This year’s iPhone 13 Pro saw upgrades across every bit of camera hardware, save for the front-facing (aka selfie) camera. Let’s tackle them one by one.
The 26mm Wide Camera
The iPhone’s primary camera, or ‘wide’, has gotten a larger sensor and a ‘faster’ lens, which means it lets in more light. This allows for shots with less noise in low light, even before the system applies its processing.
Its wide angle continues to be the most versatile, so it makes sense that it’s the go-to camera for most shots. It’s reasonable, then, that Apple continues to invest in making it the best camera in the array. Every iPhone generation sees it improve.
Here are some comparisons against previous generations: the iPhone X, 11 Pro, and 13 Pro. I shot in ‘native’ RAW, which doesn’t apply any smart processing, and cropped in to highlight the details.
It’s harder to make out details on the iPhone X, as it exhibits a lot more noise. While the jump from the X to the 11 is noticeable, the move to the 13 Pro is much less so, despite the faster lens and larger sensor. It’s possible the new sensor and lens can resolve a lot more detail, but we can’t really tell, likely because the sensor resolution has been the same 12 megapixels since the iPhone 6S, which launched back in 2015.
The iPhone’s go-to 12 megapixel resolution has not been a particularly limiting factor to me, a pretty hardcore iPhone photographer, for the last few years, but we can start to see diminishing returns as Apple sinks such tremendous investments into the camera hardware and processing.
Don’t be surprised if the next iPhone improves the resolution — to, say, a 48 megapixel sensor. Perhaps one reason that Apple has held out is that such a bump in resolution would require 4x the processing power to perform the same computational magic. That’s a lot more data to process in the blink of an eye.
The 13mm Ultrawide Camera
2019’s iPhone 11 added a new trick to our camera bag: the ultra-wide camera. Its super-super wide GoPro-like field of view allows for dramatic captures, and saves us from difficult decisions around what gets cropped in a shot.
The biggest challenge and tradeoff in ultra-wide cameras is distortion. As the image reaches the edges of the frame, straight lines start to curve, distorting geometry and shapes. Most people are familiar with so-called ‘fisheye’ lenses, which produce very “round” images.
The ultra-wide in the iPhone still produces a ‘square’ shot, but at the edges things can start to look… a little weird:
My bike is looking odd here — as the wheel reaches the end of the frame, it is distorted.
Post-processing produces a less-distorted image, but sometimes you might enjoy the effect. Shortly after the iPhone 11 launch, tinkerers found a way to disable these corrections to see what the image looks like before processing:
Not Marques‘ best angle.
This extreme example shows the importance of ultra-wide lens corrections. This camera relies more on processing than any of the other cameras to produce usable images.
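As a rough illustration of what such a correction involves, here is a sketch using OpenCV’s undistortion routine. The camera matrix and distortion coefficients below are placeholder values, not the iPhone’s calibration data, which Apple applies inside its own pipeline.

```python
import cv2
import numpy as np

img = cv2.imread("ultrawide_uncorrected.jpg")  # hypothetical raw ultra-wide frame
h, w = img.shape[:2]

# Placeholder intrinsics and barrel-distortion coefficients for illustration only.
camera_matrix = np.array([[0.8 * w, 0,       w / 2],
                          [0,       0.8 * w, h / 2],
                          [0,       0,       1]], dtype=np.float64)
dist_coeffs = np.array([-0.30, 0.09, 0.0, 0.0, -0.01])

# Remap the image so straight lines stay straight toward the edges.
corrected = cv2.undistort(img, camera_matrix, dist_coeffs)
cv2.imwrite("ultrawide_corrected.jpg", corrected)
```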
In the iPhone 11 and 12, I found it useful to have the ultra-wide in my pocket (way more useful than the iPhone’s Panorama mode), but I still mostly avoided it. It was fixed-focus, which means there was no way for the lens to adjust what is sharp; it was designed so all of the frame was in focus. This caused smudgy images, trading clarity for a wider field of view. I never felt like it produced great shots, particularly when compared to the excellent Wide camera.
The iPhone 13 Pro addresses all of this. Without exaggeration, this might be the most significant jump in an iPhone camera since the iPhone 3GS added video.
The ultra-wide’s sensor is significantly larger, the lens aperture is much wider, and the lens can now change focus! It’s still a smaller sensor than on the Wide, and it still distorts and softens an image at the edges, but overall, it creates spectacular shots with pretty good sharpness. This is no mere upgrade, this is a whole new camera.
You can take a shot from your point-of-view and even get your legs in it, and it’ll be sharp:
Taking a break from the Deus Swank Rally di Sardegna. The frame is sharp almost all the way to the far edges, which is impressive for a lens with a full-frame equivalent focal length of 13mm.
Thanks to having real depth of field, you can now separate your subject from its background. Its larger sensor and faster aperture give you real background blur (‘bokeh’) without the need for Portrait mode!
Real bokeh! On the ultra-wide lens? That’s new.
Oh, and that adjustable focus unlocks a new superpower…
Macro
One extra bonus of the camera’s new ability to adjust its focus is a borderline bizarre close-focus distance. This new camera module can get within about half an inch of a subject and still render it sharp.
While this allows incredible fresh perspectives on the world, it’s even cooler when you apply some magic to it. A macro-capable camera is one thing, but as camera app developers we couldn’t help but push it a bit further. When we released an update to our app with fine-grained focus control and AI upscaling, we found they could also be applied to this already macro-capable camera, creating a sort of microscope:
This new camera package is supremely powerful, and it has incredible potential with smart processing as we show here with a feature like Neural Macro.
This seems like a slam dunk of a camera upgrade, yet most reviewers ran into issues with this camera. And the issue they had was with the camera being ‘smart’.
In The Switch
We can conclude that this iPhone packs a hugely powerful set of cameras in its Wide and Ultrawide cameras, but to your average user, there is only one camera: the one they shoot with. The iPhone’s camera experience is deliberately designed this way; it was carefully crafted to eliminate the complexity of a traditional multi-lens photography setup.
As an experienced photographer, if I pack three lenses for my big camera, I’m going to consciously choose between them. I think about the tradeoffs between each lens, like edge-blurriness vs field-of-view. iPhone’s camera app wants to make that decision for you. You don’t open up your iPhone camera and choose a lens; it just works. The iPhone will choose a lens, even if you don’t. And it can even switch between them, whether you choose to… or not.
When this intelligent switching works, it’s like magic. The camera behaves and works better than a set of separate cameras. You don’t pick between them*; the Camera app has been programmed with incredibly intelligent adjustments to ‘seamlessly’ switch between them. This is how you can zoom with one big wheel all the way from a 13mm to a 77mm lens:
*unless you use Halide or other apps that enforce a more strict choice between cameras
There are two ways this can go: it can work like true magic, with the camera behaving and working better than a set of separate cameras, or the ‘intelligent’ switching can confuse, creating a jarring transition when users have no idea why it is happening.
For example, on previous iPhones with dual camera systems, each camera had limitations in focus. Your wide angle lens (1×) could focus closer than the telephoto one (2×). So if you tried to take a photo of something fairly close, the iPhone always took the photo with the wide angle lens and cropped in, even if you had picked 2× in the app. It’s hard to argue with this decision, as it’s better to have a lower resolution photo than one out of focus.
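A toy sketch of that decision rule might look like the following. The zoom threshold and focus distance are guesses for illustration; Apple has never published its actual switching logic.

```python
# Illustrative only: how a camera system might pick a physical lens for a request.
TELEPHOTO_MIN_FOCUS_M = 0.4  # hypothetical closest focus distance of the telephoto

def pick_camera(requested_zoom: float, subject_distance_m: float) -> str:
    if requested_zoom >= 2.0 and subject_distance_m >= TELEPHOTO_MIN_FOCUS_M:
        return "telephoto"
    # Subject too close for the telephoto: shoot on the wide camera and crop,
    # trading resolution for an in-focus image.
    return "wide, cropped to the requested zoom"

print(pick_camera(2.0, 0.25))  # -> wide, cropped (the 'made-up' 2x camera)
print(pick_camera(2.0, 1.50))  # -> telephoto
```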
But it also confused Halide users, who wrote in to ask why we couldn’t focus on objects as close as the first-party camera. If you truly force one lens, you’ll discover that it has limitations. We had to break the news to them, like parents with older kids having a talk about Santa: that ‘close focusing telephoto’ was not real. Cover the telephoto camera with your finger, and the 2× photo still somehow worked. What was this dark magic?
It was a made-up camera that Apple created virtually to fool us all.
And that brings us to that little ‘Macro-gate’. Previously, it never really made sense to switch to the ultra-wide camera for close-ups. Now that the iPhone does, various people tweeted and many reviewers were flummoxed to see their camera jump erratically between points of view when focusing up close:
The cameras on the rear of the phone are closely spaced together, and when they are focusing on something nearby, switching between them creates a ‘jump’ in the image. This can’t be fixed easily with software or cropping; it’s a concept known as parallax. If you look at your nose and close one of your eyes, and then the other, you can see your nose jumping around. The illusion of the ‘one camera’ is broken, and no amount of processing can fix this.
The source of friction here is that users know what they want, but in an ultra-simple interface the ‘smarts’ of image processing and camera selection have to predict it. If the system assumes correctly, great. But the more complex this system becomes, the more you run into times where a seamless transition can start feeling like a choice that is being made on your behalf.
In Halide, we simply have a Macro mode. You toggle it to jump to the closest focusing lens (and we throw in some AI enhancement magic too). Apple avoids this sort of complexity in their camera app, but relented in an iOS 15 update, making it a setting.
Don’t get us wrong: we still think their goal of magical, transparent switching is the best for almost all users. But Apple runs into an unenviable challenge: a camera app that works for users of all skill levels. We design Halide to serve slightly more experienced photographers, all the way up to seasoned pros. Apple wants to serve the entire population of Earth, from pros to your parents. This requires much more advanced ‘magic’ to bridge these gaps.
Unfortunately, this is a place where the ‘magic’ illusion failed and instead started to get in the user’s way.
The 77mm Telephoto Camera
And that brings us to the final camera and my personal favorite: the telephoto camera. Introduced in the iPhone 7 Plus, the telephoto was always a fantastic way to get a closer shot from a distance, and longer focal lengths are particularly great for portraits and artistic photography.
77mm iPhone 13 Pro captures
Apple took a small step toward this major leap with last year’s iPhone 12 Pro Max. Unlike its smaller 12 Pro sibling, the Pro Max came with a 2.5x (65mm equivalent) telephoto lens, sacrificing a little bit of light for a little bit of reach. While I liked it, I found it awkward overall. It wasn’t enough reach for me to really enjoy, and I missed having the extra sharpness at 2x. The step was just too small.
No more half measures: the iPhone 13 Pro and its larger brother, the iPhone 13 Pro Max, pack 3x (77mm) lenses. Unfortunately, Apple did not bless this camera with the upgrades all the other cameras received. Its sensor remains disappointingly the same size. This is a serious problem.
Virtually all telephoto lenses make a serious sacrifice: light. For example, on the iPhone 12 Pro, the wide angle lens had an ƒ/1.6 aperture, while its telephoto had a narrower ƒ/2.0, allowing in less light. Less light translates into more noise and/or motion blur, making it hard to get a sharp shot.
This year, this gap is even greater. The iPhone 13 Pro’s wide angle camera has an ƒ/1.5 aperture, while the telephoto has an ƒ/2.8, for even less light.
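The gap is larger than the raw numbers suggest, because the light gathered scales with the square of the f-number ratio. A quick back-of-the-envelope calculation:

```python
import math

wide_f, tele_f = 1.5, 2.8
ratio = (tele_f / wide_f) ** 2
stops = math.log2(ratio)
print(f"The f/1.5 wide gathers ~{ratio:.1f}x the light of the f/2.8 telephoto")
print(f"That's roughly {stops:.1f} stops less light for the telephoto")
# ~3.5x, or about 1.8 stops
```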
What this means is that the camera that was lacking a bit in terms of clarity, noise and sharpness in previous iPhones has gotten… a bit worse.
Yes, the tradeoff was clear: we get reach. In bright daylight, you do get some fantastically sharp shots you previously simply could not get.
This is 77mm reach. Check out the motorcyclists in the bottom left of the frame. With plentiful sunlight, this shot came out quite sharp.
But it’s also clear that there’s more processing than ever happening in these photos:
A zoomed-in shot in San Francisco shows a ‘watercolor’-like effect, where the image feels heavily smoothed over to reduce the noise caused by the rapid captures needed to offset the long lens and small sensor combination.
This is the nature of long lenses and shutter speeds. When you shoot handheld, the more you zoom, the more susceptible you are to motion blur from the subtle movements of the arm holding the phone. As a rule of thumb, when shooting handheld you should set your shutter speed to one over twice the focal length of the lens. The telephoto camera on the iPhone 12 Pro Max had a 65mm focal length, so the math worked out to 1/130th of a second. With the iPhone 13 Pro’s 77mm, that works out to roughly 1/154th of a second. For reference, that’s about three times faster than what the ‘regular’ Wide camera needs.
1/154th of a second is also less time exposing, or letting in, light. Less light makes it challenging to get a great shot.
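Here is the rule of thumb worked out for the focal lengths quoted in this review. Optical stabilization shifts the practical limit, so treat these as ballpark figures.

```python
def min_handheld_shutter(focal_length_mm: float) -> float:
    # Rule of thumb from above: shutter speed of about 1 / (2 x focal length).
    return 1.0 / (2 * focal_length_mm)

for name, mm in [("Wide, 26mm", 26), ("12 Pro Max telephoto, 65mm", 65), ("13 Pro telephoto, 77mm", 77)]:
    print(f"{name}: ~1/{2 * mm} s ({min_handheld_shutter(mm):.4f} s)")
```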
The iPhone does have one tool to help it get the shot in that short moment. It can make its camera sensor more sensitive. What ends up happening is that your iPhone needs to crank up its ISO level, which translates into more noise. That means the iPhone’s post processing is going to include more noise reduction than ever before, because the shots are so noisy:
This ‘native’ RAW file shows noise in the image of an iPhone 13 Pro’s telephoto lens. Normal Camera app exposures would never show this kind of noise; this is the image before it goes through Apple’s proprietary steps of reducing noise and enhancing detail.
In my testing, it’s a lot noisier than the iPhones that came before it.
A regression in noise quality between iPhone generations isn’t new. There have been iPhone camera generations with heavy, noticeable processing: the iPhone XS notably drew confused reactions at launch from people who thought their skin was being smoothed by the new Smart HDR process, when in reality it was just merging images for a better shot. The reason for the excessive smoothing? The cameras were producing better shots, but with more noise. The increased per-shot processing required more exposures, and the only way to capture those extra frames was with more sensitivity.
This phenomenon has returned in the iPhone 13 Pro. This time your camera applies a “beauty filter” to leaves on a tree.
At sunset, noise starts to really bleed into iPhone 13 Pro’s telephoto shots in the RAW files. If shooting with Apple’s Camera app, JPGs and ProRAW files will look ‘painted over’ as harsh noise reduction does its work, with admittedly some impressive detail retained.
To make matters worse, the transparent switching between cameras rears its head once more here. As the telephoto camera is much less sensitive to light, the Camera app is very conservative in assessing when its image is clear enough to be usable.
That means many shots, even in daylight, actually come from the wide (1×) camera and are then cropped by the Camera app.
Two ProRAW images taken in the iPhone’s stock camera app. On the second shot, I tapped ‘3x’ and captured a second image.
While there is an admirable amount of detail recovered here, the second shot here is not taken by my telephoto lens — it’s just a crop of the regular Wide lens, chosen because the lack of light at night makes the telephoto lens less usable for quick handheld shots.
Cropping on the second shot shows that this is just an upscaled image from the Wide camera, not a true telephoto capture. This creates a ‘smoothed over’, low-detail look.
This issue is made worse by the greater difference in focal length between the wide and telephoto cameras. In prior years, it was only going from 1× to 2×. Now, the system crops by a factor of 3×. Consider that the megapixel count isn’t exactly pushing boundaries: a 3× crop keeps only about a ninth of the frame, so the 12-megapixel sensor leaves you with roughly 1.3 megapixels of real data. Crop by that much and you end up with, well…
Apple is doing some funny things to avoid pixelation. In the image above, it ‘fills in’ details in the Speed Limit sign’s text fairly well, keeping it legible. Other areas, however, are modified in seemingly bizarre ways, even making the suspension cables of the bridge disappear around lampposts for inexplicable reasons:
This is a frequent problem I run into with the 13 Pro: its complex, interwoven set of ‘smart’ software components don’t fit together quite right. The excellent-but-megapixel-limited Wide camera attempts to ‘hide’ the switch to the telephoto camera, creating a smudgy image. Intelligent software upscaling steps in to make the image look less pixel-y, which alters the look of the image significantly; it resembles a painting more than a photograph when blown up.
When switching lenses, I have to hope the camera will switch to the actual telephoto camera, but if the sensor can’t collect enough light, the image suffers from an ‘overprocessed’ look. The image is either an upscaled crop from the Wide camera, or a very heavily noise-reduced frame from the telephoto sensor.
I was a big fan of this shot I took with the telephoto camera:
I can see why: in this low-light scene, the image from the telephoto is getting heavily smoothed. I have to give major props to how much detail is both retained and enhanced, but the image does end up looking unnatural. Focusing on details makes the image look a bit too smudged:
Compare this to a pure (not in-camera processed) RAW image from the same telephoto camera, and you can see why Apple made this choice:
This is where visible noise enters the shot, something Apple’s camera fights hard against. I personally don’t mind it so much. In fact, I think in a dark scene, it adds to the texture, creating a more ‘realistic’ image than the smoothed-over rain shot.
In the end, this is a creative choice. Apple is doing true magic with its processing; it gets very usable, detailed images out of a very small sensor with a lens that just can’t collect the light necessary for great handheld nighttime images. This is without even considering the wild processing that Night Mode enables. If you are a photographer, you should be aware of this processing, however — and make an informed decision if you want to use it or not.
I’ve made up my mind after a few months: For me, the iPhone’s processing on iPhone 13 Pro is simply too heavy-handed at times.
A confusing moment arose when iOS developer Mitch Cohen thought his iPhone camera had replaced a poor woman’s head in a photo with a mess of leaves:
This leaf face mystery has swept the world the last day, and @mitchcohen was nice enough to entertain my request to do a deep analysis on the files to see what could’ve happened. AI gone wrong? Censorship? Is this the blockchain at work? https://t.co/uheHdAsdl3
In the end, it turns out this was just a trick of the eye. But the processing on iPhone photos does make the leaves look smudged; the way edges are enhanced and smudging is applied over the shot to reduce noise makes it hard to really make sense of the image. This is a tell-tale iPhone 13 Pro photo: no noise, but a lot of ‘scrambled’ edges and an almost ‘painterly’ look to the whole thing:
I have taken to picking my processing deliberately. In the daylight, I will often shoot with our camera app set to native RAW. Shooting this way skips most of the aggressive noise reduction and detail enhancement that are omnipresent in iPhone 13 Pro photos. The slight grain is appealing to me, and I enjoy the slightly less ‘processed’ appearance. At night, I will opt for the magic that helps me get great shots in the dark: Night mode, ProRAW noise reduction and detail enhancement.
Being conscious of the magic in computational photography helps me appreciate it so much more.
Is the noise really that bothersome in these images?
As I mentioned earlier, the most important creative choice we will have to start making as photographers is the amount of processing our cameras apply to our images. If we can make such a choice, at least. Sometimes, the choice is made for you.
A final example of iPhone’s computational processing being hard to avoid:
With the iPhone 12 Pro, Apple introduced its own RAW format for shots: ProRAW. Previously, taking RAW photos meant you lost out on all of Apple’s magic and powerful image processing; photos often required lots of editing when shot in RAW vs. JPG. Images were noisy and lacked dynamic range; after all, in regular JPG shots the highlights and shadows were brought out by taking multiple exposures. Enabling ProRAW gives users a way to get this smart processing, with all the flexibility and image quality of a RAW file.
Unfortunately, it doesn’t let you *opt out* of some of this processing. My biggest issue is not being able to opt out of noise reduction, but on iPhone 13 Pro there’s a far bigger issue. Even if you enable ProRAW in the first party camera, switching lenses in Apple’s Camera app does not mean it will actually switch to the proper camera. Several shots I captured in telephoto mode resulted in a cropped ProRAW image coming from the wrong lens:
A ProRAW shot taken at 77mm. Or is it? It was actually an upscaled Wide shot, despite my efforts.
Yes, I am aware: This isn’t an iPhone-exclusive problem. The Google Pixel, too, opaquely switches between cameras when it deems it necessary, offering as little as a 0.3 megapixel crop of another camera even when shooting in “RAW.”
The only solution — outside of using an app like Halide — is to watch your viewfinder closely after you change your subject, to see if the camera has switched. This can take up to several seconds, unfortunately.
We find this crosses a border that computational photography shouldn’t cross in a professional context. RAW capture should be explicit, not a surprise. In a RAW capture format, the camera should honor user intent and creative choice. If the user picks a lens, even the ‘wrong’ lens, the software should use that lens. If the ‘Camera’ considers that to be a mistake, I don’t want it to stop me from taking the photo. I want to make my own mistakes. As they say: creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
I understand the first party camera isn’t meant for professionals, and you can still get great photos out of this camera. I still believe that Apple is in a hard place: the Camera app has to work for every single iPhone user, from the novice to the seasoned photographer with decades of experience.
There’s a paradox in the relationship between hardware and software: it’s easy to make smarter software with bad hardware. The decisions are black and white. But as the hardware gets more sophisticated, the decisions involve more shades of grey. How can science quantify an acceptable level of noise or motion blur, when so much of this depends on artistic intent of each photo? As the complexity of computational photography grows, so grows its dominion over our creative decisions, and I am increasingly finding myself at odds with the decisions it makes.
I think that as this trend continues, choosing what level of software processing to allow on your camera’s data is increasingly going to become the most important creative choice in photography.
Closing Thoughts
The iPhone 13 Pro is a big shift in iPhone photography. Not only are the cameras all upgraded in significant ways, but Apple’s adaptive, clever computational smarts have never been so powerful. They touch every aspect of the photographic experience, and you might be surprised to at times become aware of their power and limitations alike.
If you’re coming from an iPhone 12 and don’t take a lot of photos, you may not see a leap in quality, because so much is now determined by the existing processing which was already fantastic on the previous generation. However, the iPhone 13 Pro makes other leaps forward — with physical upgrades in the ultra-wide camera and 3× telephoto camera. Unfortunately, that magical processing sometimes works against the hardware.
If you’re a serious photographer, the iPhone 13 Pro is a brilliant camera — and you will only begin to scratch the surface of its potential with its built-in software and processing. When using apps like Halide or other third-party applications, the possibilities really begin to present themselves. Apple here has laid the foundation; much like with the LIDAR sensor that was added in the previous iPhone, the camera improvements here lay the groundwork for software to perform magic.
The only thing you need to add is your own creative vision.
All images in this review were shot on iPhone 13 Pro by Sebastiaan de With.
This time around we’re comparing the smallest flagships from the two largest smartphone manufacturers in the world. This is the Apple iPhone 16 Pro vs Samsung Galaxy S24 comparison. Granted, the iPhone 16 Pro is not the base model in the iPhone 16 series, but it is the smallest flagship in the series, aka the smallest ‘Pro’ iPhone 16 model. So, this comparison does make sense, as the Galaxy S24 is by far the smallest smartphone in the Galaxy S24 family.
With that being said, the iPhone 16 vs Galaxy S24 comparison is also on the way. The iPhone 16 Pro is notably more expensive than the Galaxy S24, so keep that in mind. We will first list the specifications of these two smartphones, and will then move to compare them across a number of different sections. We’ll compare the designs of the two phones, their displays, performance, battery, cameras, and audio output. Let’s get down to it.
Specs
Apple iPhone 16 Pro vs Samsung Galaxy S24, respectively
The iPhone 16 Pro is made out of titanium and glass. On the flip side, the Galaxy S24 utilizes aluminum and glass. Both smartphones have flat sides all around, which are curved towards the very edges. They both include flat front and back sides too, and have a similar curvature on the edges. Well, the iPhone 16 Pro is curved more in that area, but neither phone is close to having sharp edges.
Apple’s handset has a pill-shaped cutout at the top of the display, the so-called Dynamic Island. Samsung’s device has a small display camera hole up there. Both devices do have very thin bezels around the display, which are also uniform. On the right-hand side of the iPhone 16 Pro you’ll find a power/lock key and the Camera Control button. On the left, the volume up and down buttons are located, along with an Action Button. The Galaxy S24, on the other hand, has the power/lock key on the right, along with the volume up and down buttons, and that’s it.
Both smartphones have three cameras on the back, but those setups look considerably different. The iPhone 16 Pro has its recognizable camera island in the top-left corner. The Galaxy S24’s cameras protrude directly from the backplate and are vertically-aligned in the top-left corner. The iPhone 16 Pro does have a slightly bigger display, and it’s taller and wider than the Galaxy S24, while also being thicker and heavier. It’s over 30 grams heavier. Both smartphones offer an IP68 certification for water and dust resistance. They’re both quite slippery too, but very comfortable to hold.
Apple iPhone 16 Pro vs Samsung Galaxy S24: Display
The iPhone 16 Pro features a 6.3-inch 2622 x 1206 LTPO Super Retina XDR OLED display. That panel is flat, and it has a 120Hz refresh rate. HDR10 content is supported, as is Dolby Vision. The maximum brightness here is set at 2,000 nits. The screen-to-body ratio is at around 90%, while the display aspect ratio is 19.5:9. The Ceramic Shield glass is placed on top of this phone’s display.
The Samsung Galaxy S24, on the flip side, has a 6.2-inch 2340 x 1080 Dynamic LTPO AMOLED 2X display. This display has a 120Hz refresh rate and supports HDR10+ content. It also offers a 2,600 nits peak brightness. The screen-to-body ratio is at around 90%, while the display aspect ratio is 19.5:9. The Gorilla Glass Victus 2 from Corning is protecting this phone’s display.
Both of these panels are really good. They’re quite vivid and more than sharp enough. They also have very good viewing angles, and the touch response is very good. These displays do not offer high-frequency PWM dimming, though, so keep that in mind. The blacks are deep on both, and both have a high refresh rate. The Galaxy S24 can technically get brighter, but in practice, the difference is not that big at all. They’re both bright enough.
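For reference, pixel density can be worked out from the resolutions and diagonals listed above; both panels land comfortably in ‘more than sharp enough’ territory.

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    # Pixels along the diagonal divided by the diagonal length in inches.
    return math.hypot(width_px, height_px) / diagonal_in

print(f"iPhone 16 Pro: ~{ppi(2622, 1206, 6.3):.0f} ppi")
print(f"Galaxy S24:    ~{ppi(2340, 1080, 6.2):.0f} ppi")
```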
Apple iPhone 16 Pro vs Samsung Galaxy S24: Performance
The Apple A18 Pro is a 3nm processor which fuels the iPhone 16 Pro. That is Apple’s most powerful chip. The company also included 8GB of RAM here, along with NVMe flash storage. The Galaxy S24 is fueled by the Snapdragon 8 Gen 3 (4nm) or Exynos 2400 (4nm) chips, depending on the market. We used the Snapdragon 8 Gen 3 model. Samsung also included 8GB of LPDDR5X RAM inside the phone, along with UFS 3.1 or UFS 4.0 flash storage. UFS 3.1 flash storage is included in the 128GB storage option only.
Having said that, both smartphones do offer really good performance. In regular, day-to-day tasks, they both perform great. They’re snappy whatever you’re doing, and the high refresh rate helps keep things looking really nice while you’re scrolling around. Getting either phone to slow down is not that easy. They can jump between apps without a problem and are great for browsing, messaging, emailing, multimedia consumption, image editing, video processing, and so on.
The iPhone 16 Pro technically has more prowess on the gaming side of things. It has a more powerful chip and GPU, but the Galaxy S24 keeps up in terms of performance. No matter what game you throw at these two phones, they’ll do a great job. They will get warm after a while, but neither phone will get visibly affected by that at all. Neither phone becomes too hot to hold either.
Apple iPhone 16 Pro vs Samsung Galaxy S24: Battery
The iPhone 16 Pro’s battery capacity has finally been revealed: the phone includes a 3,582mAh battery, a 9.4% larger pack than its predecessor’s. The Galaxy S24 includes a 4,000mAh battery pack. Apple’s iPhones usually have smaller battery packs than their Android counterparts. In this case the difference is not that big, and the iPhone 16 Pro offers better battery life regardless; it’s not even close.
The Galaxy S24 can even struggle to get to the 6-hour screen-on-time mark; it tends to be closer to 5-5.5 hours. The iPhone 16 Pro can go above and beyond that. The iPhone 15 Pro offered really good battery life, and the iPhone 16 Pro flies above that. Getting to the 7-hour screen-on-time mark on this phone does seem doable, but it will depend on a number of factors, of course. Your mileage may vary.
When it comes to charging, the iPhone 16 Pro supports 38W wired, 25W MagSafe wireless, 15W Qi2 wireless, 7.5W Qi wireless, and 5W reverse wired charging. The Galaxy S24 supports 25W wired, 15W wireless, and 4.5W reverse wireless charging. Do note that neither of these two smartphones ships with a charger in the retail box. You’ll have to buy one separately if you don’t already own one.
Apple iPhone 16 Pro vs Samsung Galaxy S24: Cameras
You’ll find three cameras on the back of both of these phones. The iPhone 16 Pro has a 48-megapixel main camera (1/1.28-inch camera sensor), a 48-megapixel ultrawide unit, and a 12-megapixel periscope telephoto camera (5x optical zoom). The Galaxy S24 includes a 50-megapixel main camera (1/1.56-inch camera sensor), a 12-megapixel ultrawide unit (120-degree FoV), and a 10-megapixel telephoto unit (3x optical zoom).
Both of these phones do a good job in the camera department, but the iPhone 16 Pro pulls ahead. It has a more capable main camera, and that shows in the final product. Both phones tend to provide images with warmer tones, but the ones from the iPhone 16 Pro have a better balance overall. The Galaxy S24 can overdo it with sharpening and saturation at times, and its photos also don’t look as well-rounded. The iPhone 16 Pro does tend to brighten the darker portions of images in HDR situations a bit too much, which makes those images look flatter than they should. They both do a very good job in low light, but once again, the iPhone 16 Pro is better most of the time.
The iPhone 16 Pro has a telephoto camera that offers more versatility in comparison, and the shots from it mostly look a bit better. Its ultrawide camera also tends to provide more detail than Samsung’s, but both do a good job of keeping the color profile similar to what their main shooters provide.
Audio
Stereo speakers are included on both smartphones, and they both offer good performance. The sound output is well-balanced, and not too sharp or anything. They’re both loud enough and similar in that regard.
There is no audio jack on either one of these two smartphones, though. You’ll need to use their Type-C ports if you want to hook up your wired headphones. Alternatively, Bluetooth 5.3 is on offer for wireless connectivity.
InfluxData on Wednesday unveiled new features for its InfluxDB 3.0 product suite aimed at speeding and simplifying time series data management at scale, including performance improvements and a new operational dashboard.
In addition, the vendor made generally available InfluxDB Clustered, a self-managed version of its database for on-premises and private cloud deployments first unveiled in September 2023.
Based in San Francisco, InfluxData is a time series database specialist and the creator and lead sponsor of InfluxDB, an open source database designed specifically to manage the data that enables time series analysis.
The vendor raised $81 million in financing in February 2023 to bring its total funding to more than $200 million. Two months later, InfluxData unveiled InfluxDB 3.0. The product suite includes InfluxDB Cloud Serverless and InfluxDB Cloud Dedicated, both of which are managed by InfluxData, and now InfluxDB Clustered as well for self-managed users.
One of the key upgrades in InfluxDB 3.0 was enabling unlimited cardinality, which refers to the uniqueness of the values in a database column — a high level of distinctness means the column has high cardinality.
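As a toy illustration of the term, here is a snippet that counts distinct values per column over a handful of made-up rows; ‘host’ ends up with higher cardinality than ‘region’.

```python
# Cardinality of a column = the number of distinct values it contains.
rows = [
    {"host": "edge-01", "region": "us-west",    "temp_c": 21.4},
    {"host": "edge-02", "region": "us-west",    "temp_c": 19.8},
    {"host": "edge-03", "region": "eu-central", "temp_c": 22.1},
    {"host": "edge-01", "region": "us-west",    "temp_c": 21.6},
]

for column in ("host", "region"):
    distinct = {row[column] for row in rows}
    print(f"{column}: cardinality {len(distinct)}")

# Fleets of millions of devices push tag cardinality far higher; handling that
# without slowing queries is what 'unlimited cardinality' refers to here.
```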
Other key upgrades included high throughput to enable users to ingest, transform and analyze hundreds of millions of time series data points per second, significantly faster real-time query response times, increased data compression to reduce storage costs and support for SQL to simplify analysis.
The new features add to those that initially comprised InfluxDB 3.0 and are aimed at helping InfluxData stand out in a competitive market, according to IDC analyst Carl Olofson. Other time series database specialists include Grafana and Prometheus, while tech giants AWS, Google, IBM and Microsoft are among others offering time series databases.
“The [keys] are size and speed,” Olofson said. “The time series field has, in recent years, become very competitive. InfluxData is clearly looking to stand out, realizing that as users develop more complex networks of data sources — including edge devices — the challenge of applying a single analysis against all that data is becoming overwhelming.”
New capabilities
Time series data is data that is time stamped so that an enterprise’s changes can be observed over time.
Meanwhile, just as more data sources are driving up the overall volume of data enterprises collect, the number of sources, and the resulting volume of data, that enable changes to be tracked over time are also rising.
In response, InfluxData and its peers have developed databases that specialize in managing time series data. Common characteristics of such databases include optimization for large-scale workloads, high-performance reading and writing capabilities to enable real-time analysis, processes for managing data lifecycles so that older data can be retained and found, and filters specific to time-based queries.
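For a sense of what a time-stamped point looks like, here is a sketch that formats a made-up reading in InfluxDB’s line protocol, where tags identify the series and fields carry the measured values. The measurement, tags, and values are invented for illustration.

```python
import time

def to_line_protocol(measurement: str, tags: dict, fields: dict, ts_ns: int) -> str:
    # Line protocol shape: measurement,tag_set field_set timestamp
    tag_str = ",".join(f"{k}={v}" for k, v in tags.items())
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

point = to_line_protocol(
    "machine_vibration",                   # hypothetical measurement
    {"factory": "fremont", "line": "a3"},  # hypothetical tags (the series identity)
    {"rms": 0.42, "peak": 1.7},            # hypothetical fields (the measured values)
    time.time_ns(),
)
print(point)
```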
InfluxDB 3.0’s initial launch represented a complete overhaul of the database’s underlying engine. Along with the new underlying engine, the release addressed and added some of those common characteristics such as high performance and capabilities to enable real-time analysis.
Now, the latest release of InfluxDB 3.0 is aimed at increasing the database engine’s performance as well as simplifying its use.
The update includes improved query concurrency and scaling to better handle high-cardinality data. In addition, InfluxDB 3.0 now has a new operational dashboard that provides visual insights into the performance and health of data clusters so that developers can address unintended workload changes, identify bottlenecks and optimize performance. A new single sign-on streamlines the log-in process. And new APIs have been added that let users automate certain repetitive tasks.
“High cardinality is the key here,” Olofson said. “You can do time series queries and analysis on much larger data sets with high performance than was possible before.”
Rachel Stephens, an analyst at RedMonk, similarly said that continuing to address cardinality is key for InfluxData.
She noted that time series databases have historically struggled with high cardinality use cases. InfluxDB 3.0’s initial release improved InfluxData’s handling of high-cardinality workloads, with the new release adding further performance.
“InfluxDB 3.0 potentially opens up new space in the market for the database to be a performant option in [high cardinality] situations,” Stephens said.
While the InfluxDB 3.0 update addresses performance, the launch of InfluxDB Clustered extends the database engine’s capabilities to more of the vendor’s users.
When InfluxDB 3.0 was first released, it was available only to users of InfluxDB Cloud Serverless and InfluxDB Cloud Dedicated, which are both fully managed database services. On-premises and private cloud users had only InfluxDB Enterprise, which was not built with InfluxDB 3.0’s engine, as an option.
InfluxDB Clustered essentially replaces InfluxDB Enterprise. Its significance, therefore, is that it provides on-premises and private cloud customers with the same capabilities as users of InfluxData’s fully managed databases, according to Stephens.
“InfluxDB Clustered is the successor product to InfluxDB Enterprise,” she said. “InfluxDB Clustered brings the columnar database engine to customers’ self-managed environments.”
The impetus for the InfluxDB 3.0 improvements and launch of InfluxDB Clustered came from InfluxData’s goal of providing developers tools that allow them to efficiently manage time series workloads at scale, according to Gary Fowler, the vendor’s vice president of products.
“As workloads continue to expand, developers need sophisticated systems that can handle large data sets without compromising performance,” he said. “InfluxDB 3.0 is engineered to meet these challenges head-on, offering the tools necessary to manage time series data at scale.”
In the future
With the full suite of InfluxDB 3.0 products now generally available, InfluxData’s roadmap is focused on continuing to add new features and functionality, according to Fowler.
Currently, Amazon Timestream for InfluxDB is based on a pre-InfluxDB 3.0 engine, which makes it an option for open source users with small, low cardinality workloads. Now, InfluxData is working to bring InfluxDB 3.0 to Amazon Timestream for InfluxDB along with other features not yet available to open source users.
“These enhancements will provide greater flexibility, performance and security for our users as they manage their time series data in the cloud,” Fowler said.
Eric Avidon is a senior news writer for TechTarget Editorial and a journalist with more than 25 years of experience. He covers analytics and data management.
TwitchCon San Diego is taking place this weekend and, as always, the platform had some news to share during the opening ceremony. For one thing, Twitch CEO Dan Clancy said the service will offer streamers and viewers who break the rules more clarity over why their accounts were suspended.
Soon, Twitch will share any chat excerpt that led to a suspension with the user in question via email and the appeals portal. Eventually, this will expand to clips, so streamers can see how they were deemed to have broken the rules on a livestream or VOD. “We want to give you this information so that you can see what you did, what policies were violated, and if you feel our decision was incorrect, you can appeal,” Twitch wrote in a blog post.
The service is also aware that permanent strikes on an account can pose a problem for long-time streamers who may eventually get banned for a smaller slip-up. To that end, Twitch is bringing in a strike expiration policy starting in early 2025. “Low-severity strikes will no longer put streamers’ livelihoods at risk, but we’ll still enforce the rules for major violations,” Twitch said. “Plus, we’re adding more transparency by showing you exactly what led to a strike.”
On the broadcasting front, viewers of streamers who are using Twitch’s Enhanced Broadcasting feature will be able to watch streams in 2K starting early next year. This option will be available in select regions at first, with Twitch planning to expand it elsewhere throughout 2025. Also of note, Clancy said that “we’re working on 4K.”
Also coming in 2025 is the option for those using Enhanced Broadcasting to stream vertical and landscape video at the same time. The idea here is to offer viewers an optimal experience depending on which device they’re using to watch streams.
Elsewhere, Twitch is planning some improvements to navigation in its overhauled mobile app, such as letting you access your Followed channels with a single swipe and prioritizing audio from the picture-in-picture player. Streamers will have access to a feature called Clip Carousel, which will highlight the best clips from their latest stream and make them easy to share on desktop and mobile. The platform says it’ll be easier for viewers to create clips on mobile devices too.
In addition, Twitch will roll out a shared chat option in the Stream Together feature next week, allowing up to six creators who are streaming together to combine their chats. Streamers’ mods will be able to moderate all of the messages in a shared chat and time out or ban anyone who crosses a line. Creators who hop on a Stream Together session can also turn off Shared Chat for their own community.
Last but not least, Twitch will expand its Unity Guilds and Creator Clubs. The idea behind both is to help streamers forge connections, learn from each other and grow with the help of Twitch staff. Over the last year, Twitch has opened up the Black Guild, Women’s Guild and Hispanic and Latin Guild, and it just announced a Pride Guild for the LGBTQIA+ community. All four guilds will expand to accept members from around the world next year.
Creator Clubs are a newer thing that Twitch debuted last month for the DJ and IRL categories. Twitch says that engagement has been higher than expected. Four more Creator Clubs are coming soon for the Artists/Makers, Music, VTubers and Coworking/Coding categories.
Adversarial attacks on machine learning (ML) models are growing in intensity, frequency and sophistication with more enterprises admitting they have experienced an AI-related security incident.
AI’s pervasive adoption is leading to a rapidly expanding threat surface that all enterprises struggle to keep up with. A recent Gartner survey on AI adoption shows that 73% of enterprises have hundreds or thousands of AI models deployed.
HiddenLayer’s earlier study found that 77% of the companies identified AI-related breaches, and the remaining companies were uncertain whether their AI models had been attacked. Two in five organizations had an AI privacy breach or security incident, of which one in four were malicious attacks.
A growing threat of adversarial attacks
With AI’s growing influence across industries, malicious attackers continue to sharpen their tradecraft to exploit ML models’ growing base of vulnerabilities as the variety and volume of threat surfaces expand.
Adversarial attacks on ML models look to exploit gaps by intentionally misleading the model with crafted inputs, corrupted data, jailbreak prompts, and malicious commands hidden in images loaded back into a model for analysis. Attackers fine-tune adversarial attacks to make models deliver false predictions and classifications, producing the wrong output.
VentureBeat contributor Ben Dickson explains how adversarial attacks work, the many forms they take and the history of research in this area.
Gartner also found that 41% of organizations reported experiencing some form of AI security incident, including adversarial attacks targeting ML models. Of those reported incidents, 60% were data compromises by an internal party, while 27% were malicious attacks on the organization’s AI infrastructure. Thirty percent of all AI cyberattacks will leverage training-data poisoning, AI model theft or adversarial samples to attack AI-powered systems.
Adversarial ML attacks on network security are growing
Disrupting entire networks with adversarial ML attacks is the stealth attack strategy nation-states are betting on to disrupt their adversaries’ infrastructure, which will have a cascading effect across supply chains. The 2024 Annual Threat Assessment of the U.S. Intelligence Community provides a sobering look at how important it is to protect networks from adversarial ML model attacks and why businesses need to consider better securing their private networks against adversarial ML attacks.
A recent study highlighted how the growing complexity of network environments demands more sophisticated ML techniques, creating new vulnerabilities for attackers to exploit. Researchers are seeing that the threat of adversarial attacks on ML in network security is reaching epidemic levels.
The quickly accelerating number of connected devices and the proliferation of data put enterprises into an arms race with malicious attackers, many financed by nation-states seeking to control global networks for political and financial gain. It’s no longer a question of if an organization will face an adversarial attack but when. The battle against adversarial attacks is ongoing, but organizations can gain the upper hand with the right strategies and tools.
Cisco, Cradlepoint (a subsidiary of Ericsson), Darktrace, Fortinet, Palo Alto Networks, and other leading cybersecurity vendors have deep expertise in AI and ML to detect network threats and protect network infrastructure. Each is taking a unique approach to solving this challenge. VentureBeat’s analysis of Cisco’s and Cradlepoint’s latest developments indicates how fast vendors are addressing this and other network and model security threats. Cisco’s recent acquisition of Robust Intelligence accentuates how important protecting ML models is to the network giant.
Understanding adversarial attacks
Adversarial attacks exploit weaknesses in the data’s integrity and the ML model’s robustness. According to NIST’s Artificial Intelligence Risk Management Framework, these attacks introduce vulnerabilities, exposing systems to adversarial exploitation.
There are several types of adversarial attacks:
Data Poisoning: Attackers introduce malicious data into a model’s training set to degrade performance or control predictions (a toy sketch of the idea appears after this list). According to a Gartner report from 2023, nearly 30% of AI-enabled organizations, particularly those in finance and healthcare, have experienced such attacks. Backdoor attacks embed specific triggers in training data, causing models to behave incorrectly when these triggers appear in real-world inputs. A 2023 MIT study highlights the growing risk of such attacks as AI adoption grows, making defense strategies such as adversarial training increasingly important.
Evasion Attacks: These attacks alter input data to cause mispredictions. Slight image distortions can fool models into misclassifying objects. A popular evasion method, the Fast Gradient Sign Method (FGSM), uses adversarial noise to trick models. Evasion attacks in the autonomous vehicle industry have caused safety concerns, with altered stop signs misinterpreted as yield signs. A 2019 study found that a small sticker on a stop sign misled a self-driving car into thinking it was a speed limit sign. Tencent’s Keen Security Lab used road stickers to trick a Tesla Model S’s autopilot system. These stickers steered the car into the wrong lane, showing how small, carefully crafted input changes can be dangerous. Adversarial attacks on critical systems like autonomous vehicles are real-world threats.
Model Inversion: Allows adversaries to infer sensitive data from a model’s outputs, posing significant risks when the model is trained on confidential data like health or financial records. Hackers query the model and use the responses to reverse-engineer training data. In 2023, Gartner warned, “The misuse of model inversion can lead to significant privacy violations, especially in healthcare and financial sectors, where adversaries can extract patient or customer information from AI systems.”
Model Stealing: Repeated API queries are used to replicate model functionality. The responses help the attacker build a surrogate model that behaves like the original. AI Security states, “AI models are often targeted through API queries to reverse-engineer their functionality, posing significant risks to proprietary systems, especially in sectors like finance, healthcare, and autonomous vehicles.” These attacks are increasing as AI adoption grows, raising concerns about intellectual property and trade secrets embedded in AI models.
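To make the FGSM idea above concrete, here is a minimal sketch of how an evasion perturbation can be generated. It assumes a PyTorch image classifier; `model`, `image` and `label` are placeholders rather than any vendor’s API.

```python
# Minimal FGSM evasion sketch (assumes a PyTorch classifier; "model",
# "image" and "label" are illustrative placeholders).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

The same handful of lines is all an attacker needs once they can compute (or approximate) gradients, which is why defenses cannot rely on the obscurity of the model alone.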
Recognizing the weak points in your AI systems
Securing ML models against adversarial attacks requires understanding the vulnerabilities in AI systems. Key areas of focus include:
Data Poisoning and Bias Attacks: Attackers target AI systems by injecting biased or malicious data, compromising model integrity. The healthcare, finance, manufacturing and autonomous vehicle industries have all experienced these attacks recently. The 2024 NIST report warns that weak data governance amplifies these risks. Gartner notes that adversarial training and robust data controls can boost AI resilience by up to 30%. Implementing secure data pipelines and constant validation is essential to protecting critical models.
Model Integrity and Adversarial Training: Without adversarial training, machine learning models are easier to manipulate. Adversarial training uses adversarial examples to significantly strengthen a model’s defenses. Researchers say adversarial training improves robustness but requires longer training times and may trade accuracy for resilience. Although imperfect, it is an essential defense against adversarial attacks. Researchers have also found that poor machine identity management in hybrid cloud environments increases the risk of adversarial attacks on machine learning models.
API Vulnerabilities: Public APIs are essential for delivering AI model outputs, which makes them prime targets for model-stealing and other adversarial attacks. Many businesses are susceptible to exploitation because they lack strong API security, as was noted at BlackHat 2022. Vendors, including Checkmarx and Traceable AI, are automating API discovery and blocking malicious bots to mitigate these risks; a simple query-rate check for spotting model-stealing behavior is sketched below. API security must be strengthened to preserve the integrity of AI models and safeguard sensitive data.
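As a rough illustration of the API-monitoring point above, the sketch below flags clients whose query volume against an inference endpoint looks like model-stealing behavior. The `QueryMonitor` class and its thresholds are hypothetical assumptions, not taken from any vendor’s product.

```python
# Illustrative per-client query-rate check for a public inference API.
# Thresholds and class names are assumptions, not a specific product's API.
from collections import defaultdict
from dataclasses import dataclass, field
import time

@dataclass
class QueryMonitor:
    max_queries_per_window: int = 1000
    window_seconds: int = 3600
    history: dict = field(default_factory=lambda: defaultdict(list))

    def record_and_check(self, client_id: str) -> bool:
        """Record a query; return True if the client looks suspicious."""
        now = time.time()
        self.history[client_id].append(now)
        # Keep only queries inside the sliding window.
        self.history[client_id] = [
            t for t in self.history[client_id] if now - t < self.window_seconds
        ]
        return len(self.history[client_id]) > self.max_queries_per_window
```

Real deployments combine this kind of rate analysis with authentication, bot detection and output perturbation, but even a basic sliding-window check makes bulk extraction noticeably harder.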
Best practices for securing ML models
Implementing the following best practices can significantly reduce the risks posed by adversarial attacks:
Robust Data Management and Model Management: NIST recommends strict data sanitization and filtering to prevent data poisoning in machine learning models. Avoiding malicious data integration requires regular governance reviews of third-party data sources. ML models must also be secured by tracking model versions, monitoring production performance and implementing automated, secured updates. BlackHat 2022 researchers stressed the need for continuous monitoring and updates to secure software supply chains and protect machine learning models. Organizations can improve AI system security and reliability through robust data and model management.
Adversarial Training: ML models are strengthened with adversarial examples created using the Fast Gradient Sign Method (FGSM). FGSM applies small perturbations to input data to increase model errors, helping models learn to recognize and resist attacks; a minimal training-loop sketch appears after this list. According to researchers, this method can increase model resilience by 30%. Researchers write that “adversarial training is one of the most effective methods for improving model robustness against sophisticated threats.”
Homomorphic Encryption and Secure Access: When safeguarding data in machine learning, particularly in sensitive fields like healthcare and finance, homomorphic encryption provides robust protection by enabling computations on encrypted data without exposure. EY states, “Homomorphic encryption is a game-changer for sectors that require high levels of privacy, as it allows secure data processing without compromising confidentiality.” Combining this with remote browser isolation further reduces attack surfaces, ensuring that managed and unmanaged devices are protected through secure access protocols.
API Security: Public-facing APIs must be secured to prevent model-stealing and protect sensitive data. BlackHat 2022 noted that cybercriminals increasingly use API vulnerabilities to breach enterprise tech stacks and software supply chains. AI-driven insights like network traffic anomaly analysis help detect vulnerabilities in real time and strengthen defenses. API security can reduce an organization’s attack surface and protect AI models from adversaries.
Regular Model Audits: Periodic audits are crucial for detecting vulnerabilities and addressing data drift in machine learning models. Regular testing for adversarial examples ensures models remain robust against evolving threats. Researchers note that “audits improve security and resilience in dynamic environments.” Gartner’s recent report on securing AI emphasizes that consistent governance reviews and monitoring data pipelines are essential for maintaining model integrity and preventing adversarial manipulation. These practices safeguard long-term security and adaptability.
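To show what FGSM-based adversarial training might look like in practice, here is a minimal training-loop sketch under assumed PyTorch conventions; `model`, `loader`, `optimizer` and `epsilon` are placeholders, and production pipelines add many refinements (attack schedules, stronger attacks such as PGD, accuracy monitoring).

```python
# Minimal adversarial training loop (assumed PyTorch conventions; all names
# and the epsilon value are illustrative placeholders).
import torch
import torch.nn.functional as F

def adversarial_train_epoch(model, loader, optimizer, epsilon=0.03):
    model.train()
    for images, labels in loader:
        # Generate FGSM examples for this batch.
        images_adv = images.clone().detach().requires_grad_(True)
        F.cross_entropy(model(images_adv), labels).backward()
        images_adv = (images_adv + epsilon * images_adv.grad.sign()).clamp(0, 1).detach()

        # Train on clean and adversarial inputs together so the model
        # keeps clean accuracy while learning to resist perturbations.
        optimizer.zero_grad()
        loss = (F.cross_entropy(model(images), labels)
                + F.cross_entropy(model(images_adv), labels))
        loss.backward()
        optimizer.step()
```

The trade-off mentioned above is visible here: every batch is effectively processed twice, which is where the longer training times come from.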
Technology solutions to secure ML models
Several technologies and techniques are proving effective in defending against adversarial attacks targeting machine learning models:
Differential privacy: This technique protects sensitive data by introducing noise into model outputs without appreciably lowering accuracy. The approach is particularly crucial for privacy-sensitive sectors such as healthcare. Microsoft and IBM, among other companies, use differential privacy to protect sensitive data in their AI systems; a minimal sketch follows this list.
AI-Powered Secure Access Service Edge (SASE): As enterprises increasingly consolidate networking and security, SASE solutions are gaining widespread adoption. Major vendors competing in this space include Cisco, Ericsson, Fortinet, Palo Alto Networks, VMware and Zscaler. These companies offer a range of capabilities to address the growing need for secure access in distributed and hybrid environments. With Gartner predicting that 80% of organizations will adopt SASE by 2025, this market is set to expand rapidly.
Ericsson distinguishes itself by integrating 5G-optimized SD-WAN and Zero Trust security, enhanced by its acquisition of Ericom. This combination enables Ericsson to deliver a cloud-based SASE solution tailored for hybrid workforces and IoT deployments. Its Ericsson NetCloud SASE platform provides AI-powered analytics and real-time threat detection at the network edge, and integrates Zero Trust Network Access (ZTNA), identity-based access control and encrypted traffic inspection. Ericsson’s cellular intelligence and telemetry data train AI models that aim to improve troubleshooting assistance. Its AIOps can automatically detect latency, isolate it to a cellular interface, determine that the root cause is a problem with the cellular signal, and then recommend remediation.
Federated Learning with Homomorphic Encryption: Federated learning allows decentralized ML training without sharing raw data, protecting privacy. Computing on encrypted data with homomorphic encryption keeps the data secure throughout the process. Google, IBM, Microsoft and Intel are developing these technologies, especially for healthcare and finance. Google and IBM use these methods to protect data during collaborative AI model training, while Intel uses hardware-accelerated encryption to secure federated learning environments. These innovations protect data privacy in secure, decentralized AI.
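As a simple illustration of the differential-privacy idea above, the sketch below adds Laplace noise to an aggregate count before it is released; the `epsilon` and `sensitivity` values are illustrative assumptions, not recommendations for any particular system.

```python
# Minimal differential-privacy sketch: Laplace noise on an aggregate count.
# epsilon (privacy budget) and sensitivity are illustrative placeholder values.
import numpy as np

def dp_count(values, epsilon=1.0, sensitivity=1.0):
    """Return a count with Laplace noise calibrated to sensitivity / epsilon."""
    true_count = len(values)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise
```

Smaller epsilon values add more noise and give stronger privacy guarantees at the cost of accuracy, which is the trade-off practitioners tune when deploying this technique.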
Defending against attacks
Given the potential severity of adversarial attacks, including data poisoning, model inversion and evasion, healthcare and finance are especially vulnerable, as both industries are favorite targets for attackers. By employing techniques including adversarial training, robust data management and secure API practices, organizations can significantly reduce the risks these attacks pose. AI-powered SASE, built with cellular-first optimization and AI-driven intelligence, has proven effective in defending against attacks on networks.