A Small Step, A Huge Leap

When the iPhone was first introduced, it packed in quite a few innovations. At a now-legendary keynote, Steve Jobs announced that Apple was launching a widescreen iPod, a phone and an internet communications device. These three devices were not separate new products, he revealed — they were all functions of a revolutionary new device: the iPhone.

“Are you getting it? These are not three separate devices…”

If introduced today, we’d find a conspicuous absence in that lineup: a groundbreaking new camera. It was not one of the devices he mentioned, and for good reason: The iPhone launched with a 2-megapixel, fixed-focus shooter. There was no pretense about this replacing your camera for taking photos.

Fast forward to today, and the main feature point of most smartphone launches is, of course, their cameras. Plural. In the time since the iPhone launched, it has gone through small steps and big leaps, from a single fixed-focus camera to an entire complex array of cameras.

With iPhone 14 Pro, Apple has made a huge change to its entire camera system — and yet, it seems many reviewers and publications cannot agree on whether it is a small step or a huge leap.


At Lux, we make a camera app. We have been taking thousands of iPhone photos every month since we launched our app five years ago, and we know a thing or two about smartphone cameras. This is our deep dive into iPhone 14 Pro — not the phone, the iPod, or the internet communicator — but the camera.

Changes

We previously took a look at the technical readout of the iPhone 14 Pro, which gave us a chance to examine the camera hardware changes. Our key takeaway was that there are indeed some major changes, even if just on paper. The rear camera bump is almost all-new: the iPhone 14 Pro gains larger sensors in its ultra-wide and main (wide) cameras, and Apple also promises leaps in image quality through improved software processing and special silicon.

However, what caught everyone’s eye first was the opener of Apple’s presentation of iPhone 14 Pro: a striking visual change. The iPhone’s recognizable screen cutout or ‘notch’ has all but disappeared, tucking its camera and sensor hardware into a small yet dynamic ‘island’. The user interface adapts around it, growing and shrinking with the screen cutout in an absolute feat of design. What’s more impressive to us, however, is the miniaturization of the large and complex array of sensors and cameras needed for Face ID.

Front-Facing

While the large cameras protruding ever-further from the rear of our iPhones capture our attention first — and will certainly get the most attention in this review, as well — the front-facing camera in the iPhone 14 Pro received one of the biggest upgrades in recent memory, with its sensor, lens and software processing all significantly overhauled.

While the sensor size of the front-facing camera isn’t massive (nor do we believe it has to be), upgrades in its lens and sensor allowed for some significant improvements in dynamic range, sharpness and quality. The actual jump between a previous-generation iPhone’s front camera and this new shooter is significant enough for most people to notice immediately. In our testing, the iPhone 14 Pro achieved far sharper shots with vastly — and we mean vastly — superior dynamic range and detail.


The previous cameras were simply not capable of delivering very high-quality images or video in challenging mixed light or with backlit subjects. We’re seeing some significant advancements made through better software processing (something Apple calls the Photonic Engine) and hardware.

While the sensor is larger and there is now variable focus (yes, you can use manual focus on the selfie camera with an app like Halide now!), you shouldn’t expect beautiful bokeh; the autofocus simply allows for much greater sharpness across the frame, with a slight background blur when your subject (no doubt a face) is close enough. Most of the time it’s subtle, and very nice.

Notably, the front-facing camera is able to focus quite close — which can result in some pleasing shallow depth of field between your close-up subject and the background:

Low light shots are far more usable, with less smudging apparent. Impressively, the TrueDepth sensor also retains incredibly precise depth data sensing, despite its much smaller package in the Dynamic Island cutout area.

This is one of the times where Apple glosses over a very significant technological leap that a hugely talented team no doubt worked hard on. The competing Android flagship phones have not followed Apple down the technological path of high-precision, infrared-based Face ID, as it requires large sensors that create screen cutouts.

The ‘notch’ had shrunk in the last generation of iPhones, but this far smaller array retains incredible depth sensing abilities that we haven’t seen another product even approximate. And while shipping a software feat — the Dynamic Island deserves all the hype it gets — Apple also shipped a significant camera upgrade that every user will notice in day-to-day usage.


Ultra-Wide

Time to break into that large camera bump on the rear of iPhone 14 Pro. The ultra-wide camera, introduced with iPhone 11 in 2019, has long played a background role to the main wide camera due to its smaller sensor and lack of sharpness.

Last year, Apple surprised us by giving the entire ultra-wide package a significant upgrade: a more complex lens design allowed for autofocus and extremely-close focus for macro shots, and a larger sensor collected more light and allowed for far more detailed shots.

We reviewed it as finally growing into its own: a ‘proper’ camera. While ultra-wide shots were impressively immersive, the iPhone 11 and 12 Pro’s ultra-wide shots were not sharp enough to capture important memories and moments. With iPhone 13 Pro, it was so thoroughly upgraded that it marked a big shift in quality. As a result, we were not expecting any significant changes to this camera in iPhone 14 Pro.

Color us surprised: with iPhone 14 Pro’s ultra-wide camera comes a much larger sensor, a new lens design and higher ISO sensitivity. While the aperture took a small step back, the larger sensor handily offsets this.

Ultra-wide lenses are notorious for being less than sharp, as they have to gather an incredible amount of image at dramatic angles. Lenses—being made of a transparent material—will refract light at an angle, causing the colors to separate. The wider the field of view of the lens, the more challenging it is to create a sharp image as a result.

The iPhone’s ultra-wide camera has a very wide lens. ‘Ultra-wide’ has always lived up to its name: with a 13mm full-frame equivalent focal length it almost approximates the human binocular field of view. GoPro and other action cameras have a similar field of view, allowing a very wide angle for capturing immersive video and shots in tight spaces. These cameras have not traditionally been renowned for optical quality, however, and we’ve not seen Apple challenge that norm. Is this year’s camera different?


Apple did not go into great detail about the camera hardware changes here, but thankfully TechInsights took the entire camera package apart. The new sensor, at 40 mm², is almost 50% larger than the iPhone 13 Pro’s 26.9 mm² sensor. While its aperture is slightly ‘slower’ (that is, smaller), the larger sensor compensates.
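If you want to sanity-check that ‘almost 50%’ figure, the arithmetic is simple; here is a minimal sketch. The sensor areas are the TechInsights numbers quoted above, while the f-numbers are left as placeholders rather than confirmed specs, since total light gathered scales roughly with sensor area divided by the square of the f-number.

```python
# Rough light-gathering comparison for the ultra-wide cameras.
# Sensor areas are the TechInsights figures quoted above; the f-numbers are
# deliberately left symbolic, since the exact apertures are not confirmed here.

def relative_light(area_mm2, f_number):
    """Total light gathered scales roughly with sensor area / f-number squared."""
    return area_mm2 / (f_number ** 2)

area_13_pro = 26.9   # mm^2, iPhone 13 Pro ultra-wide (TechInsights)
area_14_pro = 40.0   # mm^2, iPhone 14 Pro ultra-wide (TechInsights)

print(area_14_pro / area_13_pro)   # ~1.49, i.e. almost 50% more area

# With the real apertures f_old and f_new plugged in, the net change would be:
#   relative_light(area_14_pro, f_new) / relative_light(area_13_pro, f_old)
```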

How do the sensor and lens changes stack up in practice? The iPhone 14 Pro easily outperforms the previous iPhone, creating far sharper shots:

In multiple shots in various conditions, the shots were far more detailed and exhibited less visible processing than comparable iPhone 13 Pro shots:

This is a small center-crop of a larger ultra-wide shot, taken in Bhutan. iPhone 13 Pro on the left, iPhone 14 Pro on the right. iPhone 14 Pro exhibits far more detail, especially in darker areas. Both shots captured in ProRAW.

A nice side benefit of a larger sensor is a bit more shallow depth of field. At a 13mm full-frame equivalent, you really can’t expect too much subject separation, but this shot shows some nice blurry figures in the background.

Bhutan’s Himalayan roads are among the most dangerous in the world. This muddy traverse had me dodging rockfalls, ankle-deep mud, lots of rain and even a Himalayan Black Bear — but the iPhone 14 Pro camera fared very well, shrugging off a motorcycle crash that destroyed my camera lens. Image is about a ⅓ crop of a full ultra-wide shot.

I believe that both Apple’s new Photonic Engine and the larger sensor are contributing to a lot more detail in the frame, which makes a cropped image like the above shot possible. The ‘my point of view’ perspective that the ultra-wide provides puts you in the action, and a lack of detail makes every shot less immersive.

Is it all perfect? Well, compared with the ever-improving main camera, it is still fairly soft and lacking in detail. Its 12 megapixel resolution is suddenly feeling almost restrictive. Corners are still highly distorted and soft at times, despite excellent automatic processing from the system to prevent shots from looking too fisheye-like.

Low Light

One thing that we really wanted to test was the claimed low-light performance. Across the board, Apple is using larger sensors in the iPhone 14 Pro, but it also claims that its new Photonic Engine processing offers ‘over 2×’ greater performance in low light.

Looking at an ultra-wide image in low light, we are seeing… some improvements:


I shot various images in all sorts of lighting, from mixed daylight to evening and nighttime scenes, and found that the ultra-wide camera still does not perform amazingly well in low light. The shots are good, certainly, but the image processing is strong and apparent, getting sharp shots can be tricky, and noise reduction is very visible.

With Night Mode enabled, processing looked equally strong between iPhone 13 Pro and iPhone 14 Pro.

Those coming from the iPhone 11 Pro will recognize the classic ‘Night mode rendering’ in this image from the iPhone 14 Pro’s ultra-wide camera.

Compared to the iPhone 13 Pro, we’re struggling to see a 2× or 3× improvement.

I’d find it challenging either way to quantify an improvement as simply as ‘2×’ a camera’s previous performance. In terms of hardware, one can easily say a camera gathers twice as much light, but software processing is much more subjective. Perhaps to you, this is twice as good. For others, maybe not.

I’ll say, however, that it’s understandable: this is a tiny sensor, behind a tiny lens, trying its best in the dark. I wouldn’t expect fantastic low light shots. Since the ultra-wide camera does not gain a lens or sensor that is twice as fast or large, we can’t judge this as being a huge leap. It’s a solid improvement in daylight, however.

Overall I consider this camera a solid upgrade. A few iPhone generations ago, this would have been a fantastic main camera, even when cropped down to the field of view of the regular wide camera. It is a nice step up in quality.

A word about macro.

iPhone 13 Pro had a surprise up its sleeve last year, with a macro-capable ultra-wide camera. Focusing on something extremely close stresses a lens and sensor to the fullest, which makes it a great measure of how sharply a camera can capture images. The macro shots we took on iPhone 13 Pro were often impressive for a phone, but quite soft. Fine detail was not preserved well, especially when using Halide’s Neural Macro feature, which magnifies further beyond the built-in macro zoom level.

A regular macro shot on iPhone 14 Pro

Macro-heavy photographers will rejoice at seeing far more detail in their macro shots. This is where the larger sensor and better processing make their biggest leaps; it is a very big upgrade for those who like the tiny things.

Main

Since 2015, the iPhone has had a 12 megapixel main (or ‘wide’) camera. Well ahead of Apple’s event last month, there had been rumors that this long era — 7 years is a practical century in technology time — was about to come to an end.

Indeed, iPhone 14 Pro was announced with a 48 megapixel camera. Looking purely at the specs, it’s an impressive upgrade: the sensor is significantly larger in size, not just resolution. The lens is slightly slower (that is, its aperture isn’t quite as large as the previous year’s) but once again, the overall light-gathering ability of the iPhone’s main shooter improves by as much as 33%.

This kind of sensor size change is what we’d call a big deal. Image from TechInsights’ preliminary iPhone 14 Pro sensor analysis.

The implications are clear: more pixels for more resolution; more light gathered for better low light shots, and finally, a bigger sensor for improvements in all areas including more shallow depth of field.

There’s a reason we chase the dragon of larger sensors in cameras; they allow for all our favorite photography subjects: details, low light and night shots, and nice bokeh.

Quad Bayer

Your average digital camera sensor has a particularly interesting way to capture color. We’ve previously detailed this in our post about ProRAW, which you can read here, but we’ll quickly go over it again.


Essentially, a camera sensor contains tiny pixels that detect how much light comes in. For a dark area of an image, it registers darkness accurately, and vice versa for lighter areas. The trick comes in sensing color; to get that in a shot, sensors place red, blue and green color filters over these tiny pixels.

Every group of this mosaic of ‘sub-pixels’ can be combined with smart algorithms to form color information, yielding a color image. All but one of the iPhone 14 Pro’s sensors look like this:

Every ‘pixel’ is essentially composed of one blue, one red and two green subpixels. The iPhone 14 Pro’s main camera, however, puts four times as many pixels in that same space:

You might be wondering why, exactly, it would not squeeze in four times as many pixels in an equally ‘mosaicked’ pattern, instead opting for this four-by-four pattern known as ‘Quad Bayer’.

While we can dedicate an entire post to the details of camera sensors, allow me to briefly explain the benefits.

Apple has a few main desires when it comes to its camera. Resolution was not the only reason to move to a more advanced, larger and higher pixel-count sensor. Apple desires speed, low noise, light gathering ability, dynamic range (that is, detail in both shadow and light areas) and many more things from this sensor.


As we mentioned before, algorithms are required to ‘de-mosaic’ this color information. In terms of sheer speed, a Quad Bayer sensor can quickly combine these four ‘super subpixels’ of a single color into one, creating a regular 12 megapixel image. This is critical for, say, video, which needs to read lower-resolution images out of the camera at a very high rate. This also lowers noise; sampling the same part of the image with four different pixels and combining the result is a highly effective way to reduce random noise in the signal.
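To make that binning step concrete, here is a minimal sketch of averaging each 2×2 same-color block of a Quad Bayer mosaic into one pixel of a conventional Bayer mosaic. It illustrates the general technique only; Apple’s actual readout and demosaicing pipeline is not public, and the array sizes here are arbitrary.

```python
import numpy as np

# In a Quad Bayer mosaic, each 2x2 block of subpixels sits under one color
# filter, so averaging every 2x2 block yields a conventional Bayer mosaic at a
# quarter of the resolution (e.g. 48 MP -> 12 MP) while suppressing random
# noise. Illustrative sketch only -- not Apple's actual pipeline.

def bin_quad_bayer(raw: np.ndarray) -> np.ndarray:
    """Average each 2x2 same-color block into a single pixel."""
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0
    blocks = raw.reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

# Example: a fake 8x8 Quad Bayer frame bins down to a 4x4 Bayer frame.
raw = np.random.default_rng(0).integers(0, 4096, size=(8, 8)).astype(np.float64)
binned = bin_quad_bayer(raw)
print(raw.shape, "->", binned.shape)   # (8, 8) -> (4, 4)
```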

But the benefits don’t end there: Quad Bayer sensors are capable of using different sensitivities for dark and light values of the color they are sensing to achieve better detail in shadows and highlights. This is also known as ‘dynamic range’.

Imagine you are taking a shot of your friend in a dark tent with a bright sky behind them. Any camera would struggle to capture both the dark shadows on your friend and the bright sky beyond — small phone cameras most of all. Today, your phone tries to solve this by taking several darker and brighter images and merging the result in a process known as HDR (High Dynamic Range).

The Quad Bayer sensor allows the camera to get a head start on this process by capturing pixels within each group at different levels of brightness, allowing for an instant HDR capture. This cuts down on the processing needed to correct for image-merging artifacts, like moving subjects, which have always been tricky for modern smartphone photography pipelines that combine several photos to improve image quality.
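Here is a minimal sketch of that dual-sensitivity idea: read the same color group at two gains, keep the high-gain reading for clean shadows unless it clipped, and fall back to the low-gain reading for highlights. The gain ratio and clip level are made-up values, and this is a generic illustration rather than Apple’s actual merge.

```python
import numpy as np

# Generic dual-gain HDR merge: the low-gain reading preserves highlights,
# the high-gain reading preserves shadows, and the two are combined into one
# higher-dynamic-range value. Not Apple's actual algorithm; values are made up.

def merge_dual_gain(low_gain: np.ndarray, high_gain: np.ndarray,
                    gain_ratio: float = 4.0, clip: float = 4095.0) -> np.ndarray:
    """Prefer the high-gain (cleaner shadows) reading unless it clipped."""
    high_scaled = high_gain / gain_ratio     # bring both readings to one scale
    use_low = high_gain >= clip              # the high-gain pixel blew out
    return np.where(use_low, low_gain, high_scaled)

low = np.array([100.0, 3000.0])     # low-gain readings
high = np.array([400.0, 4095.0])    # high-gain readings (second one clipped)
print(merge_dual_gain(low, high))   # [ 100. 3000.]
```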

We do have to end this short dive into the magical benefits of the Quad Bayer sensor with a small caveat: when shot at 48 megapixels, its actual resolution is not as high as that of a ‘proper’ 48 megapixel sensor with a regular (or ‘Bayer’) mosaic arrangement. On paper, its greatest benefits come from merging its 48-megapixel, four-up mosaic into 12 megapixel shots.


Thus, I was really expecting to see nice, but overall unimpressive 48 megapixel shots — something that seemed additionally likely on iPhone 14 Pro, as Apple chose to keep the camera at 12 megapixels almost all the time, only enabling 48 megapixel capture when users go into Settings to enable ProRAW capture.

But let’s say you enable that little toggle…

48 Delicious Megapixels

I shoot a lot of iPhone photos. In the last five years, I’ve taken a bit over 120,000 photos — averaging at least 10,000 RAW shots per iPhone model. I like to take some time — a few weeks, at least — to review a new iPhone camera and go beyond a first impression.

I took the iPhone 14 Pro on a trip around San Francisco and Northern California, to the remote Himalayas and mountains of the Kingdom of Bhutan, and Tokyo — to test every aspect of its image-making, and I have to say that I was pretty blown away by the results of the main camera.


While arguably a Quad Bayer sensor should not give true 48-megapixel resolution as one might get from, say, a comparable ‘proper’ digital camera, the results out of the iPhone 14 Pro gave me chills. I have simply never gotten image quality like this out of a phone. There’s more here than just resolution; the way the new 48 megapixel sensor renders the image is unique, and tremendously different from what I’ve seen before.

These photos were all shot in Apple’s ProRAW format and further developed in Lightroom or other image editors like Apple’s own Photos app. I was very impressed with the latitude of these files; I tend to expose my image for the highlights, shooting intentionally dark to boost shadows later, and these files have plenty of shadow information to do just that:

Detail-wise, we can see quite easily that it’s an incredible leap over previous iPhones, including the iPhone 13 Pro:

It helps that Bhutan is essentially a camera test chart, with near-infinite levels of detail in their decorations to pixel-peep:

Next to the iPhone 13 Pro, the comparison seems downright unfair. Older shots simply don’t hold a candle to it. The shooting experience is drastically different: larger files and longer capture times do hurt the iPhone 14 Pro slightly — from a 4-second delay between captures in proper ProRAW 48 megapixel mode to the occasional multi-minute import to copy over the large files — but the results are incredible.

This is an entirely subjective opinion, but there’s something in these images. There’s a difference in rendering here that just captures light in a new way. I haven’t seen this in a phone before. I am not sure if I can really quantify it, or even describe it. It’s just… different.

Take this photo, which I quickly snapped in a moment on a high Himalayan mountain pass. While a comparable image can be seen in the overview of ultra-wide shots, the rendering and feel of this shot is entirely different. It has a rendering to it that is vastly different than the iPhone cameras that came before it.

Is this unique to those 48 megapixel shots? Perhaps: I did not find a tremendous boost in noise or image quality when shooting in 12 MP. I barely ever bothered to drop down to 12 MP, instead coping with the several-second delay when capturing ProRAW 48 MP images. But above all, I found a soul in the images from this new, 48-megapixel RAW mode that just made me elated. This is huge — and that’s not just the file size I am talking about. This camera can make beautiful photos, period, full stop. Photos that aren’t just good for an iPhone. Photos that are great.

And yes: 48 megapixel capture is slow. We’re talking up to 4 seconds of capture time slow. While slow, the 48 MP image rendering and quality is worth the speed tradeoff to me. For now, it is also worth noting that iPhone 14 Pro will capture 10-bit ProRAW files only — even third party apps cannot unlock the previous 12-bit ProRAW capture mode, nor is ‘regular’ native RAW available at that resolution.

Halide and other apps can, however, shoot 48 MP direct-to-JPG, which is significantly faster and gives you fantastically detailed shots.

I understand Apple’s choice to omit a ‘high resolution’ mode, as it would simply complicate the camera app needlessly. Users wouldn’t understand why key features of the iPhone camera, like Live Photos, stopped working, or why their iCloud storage would suddenly be full of four-times-larger files, and they would see little benefit unless zooming deeply into their shots.

But then… without that 48 megapixel resolution, I would’ve thought this butterfly in the middle of the frame was a speck of dust on my lens. And I wouldn’t get this wonderful, entirely different look. This isn’t just about the pixels but about what’s in the pixels — a whole image inside your image.

I think the 12 MP shooting default is a wise choice on Apple’s part, but it does mean that the giant leap in image quality on iPhone 14 Pro remains mostly hidden unless you choose to use a third party app to shoot 48 MP JPG / HEIC images or shoot in ProRAW and edit your photos later. Perhaps this is the first iPhone that truly puts the ‘Pro’ in ‘iPhone Pro’. If you know, you know.

Natural Bokeh

A larger sensor means better bokeh. It’s simply a fact: a larger sensor allows for a shallower depth of field. In my testing, I found that the iPhone 14 Pro produces pleasing, excellent natural bokeh on close-focus subjects:


Unfortunately, you will see this nice natural background blur in fewer situations. The minimum focus distance of the main camera has changed to 20 cm, or a little under eight inches — a few centimeters more than the iPhone 13 Pro and any preceding iPhone. In practice, this also means you see the camera switching to its ultra-wide, macro-capable camera much more often. More on that later.
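For the curious, here is a rough sketch of why a physically larger sensor yields more background blur at the same framing and f-number: it needs a longer real focal length, which means a larger physical aperture diameter and therefore a bigger blur disc. The focal lengths below are illustrative placeholders, not measured iPhone specs.

```python
# Why a bigger sensor blurs backgrounds more, in one rough calculation.
# For the same framing and the same f-number, a larger sensor needs a longer
# real focal length, so the physical aperture diameter (focal length / f-number)
# grows, and with it the background blur. Placeholder numbers for illustration.

def aperture_diameter_mm(real_focal_mm: float, f_number: float) -> float:
    return real_focal_mm / f_number

smaller_sensor = aperture_diameter_mm(real_focal_mm=5.7, f_number=1.8)
larger_sensor = aperture_diameter_mm(real_focal_mm=6.9, f_number=1.8)
print(smaller_sensor, larger_sensor)   # larger sensor -> larger aperture -> more blur
```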

24mm vs 26mm

If you’re an absolute nerd like me, you will notice that your main camera is capturing a wider image. That’s no accident: In re-engineering the lens and sensor package for this new camera, Apple decided to go with a 24mm (full frame equivalent) focal length rather than the old 26mm. It’s impossible to objectively judge this.

At times, the 24mm lens feels very wide, especially towards the edges where it can distort a bit more than previous iPhones. Overall distortion is extremely well controlled, though.

Myself? I’d rather have a tighter crop. I found myself often cropping my shots to capture what I wanted. It could be habit — I have certainly shot enough 26mm-equivalent shots on an iPhone — but I’d honestly love it if it were tighter rather than wider. Perhaps closer to a 35mm equivalent.

Regardless of your own feelings, to offset this wider image, Apple did add a whole new 48mm lens. Or did they?

On The Virtual 2× Lens

I will only add a fairly minor footnote about this in-between lens, as we will dedicate a future blog post to this interesting new option on iPhone 14 Pro.

Since the main camera now has megapixels to spare, it can essentially crop your shot down to the center of your image, resulting in a 12 megapixel, 48mm-equivalent (or 2×) shot. Apple has included this crop-of-the-main-sensor ‘lens’ as a regular 2× lens, and I think it’s kind of brilliant. The Verge wrote an entire love letter on this virtual lens, touting it as better than expected.
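The arithmetic behind that virtual lens is straightforward, and a quick sketch shows why the numbers work out to exactly 12 megapixels at a 48mm equivalent:

```python
# The arithmetic behind the 'virtual 2x lens': cropping to the central half
# of the frame in each dimension doubles the equivalent focal length and
# keeps a quarter of the pixels.

full_megapixels = 48
full_equiv_focal_mm = 24
crop_factor = 2

cropped_megapixels = full_megapixels / crop_factor ** 2    # 12.0
cropped_equiv_focal = full_equiv_focal_mm * crop_factor    # 48
print(cropped_megapixels, cropped_equiv_focal)             # 12.0 48
```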


I agree. It’s a great option, and it felt sorely needed. For most users, 3× is a fairly extreme zoom level. The 2× 48mm field of view is an excellent choice for portraits, and a smart default for the native camera’s Portrait mode. Overall, I found cropping my 48MP RAW files by hand better than taking 2× shots in practice. For everyday snapshots, however, it’s practically indispensable. This feels like laying the groundwork for a far longer, more extreme telephoto zoom in the iPhone future.

A note on LIDAR

While Apple did not mention or advertise it, the LIDAR sensor on iPhone 14 Pro has been slightly altered. It matches the new, wider 24mm focal length of the main camera in its dot coverage. Depth sensing seems identical in quality and speed compared to previous iPhones.

Frustratingly, this LIDAR sensor does not seem to improve on its predecessors’ behavior. While I am sure it aids autofocus speed and depth capture, it is still an exercise in frustration to attempt a shot through a window with the camera.

An attempt to take a quick shot through a glass elevator is foiled by the LIDAR seeing a solid wall. LIDAR uses near-infrared light to sense depth, which aids autofocus. It is easily fooled by airplane windows, water and other transparent surfaces.

As the LIDAR sees the glass pane as opaque — whether it is a room window, a glass pane, an airplane window, etc — it will focus the camera on it, rather than the background. I end up using manual focus in Halide, as there is no toggle to turn off LIDAR-assisted focus. I hope a future A17 chip has ML-assisted window detection.

Low Light

As I mentioned in our ultra-wide overview, I was somewhat skeptical of the claims of leaps in low-light performance. I still rarely shoot at night or in low light with phones, as Night mode is a nice trick but still produces overly processed images for my personal taste.


The low-light gains on the main camera are very noticeable, however. It’s particularly good in twilight-like conditions, before Night mode would kick in. I found the detail remarkable at times.

Night falls near Thrumshing La in Bhutan. This blue misty photo was taken after the sun had set, in conditions I’d consider terrible for cellphone photography. It came out crisp.
Detail on the 48 MP ProRAW is crisp, even with edits. Note that Night Mode drops the camera to 12 MP, with more obvious processing.

There are certainly great improvements being made here, and I can see that Apple has done a lot to make its processing of low-light shots less aggressive. It pays off to stick to 48 megapixels here, where noise reduction and other processing artifacts are simply less pronounced:

I found the camera to produce very natural and beautiful ProRAW shots in situations where I previously almost instinctively avoided shooting. There’s some mix happening here of the camera picking the sharpest frame automatically, processing RAW data with the new Photonic Engine and having a larger sensor that is making some real magic happen.

Finally, if you have the time and patience, you can go into nature and do some truly stunning captures of the night sky with Night mode.

While true astrophotography remains the domain of larger cameras with modified sensors, it is utterly insane that I did this by propping my iPhone against a rock for 30 seconds:

This is gorgeous. I love that I can do this on a phone — it kind of blows my mind. If you’d shown me this image a mere five years ago, I wouldn’t have believed it was an iPhone photo.

The flip side of this is that we have now had Night Mode on iPhone for 4 years, and it still sorely lacks advanced settings — or better yet, an API.

For an ostensible ‘pro camera’, the interface for Night Mode has to walk the fine line of being usable for novices as well as professional photographers. We’d love to add Night Mode to Halide, along with fine-grained controls. Please, Apple: allow apps to use Night Mode to capture images and adjust its capture parameters.


Telephoto

Last but very much not least: the telephoto camera. While I’ve said above that for most users, the 3× zoom factor (about a 77mm full-frame equivalent focal length) is a bit extreme, it is by far my favorite camera on the iPhone.

If I go out and shoot on my big cameras, I rock a 35mm and 75mm lens. 75mm is a beautiful focal length, forcing you to focus on little bits of visual poetry in the world around you. It is actually fun to find a good frame, as you have to truly choose what you want in your shot.

My grave disappointment with the introduction of the 3× lens last year was that it seemed coupled with an older, underpowered, small sensor. Small sensors and long lenses do not pair well: as a long lens gathers less light, the small sensor dooms it to be a noisy camera by nature. Noisy cameras on iPhones make smudgy shots, simply because they receive a lot of noise reduction to produce a ‘usable’ image.

Therefore, I was a bit disappointed to see no announcement of a sensor size or lens upgrade for iPhone 14 Pro’s telephoto camera. This part of the system, which so badly craved a bump in light collection ability, seemed to have been skipped over for another generation.

Color me extremely surprised, then, to use it and find it dramatically improved — possibly as much as the ultra-wide in practical, everyday use.


Compare this shot from the iPhone 13 Pro to the exact same conditions on iPhone 14 Pro:

iPhone 13 Pro left, iPhone 14 Pro right. There’s almost no comparing the two: in every shot, the iPhone 14 Pro packs in sharp, clear shots where the iPhone 13 Pro often produces far more muddled images at 3×.

While these two cameras ostensibly pack the same size sensor and exact same lens, the processing and image quality on the iPhone 14 Pro is simply leagues ahead. Detail and color are far superior. On paper, Apple has the exact same camera in these two phones — which means a lot of praise here has to go to their new Photonic Engine processing that seems to do a much nicer job at processing the images out of the telephoto camera and retaining detail.

At times, it truly blows you away:

A dark forest in the evening, a fast-moving motorcycle and a hastily taken photo — this wasn’t really the ideal domain for a 77mm-equivalent lens and yet, it pulled it off fantastically.

Low light is improved, with much better detail retention and seemingly a better ability to keep the image sharp, despite its long focal length and lack of light at night.

Having a long lens on hand is one of the Pro line of iPhones’ selling points, and if you value photography, you should appreciate it accordingly.

There’s certainly still times where it sadly misses, but overall, this is an incredible upgrade — seemingly entirely out of left field. Apple could’ve certainly advertised this more. I wouldn’t call this a small step. It has gone from a checkmark on a spec sheet to a genuinely impressive camera. Well done, Apple.

The Camera System

While this review has looked at individual cameras, the iPhone treats its camera array differently. The Camera app unifies the camera system as one ‘camera’: abstracting away the lens switching and particularities of every lens and sensor combination to work as one fluid, simple camera. This little miracle of engineering takes a lot of work behind the scenes; cameras are matched to each other with micron precision at the manufacturing stage, and lots of work is done to ensure white balance and rendering consistency between them all.


The biggest factor determining image quality for this unified camera for the vast majority of users is no longer the sensor size or lens, but the magic Apple pulls off behind the scenes to both unify these components and get higher quality images out of the data they produce. Previously, Apple had several terms for the computational photography they employed: Smart HDR and Deep Fusion were two of the processes they advertised. These processes, enabled by powerful chips, capture many images and rapidly merge them to achieve higher dynamic range (that is, more detail in the shadows and highlights), more accurate color, and more texture and detail, while also using artificial intelligence to separate out various areas of the image and apply enhancements and noise reduction selectively.

The result of all this is that any image from an iPhone comes out already highly edited compared to the RAW data that the camera actually produces.

Processing

If a camera’s lens and sensor are worth reviewing, then so is a modern phone camera’s processing. For years now, iPhones have merged many photos into one to improve dynamic range, detail and more — and this processing is different with every iPhone.

With iPhone 13 Pro, we started to see mainstream complaints about iPhone camera processing. That is, people were not complaining about the camera taking blurry images at night because of a lack of light, or about missed autofocus, but about images with odd ‘creative decisions’ taken by the camera.

Ever since our look at iPhone 8, we noted a ‘watercolor effect’ that rears its head in images when noise reduction is being applied. At times, this was mitigated by shooting in RAW (not ProRAW), but as the photography pipeline on iPhones has gotten increasingly complex, pure RAW images have deteriorated.


iPhone 13 Pro ranks alongside the iPhone XS in our iPhone camera lineup as one of the two iPhones with the heaviest, most noticeable ‘processed look’ in their shots. Whether this is because these cameras tend to take more photos at higher levels of noise to achieve their HDR and detail enhancement, or for other reasons, we cannot be certain, but their images simply come out looking more heavily ‘adjusted’ than any other iPhone’s:

The two selfies shot above on iPhone 13 Pro demonstrate an odd phenomenon: the first shot is simply a dark, noisy selfie taken in the dark. Nobody is expecting a great shot here. The second sees bizarre processing artifacts where image processing tried to salvage the dark shot, resulting in an absurd watercolor-like mess.

With this becoming such a critical component of the image, we’ve decided to make it a separate component to review in our article.

For those hoping that iPhone 14 Pro would do away with heavily processed shots, we have some rather bad news: it seems iPhone 14 Pro is, if anything, even more hands-on when it comes to taking creative decisions around selective edits based on subject matter, noise reduction and more.

This is not necessarily a bad thing.


You see, we liken poor ‘smart processing’ to the famous Clippy assistant from Microsoft Office’s early days. Clippy was a (now) comical disaster: in trying to be smart, it was overly invasive in its suggestions, misinterpreting inputs and outputting bad corrections. If your camera does this, you will get images unlike what you saw and what you wanted to capture.

Good processing, on the other hand, is magic. Take Night mode, the heaviest processing your iPhone can do, which achieves what you can easily see with the naked eye: a dimly lit scene. Tiny cameras with small lenses simply do not gather enough light to capture this in one exposure, so ‘stitching’ together many shots taken in quick succession, stabilizing them in 3D space, and even using machine learning to guess their color allows for a magical experience.
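For readers curious what that ‘stitching’ amounts to at its simplest, here is a toy sketch: align several short, noisy exposures and average them, so random noise shrinks while the static scene reinforces itself. Real night modes align in 3D, reject ghosts and weight frames; the simple integer shifts and noise levels below are invented purely for illustration.

```python
import numpy as np

# Toy multi-frame low-light merge: random noise drops roughly with the square
# root of the frame count while the static scene reinforces itself.
# Not Apple's implementation -- alignment here is a trivial known integer shift.

def stack_frames(frames: list[np.ndarray], shifts: list[tuple[int, int]]) -> np.ndarray:
    """Undo a known integer (dy, dx) shift per frame, then average."""
    aligned = [np.roll(f, (-dy, -dx), axis=(0, 1)) for f, (dy, dx) in zip(frames, shifts)]
    return np.mean(aligned, axis=0)

rng = np.random.default_rng(1)
scene = rng.uniform(0, 1, size=(64, 64))          # pretend static scene
frames, shifts = [], []
for _ in range(8):
    dy, dx = rng.integers(-2, 3, size=2)          # simulated hand shake
    noisy = np.roll(scene, (dy, dx), axis=(0, 1)) + rng.normal(0, 0.2, scene.shape)
    frames.append(noisy)
    shifts.append((int(dy), int(dx)))

merged = stack_frames(frames, shifts)
print(np.std(frames[0] - np.roll(scene, shifts[0], axis=(0, 1))),  # single-frame noise
      np.std(merged - scene))                                      # reduced after stacking
```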

iPhone 14 Pro vastly improves on iPhone 13 Pro in this department by having a far higher success rate in its processing decisions.

There’s still times where iPhone 14 Pro misses, but at a much lower rate than the iPhone 13 Pro and preceding iPhones. If you’re like me, and prefer to skip most of this processing, there’s some good news.

48MP Processing

Of note is that the 48 megapixel images coming out of the iPhone 14 Pro exhibit considerably less ‘heavy handed’ processing than the 12 megapixel shots. There are a number of reasons this could be happening: one is that, with four times the data to process, the system simply cannot apply as much processing as it can to a lower resolution shot.

The other likely reason you notice less processing is that it is a lot harder to spot over-sharpened edges and smudged small details in a higher resolution image. In our iPhone 12 Pro Max review, I noted that we are running into the limitations of the processing’s resolution; a higher resolution source image indeed seems to result in a more natural look. But one particular processing step being skipped is much more evident.


It appears 48 MP captures bypass one of my personal least favorite ‘adjustments’ that the iPhone camera now automatically applies to images with people in them. Starting with iPhone 13 Pro, I noticed occasional backlit or otherwise darkened subjects being ‘re-exposed’ with what I can only call a fairly crude automatic lightness adjustment. Others have noticed this, too — even on iPhone 14 Pro:

This is a ‘clever’ step in iPhone photography processing: since it can super-rapidly segment the image into components, like human subjects, it can apply selective adjustments. What matters is the degree of adjustment and its quality. I was a bit disappointed to find that this adjustment seems to be just as heavy-handed as on the previous iPhone: I have honestly never seen it make for a better photo. The result is simply jarring.
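In essence, that segmentation-driven ‘re-exposure’ boils down to brightening the image only where a subject mask is set; the jarring look comes from too strong a gain applied with too hard a mask edge. The sketch below is a generic illustration with made-up values, not Apple’s actual processing.

```python
import numpy as np

# A segmentation-driven 're-exposure' at its simplest: apply a brightness gain
# only inside a subject mask. Toy illustration with invented values.

def reexpose_subject(image: np.ndarray, subject_mask: np.ndarray,
                     gain: float = 1.8) -> np.ndarray:
    """Brighten pixels inside the subject mask (image values in 0..1)."""
    boosted = np.clip(image * gain, 0.0, 1.0)
    return np.where(subject_mask, boosted, image)

image = np.full((4, 4), 0.2)            # a uniformly dark frame
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                    # pretend the segmenter found a person here
print(reexpose_subject(image, mask))     # 0.36 inside the mask, 0.2 elsewhere
```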

A second, but rare artifact I have seen in the 48 megapixel ProRAW files (and to a lesser extent, 48 megapixel JPG or HEIC files shot with Halide) has been what seems like processing run entirely amok. At times, it might simply remove entire areas of detail in a shot, leaving outlines and some details perfectly intact.

Compare these shots, the first taken on iPhone 13 Pro and the second on an iPhone 14 Pro:

The second shot is far superior. Even setting aside the resolution, the detail retained in the beautiful sky gradient, the streets and the surroundings is plainly visible. There’s no comparison: the iPhone 14 Pro shot looks like I took it on a big camera.

Yet… something is amiss. If I isolate just the Shinjuku Washington Hotel and KDDI Building in the bottom left of the image, you can see what I mean:

iPhone 13 Pro on the left, iPhone 14 Pro on the right.

You can spot stunning detail in the street on the right image — utterly superior rendering until the road’s left side, at which point entire areas of the building go missing. The windows on the Washington Hotel are simply gone on the iPhone 14 Pro, despite the outlines of the hotel remaining perfectly sharp.

I haven’t been able to reproduce this issue reliably, but occasionally this bug slips into a capture and shows that processing can sometimes take on a life of its own — assuming the brush and drawing its own reality. Given that this is the first 48-megapixel camera from Apple, some bugs are to be expected, and it’s possible that we’ll see a fix for this in a future update.

Lens Switching

When we released Halide for the iPhone X, some users sent in a bug report. There was a clear problem: our app couldn’t focus nearly as close as Apple’s camera app could. The telephoto lens would flat-out refuse to focus on anything close. We never fixed this — despite it being even worse on newer iPhones. We have a good reason for that.

The reason for this is a little cheat Apple pulls. The telephoto lens actually can’t focus that close. You’d never know, because the built-in app quickly swaps in the regular main camera’s view instead, cropped to the same zoom as the telephoto lens. Apple actually does this in other situations too, like when there is insufficient light for the smaller sensor in the telephoto lens.
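The decision being made on your behalf looks roughly like the sketch below. The thresholds are entirely hypothetical, since Apple’s real heuristics are not public, but it illustrates why a ‘3×’ request can quietly be served by a crop of the 1× camera.

```python
# The kind of decision the system makes when it silently swaps lenses, sketched
# with made-up thresholds. Apple's real heuristics are unknown; this only
# illustrates why a '3x' shot can secretly be a crop of the 1x camera.

def pick_camera(subject_distance_m: float, scene_lux: float,
                tele_min_focus_m: float = 0.6, tele_min_lux: float = 20.0) -> str:
    """Return which camera actually serves a '3x' request (hypothetical logic)."""
    if subject_distance_m < tele_min_focus_m:
        return "main camera, cropped to 3x (subject too close for telephoto)"
    if scene_lux < tele_min_lux:
        return "main camera, cropped to 3x (too dark for the small telephoto sensor)"
    return "telephoto camera"

print(pick_camera(subject_distance_m=0.4, scene_lux=300))  # cropped main camera
print(pick_camera(subject_distance_m=3.0, scene_lux=300))  # real telephoto
```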

I believe that for most users, this kind of quick, automatic lens switching provides a seamless, excellent experience. Apple clearly has a vision for the camera: it is a single device that weaves all lenses and hardware on the rear into one simple camera the user interacts with.


This starts being a bit less magical and more frustrating for a more demanding user. Apple seems to have some level of awareness about this: the camera automatically switching to the macro-capable lens led to some frustration and confusion when announced, forcing Apple to add a setting to the Camera app that toggles this auto-switching behavior.

While that solves the macro-auto switch for more demanding users, the iPhone 14 Pro still has a bad habit of not switching to the telephoto lens in a timely fashion unless you use a third party camera app. This combines very poorly with the telephoto lens now having a 3× zoom factor rather than the previous 2×; the resulting cropped image is a smudgy, low-detail mess.

Even with ProRAW capture enabled, the camera app will still yield surprise-cropped shots that you assumed were telephoto camera captures.

On the left is an image captured while in ‘3×’ in Apple Camera with ProRAW enabled, with the right image taken a few seconds later. The left image is actually a crop of the 1× camera, despite the app claiming I was using the telephoto lens. It took several seconds for it to finally switch to the actual camera.

This problem is by no means limited to iPhone: even the latest and greatest Pixel 7 seems to have this behavioral issue, which shows that no matter how ‘smart’ computational photography gets, its cleverness will always be a source of frustration when it makes the wrong decisions.


Don’t get me wrong: I think most of these frustrations with the automatic choices that Apple makes for you are a photographer’s frustrations. Average users are almost always benefitting from automatic, mostly-great creative choices being made on their behalf. Third-party camera apps (like ours) can step in for most of these frustrations to allow for things like less processed images, 48 megapixel captures and more — but sadly, are also beholden to frustrating limitations, like a lack of Night mode.

I would encourage the tech press and reviewers to start weighing image processing and camera intelligence as heavily as a megapixel or lens aperture bump in their reviews; it affects the image a whole lot more than we give it credit for.

Conclusion

I typically review the new iPhones by looking at their general performance as a camera. It’s often disingenuous to compare an iPhone’s new camera to that of the iPhone that came a mere year before it; the small steps made can be more or less significant, but we forget that most people don’t upgrade their phones every year. What matters is how the camera truly performs when used.

This year is a bit different.


I am reminded of a post in which John Gruber writes about a 1991 Radio Shack ad (referencing this 2014 post by Steve Cichon). On this back-of-the-newspaper ad, Radio Shack advertised 15 interesting gadgets for sale. Of these 15 devices, 13 are now always in your pocket — including the still camera.

While this is true to an extent, the camera on a phone will simply never be as good as the large sensors and lenses of big cameras. This is a matter of physics. For most, it is approaching a level of ‘good enough’. What’s good enough? To truly be good enough, it has to capture reality to a certain standard.

One of the greatest limitations I ran into as a photographer with the iPhone was simply its reproduction of reality at scale. iPhone photos looked good on an iPhone screen; printed out, or seen on even an iPad-sized display, their lack of detail and resolution made them an obvious cellphone capture. Beyond that, it was a feeling; a certain, unmistakable rendering that just didn’t look right.

With iPhone 14 Pro, we’re entering the 48 megapixel era of iPhone photography. If it were just about detail and pixels, that would mark a mere step towards the cameras from the Radio Shack ad.


But what Apple has delivered in the iPhone 14 Pro is a camera that performs in all ways closer to a ‘proper’ camera than any phone ever has. At times, it can capture images that truly render unlike a phone camera — instead, they are what I would consider a real photo, not from a phone, but from a camera.

That’s a huge leap for all of us with an iPhone in our pocket.


Oracle keeps AI focus with database updates, new data lake


Oracle on Tuesday unveiled a spate of new capabilities for its HeatWave database aimed at better enabling customers to develop generative AI capabilities in Oracle Cloud Infrastructure.

New features — among many others — include batch processing for using large language models (LLMs) to respond to user queries and automatic vector store updating in HeatWave GenAI, the addition of bulk ingest capabilities to HeatWave MySQL, and the ability to store and process larger models in HeatWave AutoML.

Together, the new HeatWave features address critical needs as enterprise interest in developing AI models and applications, including generative AI, continues to increase, according to Holger Mueller, an analyst at Constellation Research.

In particular, improvements to vector search and storage are significant.



“This release is all about making it easier for developers to use vector capabilities inside HeatWave,” Mueller said. “Basically, Oracle needs to make sure that the data content in HeatWave is available and it is easy for developers to use the vector support. If [Oracle] succeeds, the future of HeatWave in the AI era is set.”

In addition to adding new HeatWave features, Oracle introduced new industry-specific applications for Oracle Fusion Data Intelligence, Intelligent Data Lake for Oracle Data Intelligence and Generative Development (GenDev), a new application development infrastructure for developing AI applications that combines tools in Oracle Database 23ai.

Each, like the new HeatWave features, focuses on better enabling customers to use AI as part of the decision-making process. Similarly, new integrations with Informatica and Microsoft Azure address generative AI development.

The new capabilities were revealed during Oracle CloudWorld, the vendor’s user conference in Las Vegas.


Based in Austin, Texas, Oracle is a tech giant that provides a broad spectrum of data management and analytics capabilities, including a variety of database options.

HeatWave GenAI was first launched in June, while recent platform updates include adding vector search to Oracle Database 23ai in May and the July unveiling of Exascale, a new architecture for the cloud that will become the Oracle Database infrastructure.

Heating up

HeatWave is a MySQL database that allows customers to query and analyze data within the database environment so that they don’t have to extract, transform and load data before using it to inform decisions.

Competing platforms include Amazon Redshift, Databricks, Google BigQuery, Snowflake and Teradata.


HeatWave GenAI is a feature within HeatWave and is designed to enable users to build AI models and applications using the data stored in the database. Capabilities included when the feature was initially launched were in-database LLMs, automated in-database vector storage, scalable vector search and HeatWave Chat, an AI-powered assistant that enables users to have natural language interactions with data.
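As a generic illustration of what an in-database vector store and similarity search do at their core (and explicitly not HeatWave’s actual API), documents are stored as embedding vectors and a query is answered by ranking them by cosine similarity:

```python
import numpy as np

# Generic illustration of a vector store's similarity search: documents are
# embedded as vectors and ranked by cosine similarity to a query vector.
# This does not use HeatWave's actual APIs; the vectors here are made up.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

document_vectors = {
    "doc_a": np.array([0.9, 0.1, 0.0]),
    "doc_b": np.array([0.1, 0.8, 0.3]),
    "doc_c": np.array([0.2, 0.2, 0.9]),
}
query_vector = np.array([0.8, 0.2, 0.1])

ranked = sorted(document_vectors.items(),
                key=lambda kv: cosine_similarity(query_vector, kv[1]),
                reverse=True)
print([name for name, _ in ranked])   # most similar document first
```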

LLM inference batch processing aims to help developers improve application throughput by executing multiple requests simultaneously, rather than just one at a time. Automatic vector store updating, meanwhile, provides AI application developers with the most current data available by automatically updating object storage.

More new HeatWave GenAI features include multilingual support so that similarity searches can be performed on documents in any of 27 languages when developing applications, support for optical character recognition so developers can include scanned content saved as images when training applications, and JavaScript support to more easily let users build AI chatbots.

Like Mueller, Shawn Rogers, an analyst at BARC US, noted that the new HeatWave GenAI features add significant value because they help simplify developing AI models and applications.


“Heatwave GenAI enables customers to de-risk AI-driven projects through a highly integrated service that removes much of the complexity surrounding creating AI applications,” he said. “Built-in LLMs and easy vector store creation help customers avoid do-it-yourself pitfalls without [requiring] extensive AI skill sets.”

In particular, automated vector store updating is a significant addition, Rogers continued, calling it “an excellent feature in Heatwave.”

Beyond HeatWave GenAI, Oracle updated numerous other HeatWave database features. Highlights include the updates to HeatWave Lakehouse and AutoML, according to Mueller.

New HeatWave Lakehouse capabilities include the ability to write results to object storage so that users can more easily and cost efficiently share and store query results. Also included is automatic change propagation to ensure that users always have access to the most up-to-date data.


New HeatWave AutoML features include increasing capacity so users can train larger machine learning models than was previously possible, data drift detection so developers can know when models need to be retrained, and topic modeling that enables users to more easily discover insights in their text data.

“HeatWave Lakehouse is critical,” Rogers said. “[It enables users] to combine HeatWave and lakehouse data, which is key because enterprises need to rely on lakehouses for insights, and even more with AI. And the HeatWave AutoML [update] is very important to keep down the cost of a more powerful — but therefore also more complex — database.”

In addition to new HeatWave capabilities, Oracle revealed that a free version of the database is now available in the Oracle Cloud Infrastructure (OCI) Always Free Service, enabling organizations to get started with the database by developing and running small applications at no cost.


Other new capabilities

Oracle’s HeatWave updates, many designed to better enable developers to build AI models and applications, are just one aspect of the tech giant’s push to improve the AI development experience for its customers.

Another development is its plan to develop and deliver Oracle Intelligent Data Lake as a foundational part of the Oracle Data Intelligence Platform.


Oracle expects Intelligent Data Lake to be available on a limited basis at some point in 2025. Once available, its aim will be to combine data orchestration, warehousing, analytics and AI in a unified environment powered by the OCI to more easily enable customers to use data from diverse sources.

Data is growing at an exponential rate. So is the complexity of data and the number of sources from which data is collected. Tools that address that volume and complexity with more advanced capabilities than those built to handle the lower data volumes and more simplistic data of the past are the appropriate next step for vendors such as Oracle, according to Rogers.

“The upcoming addition of Oracle Intelligent Data Lake is a logical step forward for the company,” he said. “Nearly all enterprise customers have a highly diverse data ecosystem, and the integration of Oracle’s data intelligence platform and OCI clearly provides additional flexibility and function. Customers optimizing their architecture to take advantage of AI will also benefit.”

Specific features of Oracle Intelligent Data Lake include generative AI-powered experiences to enable conversational data analysis and code generation, integration capabilities that enable users to combine structured and unstructured data, a data catalog, Apache Spark and Apache Flink for data processing and native integrations with other Oracle platforms, as well as with open source tools.


Like the pending development of Intelligent Data Lake, new AI-powered applications in Fusion Data Intelligence now in preview are aimed at helping Oracle customers derive greater value from their data.

Like many data platform vendors, including Databricks and Snowflake, Oracle has made it a priority to provide users with prebuilt applications specific to individual industries to streamline exploration and analysis.

Now, the tech giant plans to infuse Oracle Cloud Human Capital Management (HCM) and Oracle Cloud Supply Chain Management (SCM) with AI capabilities to further improve the time it takes to reach insights in what Rogers called a “meaningful way.”

A new tool in HCM called People Leader Workbench is designed to help organizations achieve business goals by adapting their talent strategy to changing business needs. Meanwhile, a new tool in SCM called the Supply Chain Command Center aims to provide recommendations that better enable organizations to quickly respond to changing supply, demand and market conditions.


“Many companies have long found the time gap between insight and action challenging,” Rogers said. “Fusion Data Intelligence … helps Oracle clients close that gap in a meaningful way. Intelligent AI-powered applications are critical for companies looking to deploy AI in business systems for faster, accurate and actionable insights.”

Finally, GenDev is intended to provide customers with a cohesive environment for generative AI application development by combining previously disparate tools in Oracle Database 23ai and adding new features.

Among the new features are support for more LLMs including integrations with Google Gemini and Anthropic Claude, improved retrieval-augmented generation (RAG) capabilities, access to Nvidia GPUs and synthetic data creation.

Next steps

With Oracle focusing intently on providing customers with the capabilities to develop and deploy generative AI models and applications, Mueller said it’s important that Oracle do so for not only customers deploying on Oracle’s own cloud, but also users of other clouds.


Many large enterprises use different clouds for different operations. In addition, they still keep some data on premises and in private clouds. Therefore, as Oracle scales out its generative AI development capabilities, it needs to do so for users of any cloud infrastructure.

“[Oracle needs to] make sure [deployment] is the same across Azure, Google and more clouds,” Mueller said. “[They need to] provide multi-cloud management tools, dig deeper in functionality. … Whatever the most popular use cases are, Oracle needs adoption.”

Rogers, meanwhile, suggested that Oracle needs to focus more on cost control and clear pricing.

Cloud computing costs were higher than many enterprises expected even before the surging interest in generative AI over the past two years. Now, vital functions such as vector search and storage, developing and running RAG pipelines and deploying LLMs are adding new workloads and their corresponding costs.


“Cost control and transparency must be at the forefront of Oracle’s strategy as it continues to add to and integrate its technologies with AI,” Rogers said. “Enabling a wider community of users to leverage AI will require simple cost controls to deliver value.”

Eric Avidon is a senior news writer for TechTarget Editorial and a journalist with more than 25 years of experience. He covers analytics and data management.


The next Like A Dragon game recasts a series regular as an amnesiac pirate


Ryu Ga Gotoku Studio simply cannot stop pumping out Like A Dragon (aka Yakuza) games. The studio and publisher Sega have revealed that the next entry will hit PS4, PS5, Xbox One, Xbox Series X/S and Steam on February 28, just 13 months after Like a Dragon: Infinite Wealth debuted. The latest spinoff has a typically kooky twist that’s not exactly kept secret by its title: Like a Dragon: Pirate Yakuza in Hawaii.

A trailer shown at the studio’s latest showcase features Goro Majima, a longtime series regular, explaining what’s been going on with him recently. About six months earlier, Majima washed up on an island near Hawaii with no memory of how he got there, only to be helped out by a child with a pet tiger cub. It didn’t take long until Majima ran afoul of some pirates and swiftly became a pirate captain himself.

Pirate Yakuza in Hawaii takes place a year after the events of Infinite Wealth and Ichiban Kasuga’s exploits in that game. You’ll assemble your crew, upgrade your ship, engage enemy vessels and discover hidden islands. Majima will have two fighting styles that you can switch between on the fly. Opt for the Mad Dog option to vex enemies with “speed, agility and flair,” and then switch to Sea Dog to dual wield short swords and “pirate tools,” according to a press release. However you slice it, Pirate Yakuza in Hawaii already looks like a whole lot of fun.

While February 28 isn’t too far away in the grand scheme of things, there are plenty of other Like A Dragon-related things to help keep you occupied in the meantime. A live-action TV show based on the series is on the way, and the franchise is also getting a new port of Yakuza Kiwami, a remake of the first game.

Technology

The metaverse didn’t die. It moved inside Microsoft Flight Simulator 2024 | Jorg Neumann interview

Published

on

The scale of the ambition of Microsoft Flight Simulator 2024 is pretty astounding. Made with 800 game developers over four years, the title has a seriously impressive set of numbers.

I got a big download of the ambition at a preview event at the Grand Canyon, where the game makers flew us over the canyon and compared it to the simulation. The flight sim of all flight sims comes out on November 19 on PC and Xbox Series X/S, and it will be available on Game Pass on day one.

One of the most interesting feats is that Microsoft shifted the game’s computing from your local PC to the cloud, said Jorg Neumann, head of Microsoft Flight Simulator, in an interview. The massive amounts of data are computed in internet-connected data centers and then streamed in real time to the user’s machine, where the simulation is visualized onscreen.

In the 2020 version, Microsoft had a hybrid structure that streamed data from the cloud and also used the local compute resources on the user’s own machine. That resulted in downloads to your PC of up to half a terabyte, far more than the 23 gigabytes for this year’s game.
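
To make that architectural shift concrete, here is a rough sketch of the general pattern a cloud-streaming client follows: instead of installing the whole dataset up front, the simulator requests only the terrain tiles around the aircraft and keeps a small, recently used set of them cached locally. This is a simplified illustration with a made-up tile server, not Microsoft's actual streaming code.

```python
from collections import OrderedDict

TILE_SERVER = "https://example.com/tiles"  # hypothetical endpoint, for illustration only

class TileCache:
    """Keep only the most recently used terrain tiles on the local machine."""

    def __init__(self, capacity: int = 256):
        self.capacity = capacity
        self.tiles = OrderedDict()  # (x, y, zoom) -> tile bytes

    def fetch(self, x: int, y: int, zoom: int) -> bytes:
        key = (x, y, zoom)
        if key in self.tiles:
            self.tiles.move_to_end(key)        # cache hit: mark as recently used
            return self.tiles[key]
        tile = self._download(key)             # cache miss: stream it from the cloud
        self.tiles[key] = tile
        if len(self.tiles) > self.capacity:    # evict the least recently used tile
            self.tiles.popitem(last=False)
        return tile

    def _download(self, key) -> bytes:
        # Stand-in for an HTTP request to the tile server.
        return f"tile data for {key} from {TILE_SERVER}".encode()

cache = TileCache(capacity=4)
for x in range(6):                 # flying east: older tiles fall out of the cache
    cache.fetch(x, y=10, zoom=14)
print(len(cache.tiles))            # 4 -- only the most recent tiles stay on disk
```

The practical upshot is the smaller install Neumann describes: the heavy data stays in the data center, and only the terrain you actually fly over comes down the wire.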

Microsoft Flight Simulator 2024 is also bringing massive enhancements to the simulated Earth by increasing the detail of its virtual environment by a factor of 4,000. The team built a “digital twin” of the Earth, much like would-be metaverse companies want to do. But this world was built with realistic physics and a huge level of accuracy. It has systems for all things that can affect flight, from ground activity to extreme weather, fuel and cargo, and turbulence. The hot air balloons in the game are simulated across 6,400 surfaces giving a realistic reaction to heat density — when you turn on the heater, the air will heat up, and it’s going to inflate the massive balloon.
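
To give a rough sense of the physics being modeled, a balloon’s lift comes from the density difference between the cooler outside air and the heated air inside the envelope, which you can estimate from the ideal gas law. The numbers below are generic textbook values for illustration, not figures pulled from the game’s 6,400-surface model.

```python
# Estimate net lift of a hot air balloon from the ideal gas law: rho = P * M / (R * T).
P = 101_325        # sea-level air pressure, Pa
M = 0.0289644      # molar mass of dry air, kg/mol
R = 8.314          # universal gas constant, J/(mol*K)

def air_density(temp_kelvin: float) -> float:
    return P * M / (R * temp_kelvin)

envelope_volume = 2_800.0              # m^3, roughly a typical sport balloon
rho_outside = air_density(288.15)      # ambient air at about 15 C
rho_inside = air_density(373.15)       # heated air at about 100 C

lift_kg = (rho_outside - rho_inside) * envelope_volume
print(f"Net lift: about {lift_kg:.0f} kg")   # roughly 780 kg
```

Turning up the burner raises the inside temperature, lowers the inside density and increases lift, which is exactly the cause-and-effect behavior Neumann describes.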

The Earth in the flight simulation is really as close to a digital twin of the real planet as has ever been built, Neumann said. I heard a lot about digital twins from Nvidia — it supplies the chips to run simulations that let BMW build a digital twin factory to perfect the design before it builds the factory in real life. And Nvidia ambitiously is building Earth-2, a simulation of the entire world so accurate that it may one day be used to predict climate change for decades to come.

Overhyped and then hated, the metaverse went into hiding, and now it’s lurking inside digital twins like BMW’s factories and Microsoft Flight Simulator 2024. In fact, Neumann said the company got a lot of the data for the photogrammetry of its planet-sized simulation from other enterprises that are digitizing the Earth.

Enhanced digital elevation maps use more than 100,000 square kilometers of countryside photogrammetry to enable visually stunning digital twin experiences. More than 150 airports, 2,000 glider airports, 10,000 heliports, 2,000 points of interest, and 900 oil rigs have been carefully hand-crafted, while a procedural system generates all 40,000 airports, 80,000 helipads, 1.5 billion buildings, and nearly 3 trillion trees on our planet.

Since the game journalists outnumbered the flight sim leaders, I paired up with Samuel Stone of Den of Geek to talk with Neumann. Here’s an edited transcript of our interview.

Jorg Neumann is head of Microsoft Flight Simulator.

GamesBeat: Did this have to start a long time ago in order to get that plane in the game and plan this whole event?

Jorg Neumann: You mean the real-world thing? No, actually not. The CEO of Cirrus, his name is Zean Nielsen. I call him an innovator. He wants to revolutionize how planes are perceived. Most people think of planes as scary things. They’re too far away from their lives. When you look at Cirrus’s commercials, as you’re driving up to an airplane–have you seen these things? Mom and Dad come out, a boy and a girl, and a dog. Then it says, “Here’s your weekend getaway private jet.” Okay, cool? The tone is a very playful, friendly tone. He’s a big believer in Flight Sim.

GamesBeat: I’m astounded that our pilot let Charlie take over.

Neumann: Because they want to show that it’s not scary. In very many ways, it’s like driving a car. It has all these security features. It’s super stable. You flew it. You saw it. It’s super reactive. You really feel in control.

He looks at the world of aviation through the lens of, we need to get more people comfortable with aviation. It has a lot to do with history, specifically in this country. Aviation was a family tradition. Often it was people from the Greatest Generation coming back from the war, becoming crop dusters and things like that. Having private planes, getting their grandkids into private planes, that sparked them to become pilots. That’s fading a little. Getting people back into the dream of aviation and flying is their thing.

You can fly an F/A-18e Super Hornet in the Grand Canyon in Microsoft Flight Simulator.

I get phone calls from literally every manufacturer on the planet. “You have to help us with recruitment. There aren’t enough pilots.” The commercial aviation space is lacking 800,000 pilots. We know that. There’s not enough transport pilots, not enough passenger pilots. There’s a crisis coming because they’re all aging out. The Level D simulators cost $40 million. There are very few of them. They’re all looking for ways to get people into aviation faster. Then they look at us with 15 million people playing. The quality is good. This is the best recruiting tool ever. They support us however they can. Our relationship with the manufacturers, typically–if I ask them for something, they say, “How else can we help you?”

Samuel Stone: When Flight Sim came back in 2020 it came back bigger than ever, with all of that third-party aviation support. Taking all of that data, all of that feedback, how did that inform the direction you wanted to take 2024?

Neumann: It absolutely informed it. We almost completely reversed the typical way of making a game. Typically you sit there with a bunch of designers in the room and decide stuff. In this case we said, “What do people want? What are their problems? What are their needs?” Our design priorities came from the community. We have our own ideas. Nobody said, “Jorg, put giraffes in the game.” That’s a me thing. But all the serious fundamental stuff came from consumer needs. I feel great about that.

The whole process is healthier, I think. You can easily respond to people, because you already have common ground. They’ve told you what the problems are. We can propose solutions. They give us feedback on those solutions. As we implement we go through with what they actually need. I’ve been making games for 30 years. I’ve never done it this way, and it’s better. I’d never go back.

There are more than 900 oil rigs inside Microsoft Flight Simulator 2024.

GamesBeat: I was curious about how you came to embrace digital twins. Nvidia wants to build something to predict climate change in the years to come. They need meter-level accuracy of the earth in order to do that, so they have to build a digital twin of everything. They have their own purposes, but how did you become convinced that this would lead to a better game?

Neumann: The impetus for starting Flight Sim in the first place, back in 2016 when I kicked this off with Phil–I had worked on something called World Explorer on HoloLens. Nobody ever played that because HoloLens is really expensive. But the experience was great. We did Rome. There’s a digital twin of Rome. For that we needed photogrammetry of the city. You could land in the Colosseum and those sorts of things. I was also working on Machu Picchu. We didn’t have a scan for that. It’s complicated. Everything is rounded. A complicated space.

We got to a point where we got those places right. San Francisco was another one. We did about 12 places around the earth. The real impetus was, can we do this on a worldwide level? I remember getting the Seattle scan. I stuck it into the engine. We got a Cessna 172 from Flight Sim 10 and jammed it in. It felt great. I showed it to Phil. We flew over our offices in Redmond. He said, “Why are you showing me this video?” I said, “It’s not a video.” I turned the plane. Yep, it’s real. That showed us it was possible.

The next place we tried was actually the Grand Canyon. We had problems with the digital elevation map. There was popping with the shadows everywhere. The resolution wasn’t good enough. But the reason why I thought Flight Sim was the right vessel for that idea, at the core of it all Flight Sim was always a full representation of the earth. Even if it was just a rectangle and one tower representing Chicago, it wanted to be that.

For any kind of software, when you ask that question with a digital twin–it needs a purpose. A consumer need has to be fulfilled. We have a consumer need. Flight simmers want this. I’m building this digital twin for the flight simmers. Does that mean it’s limited to flight simming? No. But there’s always a need. Now that they can land a helicopter anywhere and walk around, we needed to make it look at least as good as a first-person shooter or something. How do we do that? Again, there’s a need that drives innovation forward.

You can pick a head and customize it in Microsoft Flight Simulator 2024.

GamesBeat: It’s interesting that you’re finding more accurate information than anyone else.

Neumann: We’re pretty relentless at it. When you have 15 million people playing something, that’s a pretty big motivator. We’ll just keep chipping at it.

Stone: Flight Sim isn’t a sprint. It’s a marathon. There’s so much post-release support and content. How is it–not just mapping out what it will be at launch, but what’s it like looking at the future and that post-launch support?

Neumann: We have most of the world stuff. I just got an email that said Tallinn and Riga are ready. I had to set that up two years ago. We had to get a bunch of permissions. We had to convince a flyer to go close to a war zone. It was complicated. Do I know when this stuff will ship? No. It’s the world. The world has its own clock. They don’t wait for Flight Sim. A lot of the release plan has to do with data availability.

Obviously we listen to the community. What does the community say? “Stop doing North America and Europe. What about Brazil?” We’ve been talking to the ATC controller over Sao Paulo for two years, because they control the airspace. If they don’t want to let you fly you won’t fly there. We convinced them. We showed them what we’re doing, why it’s good for society. At some point they say, “Okay, here are your permissions.” We flew Sao Paulo three or four months ago. It took a month, because it’s huge. Then we got the data. Now we have to process the data, edit the data. At some point we’ll do a world update for Brazil.

That’s how you can think about it. I can’t just snap my fingers and say, “Give me Asia!” I have to talk to a whole bunch of people. And I am talking to them, a shocking number of people who have nothing to do with gaming at all.

GamesBeat: Once you have satellite data, don’t you have all the data you need?

Microsoft Flight Simulator 2024 lets you fly all sorts of planes.

Neumann: The satellite data is a sort of middle ground. With the cities–think about an airplane that flies pretty low over the houses. You have every angle on every house as the airplane passes by. They fly in strips. This is an airplane that flies higher. You get fewer samples. What happens is, some of the back sides, especially depending on the time of day–the back sides aren’t lit well. We don’t have enough data to show what the back wall of something like this looks like. Satellites, given how high they are–I showed it earlier, the Kilimanjaro thing. Kilimanjaro is a nice shape for that. The moment you have overhangs, it’s not so good. Cities, you can’t do that at all.

That’s why I specifically mentioned Kazakhstan. There’s no way to get into Kazakhstan. Won’t happen for years and years. Too much geopolitical stuff in the way. But people might want to fly there. Flight Sim is free. Open skies. For that I’d go with satellite data. Sometimes you just need to find the right satellite. They fly in these weird patterns.

GamesBeat: So the default is satellite data, but then you fill that in with more detail.

Neumann: Exactly. The satellite data is not strong 3D data. There’s some 3D data, but it’s not very good.

Stone: You talked about consumer need and desire. One of the more ambitious things about 2024 is the addition of all these activities and the career mode. How did you find a balance between gamification and the grounded authenticity that Flight Sim is known for?

Neumann: It’s difficult. Honestly, I’ll wait for the judgment of the court, of the people. The people will be right. All we can do is engage. For example, I lived in Seattle. The Coast Guard is close. They called me and said, “Hey, Flight Sim is awesome. Can we deploy it in our stations? We want people to train up.” Why not? “If you want to do Coast Guard missions, let us know.” We took them on. Why not? Free help. Same with the big center for firefighting in Europe. They helped us.

Some other things we probably didn’t spend as much time on, like VIP. The Asobo guys know a pilot, a VIP pilot. All he does is fly business jets around for famous rich people. But is that all that different from flying alone? Not really. We spent most of our time on the very on-the-ground things. Agricultural aviation, those types of things. Did we get it perfect? I don’t know. We’ll see. And we’ll make it better. If we get feedback and see that we didn’t get it quite right, that’s okay. We’re here to learn.

Microsoft Flight Simulator 2024 simulates the African savannah because it can.

GamesBeat: The more you put things on the ground, is it conceivable that you could get help from Ubisoft’s developers making Paris, or Call of Duty making Washington D.C.?

Neumann: It goes the other way. I get called a fair bit now that we’ve merged with Activision Blizzard. I get a lot of phone calls from people who want to sim New York. We do have a brand new model of New York, it turns out. But a lot of games don’t really want real-world scale. They want a spatially optimized version. Otherwise it’s too big. It’s boring. You don’t want that. That’s where this particular team–we said, “Here’s our alpha model of New York.” Then they can take this section and that section and glue it all together. It’s up to them. We can give them the data.

In the countryside, we’re way ahead of everyone else. We have so many connections now. I used to do this by myself. The first two years it was just me doing this. That wasn’t a good solution. Now there are four guys doing nothing but talking to governments, geographical institutes, drilling companies. Anybody who does anything around the world, we try to get their data and fit it in there. It’s getting better all the time. It’s not perfect. There are still areas that are almost terra incognita, where we barely have something reasonable. All we can do is try hard. Go to Zimbabwe and try to get good data. But that’s the reality.

We need Jordan. If you saw Dune, that’s all Jordan. It’s pretty nice. I had a lot of fun working on that. There are awesome rock formations in that area. You can’t get that from the satellite data at all. It just looks like a pancake. I’m determined to fly planes over Jordan. I talked to Patrice Vermette, the creative director of the Dune movie. I met him in Budapest. We filmed a little vignette there, doing this whole thing with the ornithopter. I told him the story. I want to get this stuff. He says, “Okay, I know all the people in the Jordanian government.” Now I’m writing the emails. “Hey, I’m Jorg, I work on Flight Sim. I’d like to get this and this.” You have to engage with people. It’s just the way it is.

Stone: 2020, in addition to the PC, was released for Xbox Series X|S. Having taken that experience–I play Flight Sim with an Xbox controller. What is it like taking that and improving on that experience for 2024?

Neumann: First of all, I’d say Flight Sim was pretty good on Xbox. The key binding–you need to talk to David about this. The key binding thing, there are so many functions. Getting that right, it’s like sign language. It’s a totally different alphabet. He’s the spearhead on that. I use it, but I’m not the designer. He is. He knows everything. I’d encourage you to talk to him.

GamesBeat: Where did you get the confidence to conclude that all cloud computing would work this time, versus part local and part cloud like last time?

The flight sim models the cracks in the runway of the Belgrade airport.

Neumann: Sometimes you just have to believe. Even when 2020 came out–I said, “Hey, I found two petabytes of data.” People said, “Cool, and…?” “We’re gonna stream all that!” “Come again?” 2016, 2017, when we started, the internet ping time in, say, western Australia was horrible. There was no way you could stream this game. Then more and more data centers were built. As we were working on this product, they built data centers all over the place. That enabled the product. The infrastructure of the world caught up, and thank God they did all that. Otherwise, I don’t know.

That just continued. You’ve seen the data. Everyone reads tech news. You see the explosion of where this is all going. We got lucky. Sometimes it takes that.

GamesBeat: I remember the San Mateo Bridge by my house. Half of it popped in, and then the other half. Oh, there’s the rest of it, from the cloud.

Neumann: I believe in technology making human life better. I’ve grown up like this. I believe we’ll keep investing in making that better. It makes elements of our lives better. It has some downsides, no doubt. But I think this product will exist because of an overall need. I do think we’re a helpful product. I see what people are trying to do with it. Greenpeace uses this. Amnesty International uses this. Local governments trying to figure out how to make a train line disturb as few people as possible, they use our stuff. It has real world applications. In the right hands, it’s good for people. That makes me more proud than anything, that we’ve done something beyond just another game. It’s transcended, just a little bit.

I grew up with atlases and globes. I have daughters, and when I ask them what’s the biggest country in Africa, they say, “Africa?” They don’t even understand what the countries are. The hell? You’re smart. You’re well-educated. What the heck? But their curiosity about the world, the geography of the world–I’m from Germany. Geography is mandated. You spend 12 years in school with geography. You have to learn it. In America my daughters don’t ever need to study it. They know nothing about the planet. It’s weird to me.

Stone: The X factor for flight sim is that attention to detail, to authenticity. When you’re literally working with petabytes of data, how do you sift through that and focus on what matters for the experience? How do you narrow down and hone in and optimize the Flight Sim experience with everything that you work with?

I suppose this is possible in the metaverse.

Neumann: The petabytes of data mostly sit on the ground. You can take that as far as you want. People say, “Did you get everything you want?” No. The cut list is much longer than the stuff we actually used. I wanted to do butterfly collection, for what it’s worth. I wanted to have insects. You could have a net and go make a collection. Or collect seashells.

GamesBeat: I love the sheep herding with the helicopter.

Neumann: That’s awesome. Isn’t that cool? Now, is it critical for flight simming? No. It’s critical for the authenticity of the planet, the emotional connection you have with it. The way our brains work, it’s in layers. I don’t know where you’re from. I’m from Germany, though. Say you show me the Rock of Gibraltar. Do I have an emotional connection to it? Not really. I know about it. But say I visit it someday, and I find out that the rock is full of monkeys. If the monkeys aren’t in the Rock, it diminishes the emotional reaction you have. That’s why I would say it’s important to have monkeys. That’s what the Rock is.

That’s how I typically feel. Is it actually relevant to what we do? The butterfly collection, is that important to anything? It really isn’t, until you make it important. Then it’s very important to the people who like collecting butterflies. I get up in the morning and read what other people have written. I try to understand the underlying thoughts behind it. Then I try to tackle that.

GamesBeat: Do you believe in the butterfly effect?

Neumann: Right now, what’s more important: butterflies, or turbulence over the Atlantic? Turbulence over the Atlantic, no doubt. It affects flight. IATA, the organization that collects that stuff, the pilots call back and say, “I just ran into turbulence.” They have a database and a map. They said, “Jorg, you can have our map. Do you want to put it into Flight Sim?” I do. We just didn’t get to it. But then you’ll get, in real time, the right rumbles at the right altitude over the Atlantic. Is that critical to flight simming? No, but it’s real. If you want to be a trans-Atlantic pilot, you’ll run into this. People will appreciate it.

Maybe we’re already into diminishing returns, but I don’t think about it that way. I think we’ll keep trying to make this as real as it gets.

Disclosure: Microsoft paid my way to the Grand Canyon. Our coverage remains objective.


Technology

Health insurance startup Alan reaches $4.5B valuation with new $193M funding round

Published

on

Alan, the French insurance unicorn, just signed a multi-faceted deal with Belfius, one of the largest banks in Belgium, that includes a distribution partnership along with a significant financial investment in the startup.

Belfius is leading Alan’s Series F funding round of €173 million (around $193 million at current exchange rates). Some of Alan’s existing investors are participating once again, namely Ontario Teachers’ Pension Plan (via Teachers’ Venture Growth), Temasek, Coatue and Lakestar.

If you aren’t familiar with Alan, the company originally started with a health insurance product that complements the national healthcare system in France. French companies must provide health insurance to all their employees when they join.

Alan has optimized its core product as much as possible so that its user experience is much better than a legacy insurance provider’s. For instance, Alan has automated many parts of the claim management system. In some cases, you get a reimbursement in your bank account just a minute after leaving the doctor’s office.

Over time, the company added other health-related services, such as the ability to chat with doctors, order prescription glasses, and use preventive care content on mental health, back pain and more via its mobile app. More recently, the company has turned to AI to increase its productivity.

Earlier this year, Alan shared some metrics about the company’s performance. The company had said that over 500,000 people were covered by Alan’s insurance products, and it could reach profitability without raising another funding round.

But Alan said the partnership with Belfius was a good opportunity to grow its customer base in Belgium — the bank will offer the startup’s health insurance products to its own corporate and institutional clients, which represent millions of employees.

“This privileged partnership with Belfius, whose transformation over the past decade has been truly inspiring, opens the door to a new era for Alan in Belgium. Belfius’ investment will allow us to accelerate our development and expand our capacity to offer cutting-edge, accessible health products and services to a wide audience,” Alan’s co-founder and CEO, Jean-Charles Samuelian-Werve, said in a statement.

Since February, Alan has added another 150,000 customers, including at the Prime Minister’s office in France. It expects its annual recurring revenue to reach €450 million (around $500 million) this year.

However, Alan isn’t a typical software-as-a-service company, and most of its revenue is set aside to fulfill insurance claims. Still, one thing is for sure — the company’s growth doesn’t seem to be slowing down.

Technology

Elon Musk is navigating Brazil’s X ban — and flirting with its far right

Published

on

For more than two weeks, Brazilians have been without access to X. Brazil’s Supreme Court blocked the platform after Elon Musk failed to comply with court rulings. As X evades the ban and Musk’s companies work slowly toward a resolution, the real concern for many isn’t just the absence of social media. It’s Musk’s power play over the government as he backs Brazil’s far right.

X was banned on August 30th after months of back-and-forth between Musk and Supreme Court Justice Alexandre de Moraes. The conflict began in April when Musk publicized government requests for information and then removed all restrictions imposed on X profiles by Brazilian court orders. Moraes responded by including Musk in an investigation over organized political disinformation and subpoenaing X’s Brazilian legal representative. Musk abruptly shuttered the company’s local operations, leading Moraes to ban the platform for violating local laws.

Since then, negotiations between both sides have proceeded gradually. The Supreme Court announced a transfer of R$ 18.3 million from X and Starlink to the national treasury, indirectly paying a fine for not removing content. Moraes subsequently ordered the unblocking of both companies’ bank accounts. Musk has reportedly met with Vanessa Souza, a Brazilian specialist in cyber law, and he’s appointed a pair of attorneys to represent X in Brazil — leading Moraes to ask if X has reopened operations, which could eventually clear the way for a lifted ban.

But Musk’s public response has largely been confrontational. In the past couple of weeks, he has criticized the Brazilian Supreme Court’s decision as well as the president, claiming the ban violates free speech and sets a dangerous precedent. He’s rallied public support, primarily from far-right influencers and politicians.

And this week, some Brazilians briefly got access to X again. According to the Brazilian Association of Internet and Telecommunications Providers (ABRINT), X made a “significant” update early on September 18th, changing its network setup to use IP addresses linked to Cloudflare and routing around service providers’ blocks. ABRINT said the update put providers in a “delicate situation” while regulators attempted to get it blocked again. X officially called the restored access “inadvertent and temporary,” but Moraes levied extra fines against it for what he dubbed “willful, illegal and persistent” evasion, citing a Musk tweet that seemed to celebrate the move.

Musk’s defiance is part of a long flirtation with Brazil’s currently out-of-power far right. “He is not just an influencer of the far right, he is an activist,” says Camila Rocha, a researcher at the Brazilian Center of Analysis and Planning (CEBRAP) and a political scientist. “The collaboration, the harmony between what is happening in Brazil and what is on the networks, is huge.” Whatever happens next in the X–Brazil saga, Musk could claim it’s a win.

A court is potentially clearing the way for X to come back; in the short term, it’s evaded its ban

Luiz Augusto D’Urso, a lawyer specializing in digital law, describes X’s closing of its Brazilian office as a dramatic gesture that forced Moraes’ hand. “It’s important to note that the Supreme Court’s initial ruling was never to block the platform. Things escalated,” D’Urso says. “The last decision before the ban required the platform to appoint a legal representative in Brazil, which is a legal obligation. When Musk refused, the result was suspension.”

Musk wasted no time turning the issue into a political spectacle. On August 29th, he referred to Justice Moraes as “the tyrant, @Alexandre, dictator of Brazil” in a post about Starlink’s assets being frozen, saying “[Brazilian President] Lula is his lapdog.” Another post calls Moraes “a declared criminal of the worst kind, disguised as a judge.”

Brazil’s right wing has seized the moment, too, framing the X ban as a fight for freedom of speech. Musk has interacted with supporters of the far right using emoji of the Brazilian flag (in context, a symbol of the movement). He supported demonstrations on September 7th, or Brazilian Independence Day, by sharing Jair Bolsonaro-supporting profiles and calling on users to participate, and he posted a photo of himself alongside former President Bolsonaro.

Rocha notes that Musk’s support for Brazil’s far right has been obvious for years. The billionaire has become popular in parts of Brazil thanks to his Starlink satellite internet service, which operates across the country and particularly in the Amazon. Starlink also provides services to the Brazilian Armed Forces. 

This activism tallies with his support of right-wing politics globally, including elsewhere in Latin America. Musk has an ongoing friendly relationship with Argentinian President Javier Milei, with whom he’s agreed on “the importance of technological development for the progress of humanity.” Milei has supported Musk throughout the conflict with the Brazilian Supreme Court, accusing it of wanting to “prohibit the space where citizens exchange ideas freely.”

Musk has even (perhaps jokingly) suggested that “we’ll coup whoever we want” in Latin America, responding to an accusation that the US government intervened against Bolivian President Evo Morales to secure lithium supplies for Tesla.

In Brazil, Musk — who despite his public commitment to free speech has blocked content at the behest of conservative governments — stands to gain by resolutely supporting Bolsonaro’s far right. “He presents himself as a defender of freedom, but he is exclusively business-oriented and has no commitment to democracy,” says Sérgio Soares Braga, a researcher at the National Institute of Science and Technology in Digital Democracy (INCT.DD). The far right offers a clearer path to the “unregulated capitalism” Musk favors.

“He presents himself as a defender of freedom, but he is exclusively business-oriented”

But Musk’s resistance is also a direct fight over how and whether American tech (and particularly internet) companies can be regulated abroad. An open letter sent on September 17th, as translated by The Verge, called the ban part of an “evolving global conflict between digital corporations and those seeking to build a democratic, people-centered digital landscape focused on social and economic development.” It accused Musk of sabotaging “and operat[ing] against the public sector’s ability to create and maintain an independent digital agenda based on local values, needs and aspirations.” The letter was signed by more than 50 intellectuals, including economist Mariana Mazzucato and author Cory Doctorow.

“Musk wants to control a wide array of industries, from big tech to electric vehicles, which grants him significant economic power and geopolitical influence,” says Braga. But in Brazil, Braga argues, he’s overstepped his bounds. “He can’t abuse this power to interfere in a nation’s sovereignty.”

Musk is making sacrifices by keeping X offline. Competing social networks have reaped gains from the block — Bluesky, for instance, says it’s gained millions of new users largely from Brazil. “There are growing suspicions that Musk has ulterior motives,” says Rocha. “Why would he let X remain offline for so long? What does he stand to gain?”

One potential answer is that Musk doesn’t have much left to lose by shrinking Twitter’s base in Brazil. The platform has already reportedly lost at least 71 percent of its value since Musk acquired it, and it shows little sign of recovery. (By contrast, Musk’s Starlink eventually caved to demands that it block X, though it’s said it’s still pursuing legal action.) It’s more important to take a stand against Brazil’s policies — not out of idealism, but a pragmatic bid for more control.

But for D’Urso, Musk’s endgame is clear: he benefits either way. “If he backs down, he portrays himself as the man who stood up to the Supreme Court. If X remains banned, he becomes a martyr, claiming persecution. It’s a win-win situation for him.”

Technology

I’m not a horror fan, but I can’t wait to be paralyzed with fear after watching the Netflix trailer for Don’t Move

Published

on

A girl stands in the middle of a forest wearing running gear

Have you ever had a nightmare where you’re being chased by a killer and can’t move no matter how hard you try? Well, this horror story comes true in the trailer for new Netflix movie Don’t Move.

With legendary Evil Dead director Sam Raimi producing the upcoming thriller, there’s a high possibility that Don’t Move could become one of the best horror movies. The best streaming service released the new trailer as part of Netflix’s Geeked Week, and my horror-phobic self is ready to be frozen in fear just this once.
