

Andrew Ng: Unbiggen AI


Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.


Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.



The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?


Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.

When you say you want a foundation model for computer vision, what do you mean by that?

Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.

What needs to happen for someone to build a foundation model for video?


Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.

Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.


It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.


Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.

“In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”
—Andrew Ng, CEO & Founder, Landing AI

I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.

I expect they’re both convinced now.


Ng: I think so, yes.

Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”


How do you define data-centric AI, and why do you consider it a movement?


Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.
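As a rough illustration of that shift (not Landing AI’s actual code), a data-centric loop might look like the sketch below: the model architecture is held fixed while each round works on the data. The synthetic dataset and the improve_data placeholder are assumptions for illustration only.

```python
# Illustrative sketch of a data-centric iteration loop: the model is held
# fixed; each round, the data is cleaned or extended, then re-evaluated.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
flip = np.random.RandomState(0).choice(len(y), size=50, replace=False)
y[flip] = 1 - y[flip]                       # simulate inconsistently labeled examples

model = LogisticRegression(max_iter=1000)   # "the code" stays fixed

def improve_data(X, y):
    """Hypothetical placeholder: relabel flagged examples, enforce labeling
    conventions, or add targeted samples. Here it returns the data unchanged."""
    return X, y

for round_num in range(3):
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"round {round_num}: cross-validated accuracy = {score:.3f}")
    X, y = improve_data(X, y)               # iterate on the data, not the model
```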

When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline.

The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.

You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?


Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.

When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?

Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.

“Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”
—Andrew Ng


For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
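One hedged sketch of how such a tool might surface inconsistency, assuming a simple list of (image, label) annotations rather than Landing AI’s actual data format:

```python
# Group annotations by image and flag images whose annotators disagree.
# The file names and labels below are invented for illustration.
from collections import defaultdict

annotations = [
    ("img_001.png", "scratch"), ("img_001.png", "scratch"),
    ("img_002.png", "pit_mark"), ("img_002.png", "discoloration"),
    ("img_003.png", "dent"),    ("img_003.png", "dent"),
]

labels_by_image = defaultdict(set)
for image_id, label in annotations:
    labels_by_image[image_id].add(label)

inconsistent = [img for img, labels in labels_by_image.items() if len(labels) > 1]
print(inconsistent)   # ['img_002.png'] -- review and relabel these first
```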

Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?

Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.

One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.


When you talk about engineering the data, what do you mean exactly?

Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.

For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
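A minimal sketch of that kind of slice-based error analysis, with invented records standing in for real evaluation results:

```python
# Break evaluation results down by a metadata tag (background noise type)
# and report the error rate per slice; the weakest slice shows where
# collecting more data is likely to pay off.
from collections import Counter

records = [  # invented examples: (noise condition, transcription correct?)
    ("car", False), ("car", False), ("car", True),
    ("quiet", True), ("quiet", True), ("quiet", True),
    ("street", True), ("street", False),
]

totals, errors = Counter(), Counter()
for noise, correct in records:
    totals[noise] += 1
    errors[noise] += (not correct)

for noise in totals:
    rate = errors[noise] / totals[noise]
    print(f"{noise}: error rate {rate:.0%} over {totals[noise]} examples")
```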


What about using synthetic data? Is that often a good solution?

Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.

Do you mean that synthetic data would allow you to try the model on more data sets?

Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.
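As an illustration only (not Landing AI’s pipeline), targeted generation might look like the sketch below: expand just the category that error analysis flagged, leaving the rest of the dataset alone. make_variant stands in for a real synthetic-data generator.

```python
# Oversample only the weak "pit_mark" class identified through error analysis.
# Images are stand-in arrays; a real generator (rendering, GANs, diffusion)
# would replace make_variant().
import numpy as np

rng = np.random.default_rng(0)
dataset = [(rng.random((64, 64)), "scratch") for _ in range(40)]
dataset += [(rng.random((64, 64)), "pit_mark") for _ in range(5)]   # underperforming class

def make_variant(img):
    """Hypothetical generator: flip the image and add mild noise."""
    return np.fliplr(img) + rng.normal(0.0, 0.01, img.shape)

new_samples = [(make_variant(img), label) for img, label in dataset if label == "pit_mark"]
dataset += new_samples * 4   # grow only the targeted category

counts = {label: sum(1 for _, l in dataset if l == label) for label in ("scratch", "pit_mark")}
print(counts)   # {'scratch': 40, 'pit_mark': 25}
```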


“In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”
—Andrew Ng

Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first, such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.


To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?


Ng: When a customer approaches us, we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.

One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, and when and how to improve the labeling of data so the performance of the model improves. Our training and software support them all the way through deploying the trained model to an edge device in the factory.

How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?

Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.


In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?

So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.

Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.

Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?


Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.


This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”



Hide My Email is great for battling surveillance capitalism, not the FBI


Apple’s Hide My Email service lets users generate anonymous, randomized email addresses to help avoid spam, but it isn’t going to protect you from subpoenas — especially if you threaten the FBI directly.

Apple encryption and services can only protect you from so much

End-to-end encryption ensures that your data remains yours on-device and in transit. This applies to things like iMessage and Apple Health, especially when Advanced Data Protection is turned on.
However, that doesn’t mean Apple won’t comply with a subpoena when it is presented with one that fits the scope of the request. Hide My Email might help protect users from spam, but if you’re emailing threats to the FBI director’s girlfriend, there’s nothing to protect you.

Avatar Legends: The Fighting Game comes out in July and it looks pretty slick


Avatar fans, this one’s been a long time coming, and it finally has a release date. Announced in a new trailer at the Evo Awards on Saturday, Avatar Legends: The Fighting Game officially drops on July 2, 2026.

The game is coming to pretty much everything, including PS5, Xbox Series X/S, Switch (including Switch 2), and PC. It’ll launch with 12 playable characters, alongside multiple modes like Story, Arcade, Training, and full online multiplayer with ranked and casual play. As for what kind of game it is, think classic 2D fighter… but with bending.

Why does Avatar Legends look so promising?

Avatar Legends is a 1v1 fighting game built around elemental combat, letting players control fan-favorite characters from both Avatar: The Last Airbender and The Legend of Korra. It features hand-drawn 2D animation, which honestly looks straight out of the show, and a unique “Flow System” that focuses on movement, positioning, and expressive combat rather than just button mashing. There’s also a support character system, meaning fights aren’t just about your main pick. You can even tweak your playstyle with assist abilities and special moves.

However, the best part about this game is that it’s not just coasting on nostalgia. The devs are clearly targeting both casual players and fighting game enthusiasts, with features like rollback netcode and full cross-play, which are huge for competitive longevity. Add to that an original story mode and a planned roster expansion via DLC, and it feels like this could stick around for a while.


So… is this the Avatar game we’ve been waiting for?

Avatar Legends looks like it actually gets what makes the series click: fluid movement, expressive combat, and that signature bending chaos. Add in hand-drawn visuals, a solid 1v1 fighting system, and mechanics like the Flow System and support assists, and it’s shaping up to be more than just another licensed fighter.

And that’s the big deal here. This isn’t trying to reinvent the genre. Instead, it’s trying to belong in it, while staying true to Avatar’s identity. If everything clicks, this could easily become the go-to fighter for fans… and maybe even pull in players who’ve never watched the show.


Best 360 Cameras (2026): DJI, Insta360, GoPro


Top 5 360 Cameras Compared

Honorable Mentions


Insta360 X4 for $340: I’d recommend skipping this one unless you can get it on sale for under $300. The X4 Air is (usually) cheaper, smaller, and more capable, though the X4 does have a larger screen and better battery life (again, its video quality is not as good as the X4 Air’s). If you can find a killer deal under $300, the X4 is worth nabbing. Otherwise, stick with the X4 Air.


Qoocam 3 Ultra for $539: It’s not widely available, and we have not had a chance to try one, but Kandao’s Qoocam 3 Ultra is another 8K 360 camera that looks promising, at least on paper. The f/1.6 aperture is especially interesting, as most of the rest of these are in the f/2 and up range. We’ll update this guide when we’ve had a chance to test a Qoocam.

360 Cameras to Avoid

Insta360 One RS: Insta360’s interchangeable-lens action-camera/360-camera hybrid was a novel idea that just didn’t seem to catch on. Now it’s a bit dated. The video footage isn’t as good as the other cameras in this guide, but you can swap the lens and have an action camera in a moment, which is the major selling point. Ultimately, I’d say skip this one: get the X4 Air, and if you want to use it like a GoPro, just shoot in single-lens mode.

GoPro Max: You’ll still run across GoPro’s original Max sometimes, but again, there are better options.


Insta360 One X3: Insta360’s older X3 is not worth buying at this point.

Insta360 One RS 1-Inch 360 Edition: Although I still like and use this camera, it appears to have been discontinued, and there’s no replacement in sight. The X5 delivers better video quality in a lighter, less fragile body, but I will miss those 1-inch sensors that managed to pull a lot of detail, even if the footage did top out at 6K. These are still available used, but at outrageous prices. You’re better off with the X5.


Frequently Asked Questions

There are two reasons you’d want a 360-degree camera. The first is to shoot virtual reality content, where the final viewing is done on a 360 screen, e.g., VR headsets and the like. So far this is mostly the province of professionals who are shooting on very expensive 360 rigs not covered in this guide, though there is a growing body of amateur creators as well. If this is what you want to do, go for the highest-resolution camera you can get. Either of our top two picks will work.

For most of us, though, the main appeal of a 360 camera is to shoot everything around you and then edit or reframe down to the part of the scene you want to focus on, panning and tracking objects within the 360 footage, with the result being a typical, rectangular video that gets exported to the web. The video resolution and image quality will never match what you get from a high-end DSLR, but the DSLR might not be pointed at the right place, at the right time. The 360 camera doesn’t have to be pointed anywhere, it just has to be on.

This is the best use case for the cameras on this page, which primarily produce HD (1080p) or better video—but not 4K—when reframed. I expect to see 12K-capable consumer-level 360 cameras in the next year or two (which is what you need to reframe to 4K), but for now, these are the best cameras you can buy.


Whether you’re shooting virtual tours or your kid’s birthday, the basic premise of a 360 camera is the same. The fisheye lens (usually two very wide-angle lenses combined) captures the entire scene around you, ideally editing out the selfie stick if you’re using one. Once you’ve captured your 360-degree view, you can then edit or reframe that content down to something ready to upload to YouTube, TikTok, and other video-sharing sites.

Why Is High Resolution Important in 360 Cameras?

Camera makers have been pushing ever-higher video resolution for so long that it feels like a gimmick in many cases, but not with 360 cameras. Because the camera is capturing a huge field of view, the canvas, if you will, is very large. To get a conventional video from that footage you have to crop, which zooms in on the image, meaning your 8K 360 shot becomes just under 2.7K when you reframe that footage.
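The rough arithmetic behind that claim is sketched below: a reframed clip keeps only the slice of the equirectangular width covered by its field of view. The 8192-pixel "8K" width and the 120-degree crop are assumptions for illustration, not exact figures for any specific camera.

```python
# Approximate width of a reframed clip cut from a 360-degree capture.
def reframed_width(full_width_px: int, fov_degrees: float) -> float:
    return full_width_px * fov_degrees / 360.0

print(reframed_width(8192, 120))   # ~2731 px: just under "2.7K"
print(reframed_width(5760, 120))   # ~1920 px: a 5.7K capture reframes to roughly 1080p-wide
```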

How Does “Reframing” Work?


Reframing is the process of taking the huge, 360-degree view of the world that your camera captures and zooming in on just a part of it to tell your story. This makes the 360 footage fit traditional movie formats (like 16:9), but as noted above it means cropping your footage, so the higher the resolution you start with, the better your reframed video will look.

If you’re shooting for VR headsets or other immersive tools, then you don’t have to reframe anything.

I’ve been shooting with 360 cameras since Insta360 released the X2 back in 2020. Early 360 cameras were fun, but the video they produced wasn’t high enough resolution to fit with footage from other cameras, limiting their usefulness. Thankfully we’ve come a long way in the last five years. The 360 camera market has grown, and the footage these cameras produce is good enough to mix seamlessly with your action camera and even your high-end mirrorless camera footage.

To test 360 cameras I’ve broken the process down into different shooting scenarios, especially scenes with different lighting conditions, to see how each performs. No camera is perfect, so which one is right for you depends on what you’re shooting. I’ve paid special attention to the ease of use of each camera (360 cameras can be confusing for beginners), along with what kind of helpful extras each offers, like HDR modes and support for accessories.


The final element of the picture is the editing workflow and tools available for each camera. Since most people are shooting for social media, the raw 360 footage has to be edited before you post it anywhere. All the cameras above have software for mobile, Windows and macOS.


Stop holding out hope, Liquid Glass will be mandatory in iOS 27


The Liquid Glass design that rolled out with iOS 26 isn’t going anywhere, according to an account of an Apple Developer workshop.

Developers will be required to use Liquid Glass once Xcode 27 debuts.

With the debut of iOS 26 at WWDC 2025, Apple made significant alterations to the look and feel of the iPhone operating system. The fairly straightforward flat design, used from iOS 7 to iOS 18, was replaced with a more rounded, translucent aesthetic dubbed “Liquid Glass.”
Six months after launch, the new design language remains as divisive and controversial as ever, with developers in particular lacking adjustment options for Liquid Glass. Still, that doesn’t mean Liquid Glass will be abandoned anytime soon, and Apple has seemingly even said so outright.

Save big with the INIU Spring Sale


Spring is usually when plans start filling up again, from quick city breaks to longer outdoor trips, and it often highlights how quickly devices run out of power when you are away from a charger.

That is where INIU’s Spring Sale campaign becomes more compelling, with discounts applied across its portable charging range and the INIU Pocket Rocket P50 leading the offer, now reduced to £28.05 from £32.99 as its smallest and fastest everyday power solution.

A power bank built for moving around, not staying plugged in

The INIU Pocket Rocket P50 is designed around portability first, packing a 10,000mAh capacity into a form that is 45% more compact than standard models and small enough to slip into a pocket or lightweight travel bag.

Weighing around 160 grams, it is 63% lighter than the average power bank, which often feels bulky when you are already packing for a trip or commute. That makes it particularly useful for short trips, festivals, or long days out, where extra weight quickly becomes noticeable alongside other essentials.


Charging performance is another key part of the appeal, with 45W fast charging allowing compatible devices to reach a significant percentage of battery in under half an hour.

In practical terms, the INIU Pocket Rocket P50 can fully charge your phone an average of two times. This gives you more time actually using your phone without worrying about conserving battery life, whether you are navigating, taking photos, or staying connected while travelling.



The INIU Pocket Rocket P50 is now under £30


Spring savings that go beyond a single product

The INIU Spring Sale campaign runs across both the official store and Amazon, covering a wide range of portable charging products rather than focusing on just one device.


Across the lineup, you can get up to 30% off, with additional tiered discounts applied automatically at checkout, including $5 off orders over $50, $10 off over $80, and $20 off over $100.

That structure makes it easier to pick up multiple essentials at once, whether you are adding extra cables, upgrading to higher-capacity power banks, or simply building a more reliable everyday carry setup.

The campaign also lines up closely with how people actually use these products, leaning into travel, outdoor plans, and day-to-day movement rather than desk-bound charging or fixed setups.

Timing plays a role here too, with the INIU Official Store promotion running from March 20 to April 20, 2026, while the Amazon deals are available for a shorter window from March 25 to March 31, 2026, giving you a clear window to take advantage of the savings.


Why OpenAI really shut down Sora


OpenAI’s decision last week to shut down Sora, its AI video-generation tool, just six months after releasing it to the public raised immediate suspicions. The app had invited users to upload their own faces — so was this some kind of elaborate data grab? According to a new WSJ investigation, the real explanation is considerably more boring: Sora was a money pit that nobody was using, and keeping it alive was costing OpenAI the AI race.

So what happened? After a splashy launch, Sora’s worldwide user count peaked at around a million and then collapsed to fewer than 500,000. Meanwhile, the app was burning through roughly a million dollars a day — not because people loved it, but because video generation is extraordinarily expensive to run. Every user who dropped themselves into a fantastical chase scene was drawing down a finite supply of AI chips.

While a whole team inside OpenAI was focused on making Sora work, Anthropic was quietly winning over the software engineers and enterprises that drive revenue. Claude Code, in particular, was eating OpenAI’s lunch.

So CEO Sam Altman made the call: kill Sora, free up compute, and refocus. If you want to understand just how sudden this was, consider what happened to Disney, per the WSJ: the entertainment giant had committed $1 billion to the partnership, yet found out Sora was being shut down less than an hour before the public. The deal died with it.


YouTube CEO says the best YouTubers will ‘never leave their home’


YouTube CEO Neal Mohan recently insisted that he isn’t worried about Netflix and other streaming services luring away the service’s most popular creators.

Mohan’s comments came during a long interview with The New York Times series The Interview — which, as Mohan noted, streams on YouTube. Indeed, he seemed to play the magnanimous winner for much of the conversation; when asked about Oscar host Conan O’Brien’s poking fun at YouTube, Mohan simply replied that O’Brien is “very funny” and that his “Team Coco channel does really well on YouTube.”

As for popular podcasts like “The Breakfast Club” and “My Favorite Murder” moving to Netflix, Mohan said it’s “flattering” that competitors “see us as the center of culture.” But he said that when he speaks to popular YouTubers, they tell him that “no matter what they look to do, they understand that YouTube is their home.”

“I have not come across YouTubers that have completely yanked their content off YouTube,” Mohan said. He added that when YouTubers negotiate with other platforms, those streamers will always “acquiesce to what our YouTubers ultimately know is the right decision for them in the long term, which is to never leave their home.”


Why this week’s moon mission is so special for Jeremy Hansen


NASA is engaged in the final preparations for the much-anticipated Artemis II mission that will send astronauts toward the moon for the first time in more than five decades.

The space agency is targeting 6:24 p.m. ET on Wednesday, April 1, for the launch from the Kennedy Space Center in Florida.

The four crew members — NASA astronauts Reid Wiseman, Victor Glover, and Christina Koch, together with CSA (Canadian Space Agency) astronaut Jeremy Hansen — will travel aboard an Orion spacecraft launched by NASA’s formidable SLS (Space Launch System) rocket.

After a couple of days in low-Earth orbit checking the spacecraft’s systems, the crew will send the Orion on course for a rendezvous with our nearest neighbor. The 10-day voyage won’t touch down on the moon but will instead fly around it before returning home.


The mission is of course super special for every single one of those crew members, but for Hansen it comes with added personal impact as the flight will mark his very first time in space.

While Wiseman, Glover, and Koch each flew to the International Space Station (ISS) on their first orbital experience, Hansen will be traveling several hundred thousand miles further from Earth for his debut space ride.

Hansen will also become the first non-American, and first Canadian, to travel to the moon, a historic achievement that will cement his place in history and make him a national hero.

“I just want Canadians to feel that pride,” Hansen told CBC when he was announced as one of the Artemis II crew members in 2023. “I just want Canadians to realize, hey, we are up to big things here in Canada and can accomplish the seemingly impossible if we believe in ourselves.”


Artemis II is also a groundbreaking mission for Glover and Koch, who are about to become the first Black person and the first woman to travel to the moon — major milestones in their own right.

With only days to go before the targeted launch date, the four crew members are now in quarantine, poring over the flight plan and making sure they’re all set for the mission of a lifetime.

Want to know more about the mission? Then watch NASA’s video showing exactly how it expects the flight to unfold.


I just saw United Airlines’ big plans for the future and, yes, it wants to fly like Apple


It sounds absurd: an airline trying to channel Apple. Can an airline fly as high and smoothly as the tech icon?

After a few days with United Airlines — testing Starlink in the sky and previewing its next-gen ‘Elevate’ cabins — the comparison stopped feeling like an impossible stretch and started to feel more like a strategy.


The Pixel 10a doesn’t have a camera bump, and it’s great


For years now, smartphone makers have made the camera bump on devices bigger in order to chase camera improvements. Even if that kind of design makes cameras better, at times it creates usability issues. With the Pixel 10a, Google took a new approach, entirely removing the camera bump and making a phone that lies completely flat on surfaces.

While this is a delightful change in the world of big camera bumps, Google hasn’t otherwise made major design changes with its newest budget smartphone. The Pixel 9a looked mostly the same, with a very small camera bump.

I have the plain old black unit, but Google offers the phone in Lavender (a mix of bright blue and purple), Berry (coral), and Fog (a gray-green tone) colors.

Look! No camera bump. Image Credits: Ivan Mehta

The screen size of 6.3 inches is the same as last year’s device, but the display is now brighter at 3,000 nits. Google is using the Actua display series of screens that it used with the other Pixel 10 devices to make it more usable in bright conditions. The display is capable of reaching a 120Hz refresh rate, but the unit ships with it set to 60Hz, so you will need to manually change that through the phone’s settings.

Build and specification-wise, the Pixel 10a goes toe-to-toe with the Pixel 10, with a few differences. For instance, the Pixel 10 has Corning Gorilla Glass Victus 2 on the front and the back, while the cheaper 10a has a plastic back and Corning Gorilla Glass 7i protection on the front. The budget device also has a bigger battery of 5,100 mAh, as compared to 4,970 mAh on the base Pixel 10. The Pixel 10 Pro XL has a battery of 5,200 mAh.


There are only small differences between the Pixel 9a, the Pixel 10a, and the Pixel 10, most of them having to do with performance and compute power. The obvious hardware difference is that the budget phones use the Google Tensor G4 chip, as compared to the Tensor G5 in the Pixel 10. The Pixel 10 charges at 30W through USB-C, up from the 23W charging capacity of the Pixel 9a. Wireless charging is supported at 7.5W for the Pixel 9a, 10W for the Pixel 10a, and 15W (magnetic) for the Pixel 10.


The battery capacity and faster charging speed are helpful as the battery lasts easily throughout the day, including for regular apps, a few hours of video watching, and light gaming. Plus, the brighter display makes the device better for all-around experience in different lighting conditions. Yes, the 10a has chunkier bezels than its more costly cousins, but they don’t make too much of a difference in daily use. After all, you’re getting the device for a much lower price than a flagship.


The Pixel 10a uses the Tensor G4 chip, which was also used in the Pixel 9a. That means there are no performance gains this year, which you might notice if you switch between a lot of apps. Because of the older chip and its 8GB RAM combo, the Pixel 10a can’t run the updated Gemini Nano AI model, which means it has fewer on-device AI features than the Pixel 10 series.

The display is bright, but there are thick bezels around it. Image Credits: Ivan Mehta

Features not available on the Pixel 10a include notification summaries, the Pixel Screenshots app, Magic Cue (a feature that offers contextual suggestions across apps like Gmail, Messages, and Maps), call notes, and on-device call translation.

The phone features a 48-megapixel main camera and a 13-megapixel wide-angle camera, which is the same as last year’s device. The main camera performs fine for most conditions, even in low light. But given the older and smaller sensor on the wide-angle lens, it tends to lose some details, and it doesn’t have autofocus.

The Pixel 10a has a camera coach AI feature that can guide you in taking a shot of an object by helping frame it better in the viewfinder. There is also Auto Best Take, which merges photos to create the best composite from a bunch of shots — useful when photographing a group. The phone also has support for up to 8x super-res zoom, but the processing and quality aren’t as good as the Pixel 10, which offers up to 100x zoom through this feature.

Notably, some AI features might make it to the Pixel 10a through a Pixel Drop, Google’s periodic software updates that often bring new capabilities to older models.


Google offers seven years of software updates with this device, which covers operating system updates along with feature drops and security updates. While this is not exclusive to the Pixel 10a, the phone’s Quick Share feature now works with Apple’s AirDrop. This means I could simply transfer photos, just like I did for this story, to my MacBook within a few taps. Previously, I had to connect the Pixel 10a to my MacBook with a USB-C cable.

At $499, good battery life, a bright display, and faster charging are the main things going in favor of the Pixel 10a. For this price, the phone offers good value for money in a light and flat design. However, if you already have last year’s Pixel 9a, there is no reason to change. Also worth considering is the Nothing phone 4a Pro, also at $499, which puts up tough competition with better specifications, such as a bigger and brighter screen, a more capable Qualcomm processor, a dedicated telephoto lens, and faster charging speeds of 50W.
