Tech
AYANEO Pocket S Mini Proves Size Doesn’t Stop It from Being Possibly the Best Retro Handheld

AYANEO has just launched pre-orders for a handheld gaming powerhouse that fits comfortably into a jacket pocket. The Pocket S Mini is only 6.6 inches long, 3.1 inches wide, and 0.73 inches deep, weighing a mere 10.8 ounces, but despite its size, it refuses to compromise.
The 4.2-inch LCD screen takes center stage, with its 1280 x 960 resolution and 4:3 aspect ratio bringing back memories of classic games from the PlayStation 1, Nintendo 64, and Dreamcast. Emulators display those titles just as you remember them, with no ugly black bars or stretched-out pixels to mar the experience. Modern mobile games run well too, but the screen is clearly tuned for older titles.
This compact beast is powered by Qualcomm’s Snapdragon G3x Gen 2 chip, which features an 8-core CPU and Adreno A32 graphics built on an efficient 4nm process. The top model includes 16GB of ultra-fast LPDDR5X RAM and UFS 4.0 storage, a combo that tackles demanding retro emulation with ease while staying cool thanks to a built-in fan. Hours of vintage gameplay feel effortless, and even recent Android games run smoothly.
The controls are top-notch, with Hall-effect joysticks that include beautiful RGB lighting and provide incredibly smooth, drift-free movement. There are Hall-effect triggers for when you need a bit more precision, and crystal-textured face buttons that give exactly the right amount of feedback when pressed. Add in dual linear vibration motors that let you feel the rumble in games that support it, a six-axis gyroscope for motion controls, and a set of stereo speakers that produce clear sound despite their compact size.

The AYANEO Pocket S Mini has a very respectable 6,000mAh battery, allowing you to play for longer than you might anticipate from such a small device. AYANEO deliberately designed the system to balance performance and battery longevity, so you can keep playing without worrying about running out of charge. The USB-C port enables speedy charging, data transfer, and even connection to a TV or external monitor, while the microSD card reader allows for instant storage expansion at speeds of up to 100MB/s.

One thing that stands out is how strong the construction is. A glass front panel meets a metal mid-frame, and the whole package feels built like a tank. The colors available are Obsidian Black, Ice Soul White, and Retro Power, which looks just like the classic NES. Android 14 is under the hood, and AYANEO has incorporated some custom adjustments to make navigation and game launching feel natural and intuitive on the little screen. A fingerprint sensor embedded in the power button lets you log in and out quickly and easily.

Storage and memory options are also flexible, with the base model including 8GB RAM and 128GB storage for $319, a middle-tier with 12GB RAM and 256GB storage for $399, and the top configuration with 16GB RAM and 512GB storage for $479. Only the top version receives the Retro Power hue.
[Source]
Tech
The best digital frames for 2026
A digital photo frame shouldn’t be complicated. At its best, it’s just a good-looking screen that sets up quickly and reliably shows the photos you care about. Unfortunately, that’s not always how things play out. The market is flooded with cheap digital frames that promise simplicity but end up delivering washed-out displays, clunky apps and a frustrating experience — leading you to abandon them after a week.
That’s a shame, because a good digital frame can be really enjoyable. Most of us have thousands of photos sitting on our phones that never make it beyond the camera roll, even though they’re exactly the kind of moments worth seeing every day. A solid frame gives those images a permanent home, whether it’s family photos cycling in the living room or shared albums updating automatically for relatives across the country. We’ve tested a range of smart photo frames to separate the genuinely useful options from the forgettable junk, and these are the ones that are actually worth putting on display.
Best digital picture frames for 2026
Using an Aura frame felt like the company looked at the existing digital photo frame market and said “we have to be able to do better than this.” And they have. The Carver Mat is extremely simple to set up, has a wonderful screen, feels well-constructed and inoffensive and has some smart features that elevate it beyond its competitors (most of which don’t actually cost that much less).
The Carver Mat reminds me a little bit of an Amazon Echo Show in its design. It’s a landscape-oriented device with a wide, angled base that tapers to a thin edge at the top. Because of this design, you can’t orient it in portrait mode, like some other frames I tried, but Aura has a software trick to get around that (more on that in a minute). The whole device is made of a matte plastic in either black or white that has a nice grip, doesn’t show fingerprints and just overall feels like an old-school photo frame.
The 10.1-inch display is the best I’ve seen on any digital photo frame I’ve tested. Yes, the 1,280 x 800 resolution is quite low by modern standards, but it provides enough detail that all of my photos look crisp and clear. Beyond the resolution, the Carver’s screen has great color reproduction and viewing angles, and deals well with glare from the sun and lights alike. It’s not a touchscreen, but that doesn’t bother me because it prevents the screen from getting covered in fingerprints — and the app takes care of everything you need so it’s not required.
One control you will find on the frame is a way to skip forwards or backwards through the images loaded on it. You do this by swiping left or right on the top of the frame; you can also double-tap this area to “love” an image. From what I can tell, there’s no real utility in this aside from notifying the person who uploaded that pic that someone else appreciated it. But the swipe backwards and forwards gestures are definitely handy if you want to skip a picture or scroll back and see something you missed.
Setting the frame up was extremely simple. Once plugged in, I just downloaded the Aura app, made an account and tapped “add frame.” From there, it asked if the frame was for me or if I was setting it up as a gift (this mode lets you pre-load images so the device is ready to go as soon as someone plugs it in). Adding images is as simple as selecting things from your phone’s photo library. I could see my iPhone camera roll and any albums I had created in my iCloud Photos library, including shared albums that other people contribute to. You can also connect your Google Photos account and use albums from there.
One of the smartest features Aura offers is a continuous scan of those albums — so if you have an album of your kids or pets and regularly add new images to it, they’ll show up on your frame without you needing to do anything. Of course, this has the potential for misuse. If you have a shared album with someone and you assign it to your Aura frame, any pictures that person adds will get shared to your frame, something you may not actually want. Just something to keep in mind.
My main caveat for the Carver Mat, and Aura in general, is that an internet connection is required and the only way to get photos on the device is via the cloud. There’s a limited selection of photos downloaded to the device, but the user has no control over that, and everything else is pulled in from the cloud. Aura says there are no limits on how many images you can add, so you don’t need to worry about running out of storage. But if you don’t want yet another device that needs to be online all the time, Aura might not be for you. Most other frames I tested let you directly load photos via an SD card or an app.
The Aura app also lets you manage settings on the frame like how often it switches images (anywhere from every 30 seconds to every 24 hours, with lots of granular choices in between) or what order to display photos (chronologically or shuffled). There’s also a “photo match” feature, which intelligently handles the issue of having lots of images in both portrait and landscape orientation. Since the Carver Mat is designed to be used in landscape, the photo match feature makes it so portrait pictures are displayed side-by-side, with two images filling the frame instead of having black bars on either side. It also tries to pull together complementary pairs of images, like displaying the same person or pulling together two pics that were shot around the same time.
Overall, the Carver Mat checks all the boxes. Great screen, simple but classy design, a good app, no subscription required. Yes, it’s a little more expensive than some competing options, but all the cheaper options are also noticeably worse in a number of ways. And if you don’t want a mat, there’s a standard Carver that costs $149 and otherwise has the same features and specs as the Carver Mat I tested.
Pros:
- High-quality display with minimal reflections
- App makes set-up and management of your photos simple
- You can store an unlimited number of pictures in Aura’s cloud
- Good integration with Apple iCloud Photos and Google Photos
- Elegant, well-constructed design
- Smartly displays two portrait photos side-by-side on the landscape display
- No subscription required
Cons:
- A little pricey
- Aura’s app and cloud are the only way to get photos on the frame
- Can’t be set up in portrait orientation
If you’re looking to spend less, PhotoSpring’s Classic Digital Frame is the best option I’ve seen that costs less than $100 (just barely at $99). The PhotoSpring model comes with a 10.1-inch touchscreen with the same 1,280 x 800 resolution as the Carver Mat. The screen is definitely not as good as the Carver, though, with worse viewing angles and a lot more glare from light sources. That said, images still look sharp and colorful, especially considering you’re not going to be continuously looking at this display.
PhotoSpring’s frames are basically Android tablets with some custom software to make them work as single-purpose photo devices. That means you’ll use the touchscreen to dig into settings, flip through photos and otherwise manipulate the device. Changing things like how often the frame switches images can’t be done in the app. While doing things on the frame itself is fine, I prefer Aura’s system of managing everything in the app.
However, PhotoSpring does have a real advantage here: you can pop in a microSD card or USB drive to transfer images directly to the frame, no internet connection required. You can also use the PhotoSpring app to sync albums and single images, which obviously requires the internet. But once those pics have been transferred, you’re good to go. Additionally, you can upload pictures on a computer via the PhotoSpring website or sync Google Photos albums.
As for the PhotoSpring hardware itself, it looks good from the front, giving off traditional photo frame vibes. The back is rather plasticky and doesn’t feel very premium, but overall it’s fine for the price. There’s an adjustable stand so you can set the frame up in portrait or landscape mode, and you can set the software to crop your photos or just display them with borders if the orientation doesn’t fit.
PhotoSpring also has a somewhat unusual offering: a frame with a rechargeable battery. The $99 model just uses AC power, but a $139 option lets you unplug the frame and pass it around to people so they can swipe through your photo albums on the device. This feels like a niche use case, and I think most people will be better served saving their $40, but it’s something to consider.
One of my favorite things about PhotoSpring is that it doesn’t nickel-and-dime you with subscription services. There aren’t any limits on how many images you can sync, nor are things like Google Photos locked behind a paywall. The combo of a solid feature set, a fine display and a low entry price makes the PhotoSpring a good option if you want to save some cash.
Pros:
- Solid display
- Works in portrait or landscape orientation
- Lets you load pictures from multiple sources, including the PhotoSpring app, an SD card, USB drive or via Google Photos
- Inoffensive design
- No subscription required
Cons:
- Touchscreen controls mean the display is prone to picking up fingerprints
- Display picks up more reflections than the Aura
- Feels a little cheap
- Software isn’t the most refined
If you want a device that works great as a digital photo frame that can do a lot more than the above options, consider Google’s Nest Hub Max. It has a 10-inch touchscreen with a 1,280 x 800 resolution and can connect to a host of Google services and other apps to help you control your smart home devices. It also works great for playing videos from YouTube or other services, or streaming music thanks to its large built-in speaker. At $229, it’s significantly more expensive than our other options, but there’s no question it can do a lot more.
From a photos perspective, you’ll need to use Google Photos. If you’re not already using the app, switching your library over might be too much of a task to make it worthwhile. But if you do use Google Photos, signing in with your Google account when you set up the Hub Max makes accessing your images quite simple. You can pick specific albums, have it stream your entire library or pull things from various recommendations it offers up.
Once that’s set up, you can customize the slideshow as you’d expect — I set mine to come up by default after the Hub Max was dormant for a few minutes. I also removed everything from the display except the photos. By default, it shows you a clock and the weather forecast, but I wanted to just focus on the pictures. I do like the option to show a little more info, though.
As for the screen itself, it has the same relatively low resolution of the other digital photo frames I tried, but it handles glare very well. And the built-in ambient light sensor automatically adjusts brightness and color temperature, which I enjoy. It keeps the Hub Max from feeling like an overly bright screen blasting you with light; it recedes into the background well.
Of course, the Nest Hub Max has a lot of voice-activated tricks via the Google Assistant. My big question is how long the Hub Max will be supported, as Google is clearly planning to phase out the Assistant in favor of Gemini, and I’m not convinced that the Hub Max will ever support that new AI-powered tool. Beyond the Assistant, you can get a variety of apps on it like Netflix and YouTube, stream music from a bunch of apps, see video from your Nest Cam or make video calls via the built-in camera.
If you’re going to buy a Nest Hub Max, it shouldn’t be just for its digital photo frame features, even though those are quite solid. It’s best for someone well-entrenched in the Google ecosystem who wants a more multi-purpose device. If you fit the bill, though, the Nest Hub Max remains a capable device, even though it’s been around for almost five years.
Pros:
- Good display quality with auto-brightness and warmth settings
- Getting images on it is a piece of cake, provided you use Google Photos
- Plenty of ways to control smart home devices
- Good-sounding speaker
Cons:
- Almost five years old
- Google Assistant’s days are likely numbered
- More expensive than a standard digital photo frame
The Aura Aspen frame is a step up from our top pick in terms of overall quality and, unfortunately but predictably, price. For $229, you get an 11.8-inch, 1,600 x 1,200 display at 169 pixels per inch, and the frame can be positioned in either portrait or landscape mode. A physical button and touchbar on the frame’s edge let you swipe through photos or change what’s currently displayed, but you can also do that remotely with Aura’s mobile app. All of the same great app features present in the Carver are here for the Aspen, including inviting others to contribute photos to your frame. The kicker, as with all Aura frames, is that no subscription is needed to keep your frame filled to the brim with updated photos. For some, that alone may be worth the higher price tag when picking out a frame they want to be able to use freely for years to come. — Valentina Palladino, Deputy Editor
Pros:
- Elegant design
- 1,600 x 1,200 resolution display
- Easy-to-use Aura app
- Can invite others to add photos via mobile app
- No subscription required
What to look for in digital picture frames
While a digital photo frame feels like a simple piece of tech, there are a number of things I considered when trying to find one worth displaying in my home. First and foremost was screen resolution and size. I was surprised to learn that most digital photo frames have a resolution around 1,200 x 800, which feels positively pixelated. (That’s for frames with screen sizes in the nine- to ten-inch range, which is primarily what I considered for this guide.)
But after trying a bunch of frames, I realized that screen resolution is not the most important factor; my favorite photos looked best on frames that excelled in reflectivity, brightness, viewing angles and color temperature. A lot of these digital photo frames were lacking in one or more of these factors; they often didn’t deal with reflections well or had poor viewing angles.
A lot of frames I tested felt cheap and looked ugly as well, which isn’t something you want in a smart device that sits openly in your home. That includes lousy stands, overly glossy plastic parts and design decisions I can only describe as strange, particularly for items that are meant to just blend into your home. The best digital photo frames don’t call attention to themselves and look like an actual “dumb” frame, so much so that those that aren’t so tech-savvy might mistake them for one.
Perhaps the most important thing outside of the display, though, is the software. Let me be blunt: a number of frames I tested had absolutely atrocious companion apps and software experiences that I would not wish on anyone. One that I tried did not have a touchscreen, but did have an IR remote (yes, like the one you controlled your TV with 30 years ago). Trying to use that with a Wi-Fi connection was painful, and when I tried instead to use a QR code, I was linked to a Google search for random numbers instead of an actual app or website. I gave up on that frame, the $140 PixStar, on the spot.
Other things were more forgivable. A lot of the frames out there are basically Android tablets with a bit of custom software slapped on the top, which worked fine but wasn’t terribly elegant. And having to interact with the photo frame via touch wasn’t great because you end up with fingerprints all over the display. The best frames I tried were smart about what features you could control on the frame itself vs. through an app, the latter of which is my preferred method.
Another important software note: many frames I tried require subscriptions for features that absolutely should be included out of the box. For example, one frame would only let me upload 10 photos at a time without a subscription. Others would let you link a Google Photos account, but you could only sync a single album without paying up. Yet another option didn’t let you create albums to organize the photos that were on the frame — it was just a giant scroll of photos with no way to give them order.
While some premium frames offer perks like unlimited photos or cloud storage, they often come at a cost. I can understand why certain things might go under a subscription, like if you’re getting a large amount of cloud storage, for example. But these subscriptions feel like ways for companies to make recurring revenue from a product made so cheaply they can’t make any money on the frame itself. I’d urge you to make sure your chosen frame doesn’t require a subscription (neither of the frames I recommend in this guide need a subscription for any of their features), especially if you plan on giving this device as a gift to loved ones.
How much should you spend on a digital picture frame?
For a frame with a nine- or ten-inch display, expect to spend at least $100. Our budget recommendation is $99, and all of the options I tried that were cheaper were not nearly good enough to recommend. Spending $150 to $180 will get you a significantly nicer experience in all facets, from functionality to design to screen quality.
Digital frames FAQs
Are digital photo frames a good idea?
Yes, as long as you know what to expect. A digital picture frame makes it easy to enjoy your favorite shots without printing them. They’re especially nice for families who want to display new photos quickly. The key is understanding the limitations. Some frames have lower resolution displays or need a constant Wi-Fi connection to work properly, so they’re not a perfect replacement for a high-quality print on the wall. But if you want a simple way to keep memories on display and up to date, they’re a solid choice.
Can you upload photos to a digital frame from anywhere?
Most modern digital frames let you do this, but it depends on the model. Many connect to Wi-Fi and use apps, cloud storage or email uploads, so you can add photos from your phone no matter where you are. Some even let family members share directly, which is great for keeping grandparents updated with new pictures. That said, a few budget models only work with USB drives or memory cards, so check how the frame handles uploads before buying.
Georgie Peru contributed to this report.
Tech
Google’s Android PC dream may take longer than expected
Last year, it was revealed that Google is working on a new range of Android PCs powered by a new operating system called Aluminium OS, and a while back, we also learned that these PCs might ship with a barebones version of the Pixel Camera app. However, the PCs themselves might take a bit longer to arrive. As The Verge reports, a detailed look at Google’s internal Project Aluminium suggests the Android-based PC operating system isn’t close to launch.

While Google has talked about combining Android and ChromeOS into a more unified platform, court filings and internal timelines indicate a full public release may not happen until 2028, with limited testing possibly starting earlier. The delay isn’t just technical. It’s also strategic. Google still has to figure out how an Android PC OS fits alongside ChromeOS, which already powers millions of Chromebooks, especially in schools and enterprise environments. And ChromeOS isn’t disappearing anytime soon.
Why Aluminium may take years to land
According to testimony and internal documents cited in the reporting, Google plans to maintain ChromeOS support for up to 10 years on existing devices, potentially stretching into the early 2030s. That means two platforms could coexist for a long time. Some older Chromebooks may not even be able to upgrade to Aluminium due to hardware limits, forcing Google to support parallel systems longer than planned.

That overlap creates messy questions. Should partners ship ChromeOS or Android-for-PC? Will apps work the same across both? And how do developers prioritize one platform without fragmenting the ecosystem? Even basic expectations like keyboard, mouse, and multi-window workflows require bigger changes than Android’s current tablet mode can offer. Further, legal and business complications add another wrinkle. The documents show Google’s laptop OS strategy intersects with ongoing antitrust scrutiny and Play Store rules, which could affect how tightly Google bundles its apps and services on Aluminium devices.

In other words, even if the software is ready, how it’s packaged and distributed may be controversial. For buyers, the takeaway is simple: Android laptops aren’t around the corner. ChromeOS will remain Google’s main PC platform for years, and Aluminium looks more like a long-term evolution than an imminent replacement. When it does arrive, expect a transition period, not an overnight shift. If you’re considering a Chromebook or waiting for an Android-native PC, it’s worth keeping expectations grounded.
Tech
A16z just raised $1.7B for AI infrastructure. Here’s where it’s going.
Andreessen Horowitz just raised a whopping $15 billion in new funding. And a $1.7 billion chunk of that is going to its infrastructure team, the one responsible for some of its biggest, most prominent AI investments, including Black Forest Labs, Cursor, OpenAI, ElevenLabs, Ideogram, Fal and dozens of others.
Jennifer Li, an a16z general partner on the infra team who oversees investments such as ElevenLabs (just valued at $11 billion), Ideogram and Fal, has a clear thesis on where the team is looking to spend its latest chunk of cash.
Watch as Venture and Startups editor Julie Bort talks with Li on Equity about where a16z sees this AI super cycle going next, including the talent crunch hitting AI-native startups, why search infrastructure matters more than people think, and what kinds of companies are actually getting funded right now.
Subscribe to Equity on YouTube, Apple Podcasts, Overcast, Spotify and all the casts. You also can follow Equity on X and Threads, at @EquityPod.
Tech
Meet two teens who prove that barbering is not an “old man’s trade”
Not your usual uncle job: These teens are barbers on the side
Everyone has a go-to person for their hair, and for a growing clientele, the go-to person for their next fade is not a veteran, but 19-year-old Sujaish Kumar or 14-year-old Keanu Akbar.
While it’s easy to dismiss their work as a hobby, these two students have built successful brands from the ground up, earning recognition online and off—endeavours that gave them purpose after completing their studies.
Vulcan Post speaks to Sujaish and Keanu to find out how they paved the way for themselves and other young barbers in the old-school trade.
Both of them picked up barbering by watching online videos
Getting a decent haircut was often a nightmare for Sujaish. He shared that ‘good barbers’ often charged S$30—out of budget for him and his friends in secondary school—leaving them to patronise shops that provided S$10-S$12 haircuts.
Unfortunately, keeping to that budget meant that the haircuts often came out uneven and messy. Frustrated, Sujaish decided to take matters into his own hands, challenging himself to provide better haircuts. The next day, he started watching tutorials on YouTube and TikTok.
“I had about S$50 saved up, so I just spent that on a pair of clippers. Then I basically had to beg my friends to let me cut their hair.” But even with the right tools, the execution turned out to be harder than he thought.
“The first few haircuts were really really bad, uneven, and kind of demotivating,” Sujaish sheepishly shared, but that didn’t stop him from continuing to hone his craft, providing free haircuts for family, friends and their acquaintances at his HDB corridor.
Through word-of-mouth, Sujaish eventually gained a sizable following. Five months after he started, his cuts were good enough for him to charge S$5, which soon rose to S$8 per head. He also began promoting his services on TikTok, and one viral reel further grew his clientele base.
Even though he had to move his chair to his home upon receiving orders from the Housing Development Board (HDB), he continued making a name for himself online as a young barber, inspiring others, like Keanu, to do the same.
At the tender age of 12, Keanu was encouraged by his older brother to pick up barbering as a hobby, and his brother bought him a set of tools to help him get started. As it turns out, Keanu’s older brother is friends with Sujaish himself, which is what inspired the suggestion.
Keanu shared a similar learning trajectory: picking up his skills from online tutorials and offering free haircuts to family and friends to sharpen them. He started charging just S$3 per haircut after a year of practice, performing his services on a staircase at his HDB block in Clementi. Word soon spread of his services, helping him land an interview with local publication AsiaOne that put his name out to the masses.
But beyond those opportunities, how do they actually sustain their craft past the initial hype — and stay in the trade as long as their veteran seniors?
Scaling up a word-of-mouth service
While the traditional, word-of-mouth method got them started, both Sujaish and Keanu quickly diversified their reach, leaning heavily into social media, particularly TikTok, to scale up.
Going viral is not an easy feat, but Sujaish achieved just that with a video titled ‘How much I make as a 17-year-old barber in Singapore,’ in which he revealed he had earned S$195 in a single day. This financial transparency not only drew netizens but also attracted Singaporean news outlets Mothership and CNA.
Similarly, Keanu’s interview with AsiaOne put him on the radar of more clients. The 14-year-old shared that since the interview, his Telegram subscribers doubled from around 150 to 350, and his TikTok following tripled from approximately 400-500 to 1,200.
However, viral success came with unforeseen challenges. Sujaish’s video caught the attention of the HDB, who informed him he could no longer operate in the corridor due to potential disturbance to neighbours, prompting him to move operations back inside his home.
Despite the change in settings, Sujaish continued to build his brand and reinvest his earnings for better tools and setups. He has also since raised his starting price to S$30 per haircut and started receiving requests from customers for house calls, where he could get paid a higher price of S$50.
This additional revenue stream gave Sujaish enough funds to open his own studio at Potong Pasir, which was around 100 sqft, or equivalent to a master bedroom in an HDB flat. While moving to a studio resulted in him forking out more than S$1,000 for flooring, rental and upgrades, he believes it was a gamble worth taking.
“Even from when I started, my goal was always to have my own private area where I could do my haircuts, and cutting hair at home was disturbing my family.”
Keanu has also moved his workstation to his home, not because he was told to, but out of a personal desire to provide a more comfortable experience for his clients. “Usually when it rains, I have to cancel my appointments because [my workspace] will get very wet and then people won’t like it.”
Beyond the fade
Aside from being able to earn from their side hustle, the trade has also instilled skillsets and qualities that can be used beyond barbering.
A self-proclaimed introvert, Keanu shared that picking up barbering helped him to gain confidence in engaging with strangers. “When I’m with my friends, I talk a lot. But other than that, I was really quiet in Primary school.”
Time management was another skill he gained. He shared that he dedicates two and a half hours to barbering on selected weekdays and six hours on weekends, with the remainder of his time spent on his studies and leisure with family and friends.
“Maybe I’m not living like the full 14-year-old, but I don’t mind it.”
For Sujaish, barbering has allowed him to learn the foundations of building a business, from marketing himself to learning the operations. These foundations have given him an ambition to work towards: opening a full-fledged barbershop and even starting a haircare brand.
Overall, both represent a new generation of barbers bringing modern trends and tactics to a trade once seen as an “old man’s job,” keeping it relevant in the modern world.
Featured Image Credit: Sujaish Kumar/ Keanu Akbar
Tech
Games Done Quick’s Back to Black 2026 event kicks off tomorrow
Hot on the heels of AGDQ in January, Games Done Quick is hosting its second speedrunning event of the year, Back to Black 2026, starting tomorrow, February 5. The four-day event is organized by Black in a Flash and is raising money for Race Forward, a nonprofit that works across communities to address systemic racism.
Back to Black is timed to the start of Black History Month and highlights the deep bench of talent in the Black speedrunning community. A few runs, like ones for Hades II, Donkey Kong Country and Silent Hill 4, were teased when Back to Black 2026 was announced last year. The full schedule has plenty of other runs worth checking out, though, like a co-op run through Plants vs Zombies: Replanted on February 5 or an Any% run of The Barbie Diaries: High School Mystery on February 6.
Back to Black 2026 will be live on Games Done Quick’s Twitch and YouTube channels from Thursday, February 5 through Sunday, February 8.
Tech
Tinder looks to AI to help fight ‘swipe fatigue’ and dating app burnout
Tinder is turning to a new AI-powered feature, Chemistry, to help it reduce so-called “swipe fatigue,” a growing problem among online dating users who are feeling burned out and are in search of better outcomes.
Introduced last quarter, Chemistry leverages AI to get to know users through questions and, with permission, accesses the Camera Roll on their phones to learn more about their interests and personality, the Match-owned dating app said.
On Match’s Q4 2026 earnings call, one analyst from Morgan Stanley asked for an update on the product’s success so far.
Match CEO Spencer Rascoff noted that Chemistry was still only being tested in Australia for the time being, but said that the feature offered users an “AI way to interact with Tinder.” He explained that users could choose to answer questions to then “get just a single drop or two, rather than swiping through many, many profiles.”
In addition to Chemistry’s Q&A and Camera Roll features, the company plans to use the AI feature in other ways going forward, the CEO also hinted.
Most importantly, Rascoff said the feature is designed to combat swipe fatigue — a complaint from users who say they have to swipe through too many profiles to find a potential match.
The company’s turn toward AI comes as Tinder and other dating apps have been experiencing paying subscriber declines, user burnout, and declines in new sign-ups.
In the fourth quarter, new registrations on Tinder were still down 5% year-over-year, and its monthly active users were down 9%. These numbers show some slight improvements over prior quarters, which Match attributes to AI-driven recommendations that change the order of profiles shown to women, and other product experiments.
Match said that this year, it aims to address common Gen Z pain points, including better relevance, authenticity and trust. To do so, the company said it is redesigning discovery to make it less repetitive and is using other features, like Face Check — a facial recognition verification system — to cut down on bad actors. On Tinder, the latter led to a more than 50% reduction in interactions with bad actors, Match noted.
Tinder’s decision to start moving away from the swipe toward more targeted, AI-powered recommendations could have a significant impact on the dating app. Today, the swipe method, which was popularized by Tinder, encourages users to think that they’re choosing a match from an endless number of profiles. But in reality, the app presents the illusion of choice, since matches have to be two-way to connect, and even then, a spark is not guaranteed.
The company delivered an earnings beat in the fourth quarter, with revenue of $878 million and earnings of 83 cents per share, both above Wall Street estimates. But weak guidance saw the stock decline on Tuesday, before rising again in premarket trading on Wednesday.
Beyond AI, Match will also increase its product marketing to help boost Tinder engagement. The company is committing to $50 million in Tinder marketing spend, which will include creator campaigns on TikTok and Instagram, where users will make claims that “Tinder is cool again,” Rascoff noted.
Tech
How Researchers Are Putting Students at the Center of Edtech Design
When researchers ask students to test educational technology products, a consistent pattern emerges: Tools that impress adults in demos often fall flat with the students who actually use them. Recent studies show that even well-designed products can frustrate students or create unnecessary mental strain when technical complexity gets in the way of learning. The disconnect means even promising tools aren’t reaching their full potential in real classrooms.
This gap between adult expectations and student experience is exactly what ISTE+ASCD, the Joan Ganz Cooney Center at Sesame Workshop and the youth research organization In Tandem aim to close through their collaborative work on student usability in edtech.
EdSurge spoke with three leaders from this collaborative effort: Vanessa Zuidema, co-founder and director of customer success at In Tandem; Dr. Medha Tare, senior director of research at the Joan Ganz Cooney Center; and Dr. Brandon Olszewski, senior director of research and innovation at ISTE+ASCD.
“To help clarify what matters most when it comes to student usability, we knew we needed to work with these partners to reach students, check our findings against others in the space and develop guidance for edtech providers,” Olszewski explains. “Sesame has extensive experience designing for young people and balancing high-quality learning with engagement. In Tandem connects young people with companies and organizations that need their voices at the table. ISTE+ASCD sits at the intersection of educational technology, learning design, and curriculum and instruction.”
Ahead of releasing a formal student usability framework later this year, the three organizations shared early findings about what students actually want from educational technology — and what it means for schools and developers.
EdSurge: Why focus specifically on student usability, and what does that mean in practice?
Tare: The field is very good at evaluating edtech from an adult perspective: alignment, evidence, safety, interoperability. But none of those frameworks capture what it’s like to be a kid trying to use a tool in real time.
In our research with students and product developers, we often saw cognitive load issues: students struggle with instructions, navigation or unclear affordances. We saw motivation issues: kids shut down when a feature feels intimidating or frustrating. Many existing evaluations don’t examine how struggling, multilingual or reluctant readers experience the same product quite differently.
Zuidema: While districts, school leaders and teachers all play critical roles, ultimately the student experience determines whether learning actually happens. Yet too often, product development processes overlook the people most affected: students themselves.
How does centering student voice change the way edtech products are designed?
Tare: You can count on young people to surface things adults would never catch. Kids are the experts in fun, not adults! In one case, an AI writing companion talked too much, repeated questions and “felt like a bot” to kids. Students redesigned the personality system to be less chatty, more responsive and more playful, and engagement shot up the next day.
In another case, developers initially assumed a read-aloud feature would help with assessment, but kids were often too anxious or unsure to speak. Student discomfort fundamentally shifted how developers approached assessment supports.
Zuidema: When you center student voice, you learn things about an edtech tool that adults simply can’t see. Testing early ideas with students helps product teams figure out if things like onboarding or screen design actually work before a tool is used in real classrooms. This keeps teams from building features based on adult guesses and saves them from costly rebuilds.
One example is customization. Adults often assume students want lots of choices in how everything looks. But many students say they prefer simple, steady designs and want more control over their learning path instead.
Olszewski: I’m generalizing here, but what we heard is that they don’t care about chatbots, and they don’t want to do anything for school on their phones except check due dates. I think these insights offer edtech providers some solid guidance on how to spend their energy when developing products.
What do students want from edtech?
Olszewski: Students want a clean user interface that feels intuitive, as if it were actually tested by real students. They don’t care about a lot of add-ons, advanced customization, badges and points. Instead, they want clear learning progressions that show them what’s next. They want to see language and scenarios that reflect who they are.
Zuidema: Students want tools that are simple to use, don’t waste time and feel made for how they actually learn. They want tools that let them move at their own pace and get feedback that actually makes sense.
Tare: Students want feedback that feels human and helpful: timely, specific, supportive and aligned to where they are in the process. For example, kids told one writing tool not to give grammar feedback while they were still generating ideas because it felt disruptive and demotivating. They want characters and tools that react to them in joyful, surprising ways. And they want tools that respect their intelligence: kids reject infantilizing features and lean into tools that challenge them while also supporting them.
What does it take to do rigorous, ethical student-centered usability research?
Zuidema: Conducting rigorous research with students starts with creating spaces where young people feel safe enough to be honest. When that trust is in place, they move beyond polite answers and offer the kind of deeper feedback that improves programs and products.
Organizations partner most effectively when they start with a clear sense of what they hope to learn and how they plan to use those insights. When students feel safe and respected, they offer the kind of honest, deeper insight that strengthens the work.
Tare: We recommend genuine youth partnership, not tokenism: Kids need time to build relationships, trained facilitators and multiple sessions to share deeper feedback. And there needs to be a willingness to change course: Product teams need to be ready to iterate, and sometimes to do so fundamentally. Kids are experts! We need to listen.
Olszewski: Young people under 18 rightfully are afforded special protections through Institutional Review Boards. Coordinating with the right organizations that have streamlined that work helps responsible research partners get right to the work of actually collecting data. That’s so helpful when the people we want to learn from don’t yet have a driver’s license!
How should school leaders evaluate edtech through the lens of student usability?
Olszewski: We know that alignment to standards and evidence supporting better student learning outcomes are top of mind — and those priorities can sometimes overshadow other important factors. We believe that products designed for usability, both for teachers and students, are more likely to improve teaching and learning. Our forthcoming student usability framework will provide concrete criteria for evaluating these factors. If your sandbox account of a product offers a jumbled user experience without a clear learning progression, that’s a signal it might not work well in practice.
Tare: Student usability should be given strong consideration. We advise school leaders to ask questions such as: Can students independently navigate the tool? Do multilingual learners and struggling readers experience friction? Does the tool maintain motivation, or diminish it? How does feedback feel to a child: supportive or punitive? This approach helps leaders choose tools that work for the students they actually serve.
Learn more: ISTE+ASCD’s student usability framework will be released later this year. In the meantime, educators and edtech decision-makers can explore ISTE’s Teacher Ready Evaluation Tool and related resources at iste.org/edtech-product-selection.
Tech
Snowflake and OpenAI forge $200M enterprise AI partnership

Snowflake and OpenAI have struck a multi-year, $200 million partnership to bring OpenAI’s advanced models, including GPT-5.2, directly into Snowflake’s enterprise data platform. The collaboration is designed to let Snowflake’s large customer base, more than 12,000 organisations, build AI agents and semantic analytics tools that operate on their own data without moving it outside Snowflake’s governed environment. Under the agreement, OpenAI models will be natively embedded in Snowflake Cortex AI and Snowflake Intelligence, making it possible to run queries, derive insights, and deploy AI-powered workflows using natural language interfaces and context-aware agents. Customers can analyse structured and unstructured data, automate…
This story continues at The Next Web
Tech
As Software Stocks Slump, Investors Debate AI’s Existential Threat
Investors were assessing on Wednesday whether a selloff in global software stocks this week had gone too far, as they weighed whether businesses could survive an existential threat posed by AI. The answer: It’s unclear and will lead to volatility. From a report: After a broad selloff on Tuesday that saw the S&P 500 software and services index fall nearly 4%, the sector slipped another 1% on Wednesday. While software stocks have been under pressure in recent months as AI has gone from being a tailwind for many of these companies to a source of investor worry about the disruption it will cause to some sectors, the latest selloff was triggered by a new legal tool built around Anthropic’s Claude large language model (LLM).
The tool – a plug-in for Claude’s agent for tasks across legal, sales, marketing and data analysis – underscored the push by LLMs into the so-called “application layer,” where these firms are increasingly muscling into lucrative enterprise businesses for revenue they need to fund massive investments. If successful, investors worry, it could wreak havoc across a range of industries, from finance to law and coding.
Tech
Vercel rebuilt v0 to tackle the 90% problem: Connecting AI-generated code to existing production infrastructure, not prototypes
Before Claude Code wrote its first line of code, Vercel was already in the vibe coding space with its v0 service.
The basic idea behind the original v0, which launched in 2024, was essentially to be version 0. That is, the earliest version of an application, helping developers solve the blank canvas problem. Developers could prompt their way to a user interface (UI) scaffolding that looked good, but the code was disposable. Getting those prototypes into production required rewrites.
More than 4 million people have used v0 to build millions of prototypes, but the platform was missing elements required to get into production. The challenge is a familiar one with vibe coding tools: there is a gap between what the tools provide and what enterprise builders require. Claude Code, for instance, generates backend logic and scripts effectively, but it does not deploy production UIs within existing company design systems while enforcing security policies.
This creates what Vercel CPO Tom Occhino calls “the world’s largest shadow IT problem.” AI-enabled software creation is already happening inside every enterprise. Credentials are copied into prompts. Company data flows to unmanaged tools. Apps deploy outside approved infrastructure. There’s no audit trail.
Vercel rebuilt v0 to address this production deployment gap. The new version, generally available today, imports existing GitHub repositories and automatically pulls environment variables and configurations. It generates code in a sandbox-based runtime that maps directly to real Vercel deployments and enforces security controls and proper git workflows while allowing non-engineers to ship production code.
“What’s really nice about v0 is that you still have the code visible and reviewable and governed,” Occhino told VentureBeat in an exclusive interview. “Teams end up collaborating on the product, not on PRDs and stuff.”
This shift matters because most enterprise software work happens on existing applications, not new prototypes. Teams need tools that integrate with their current codebases and infrastructure.
How v0’s sandbox runtime connects AI-generated code to existing repositories
The original v0 generated UI scaffolding from prompts and let users iterate through conversations. But the code lived in v0’s isolated environment, which meant moving it to production required copying files, rewriting imports and manually wiring everything together.
The rebuilt v0 fundamentally changes this by directly importing existing GitHub repositories. A sandbox-based runtime automatically pulls environment variables, deployments and configurations from Vercel, so every prompt generates production-ready code that already understands the company’s infrastructure. The code lives in the repository, not a separate prototyping tool.
Previously, v0 was a separate prototyping environment. Now, it’s connected to the actual codebase with full VS Code built into the interface, which means developers can edit code directly without switching tools.
A new git panel handles proper workflows. Anyone on a team can create branches from within v0, open pull requests against main and deploy on merge. Pull requests are first-class citizens and previews map directly to real Vercel deployments, not isolated demos.
This matters because product managers and marketers can now ship production code through proper git workflows without needing local development environments or handing code snippets to engineers for integration. The new version also adds direct integrations with Snowflake and AWS databases, so teams can wire apps to production data sources with proper access controls built in, rather than requiring manual work.
Vercel’s React and Next.js experience explains v0’s deployment infrastructure
Prior to joining Vercel in 2023, Occhino spent a dozen years as an engineer at Meta (formerly Facebook) and helped lead that company’s development of the widely-used React JavaScript framework.
Vercel’s claim to fame is that its company founder, Guillermo Rauch, is the creator of Next.js, a full-stack framework built on top of React. In the vibe coding era, Next.js has become an increasingly popular framework. The company recently published a list of React best practices specifically designed to help AI agents and LLMs work.
The Vercel platform encapsulates best practices and learnings from Next.js and React. That decade of building frameworks and infrastructure together means v0 outputs production-ready code that deploys on the same infrastructure Vercel uses for millions of deployments annually. The platform includes agentic workflow support, MCP integration, web application firewall, SSO and deployment protections. Teams can open any project in a cloud dev environment and push changes in a single click to a Vercel preview or production deployment.
With no shortage of competitive offerings in the vibe coding space, including Replit, Lovable and Cursor among others, it’s the core foundational infrastructure that Occhino sees as standing out.
“The biggest differentiator for us is the Vercel infrastructure,” Occhino said. “It’s been building managed infrastructure, framework-defined infrastructure, now self-driving infrastructure for the past 10 years.”
Why vibe coding security requires infrastructure control, not just policy
The shadow IT problem isn’t that employees are using AI tools. It’s that most vibe coding tools operate entirely outside enterprise infrastructure. Credentials are copied into prompts because there’s no secure way to connect generated code to enterprise databases. Apps deploy to public URLs because the tools don’t integrate with company deployment pipelines. Data leaks happen because visibility controls don’t exist.
The technical challenge is that securing AI-generated code requires controlling where it runs and what it can access. Policy documents don’t help if the tooling itself can’t enforce those policies.
This is where infrastructure matters. When vibe coding tools operate on separate platforms, enterprises face a choice: Block the tools entirely or accept the security risks. When the vibe coding tool runs on the same infrastructure as production deployments, security controls can be enforced automatically.
v0 runs on Vercel’s infrastructure, which means enterprises can set deployment protections, visibility controls and access policies that apply to AI-generated code the same way they apply to hand-written code. Direct integrations with Snowflake and AWS databases let teams connect to production data with proper access controls rather than copying credentials into prompts.
“IT teams are comfortable with what their teams are building because they have control over who has access,” Occhino said. “They have control over what those applications have access to from Snowflake or data systems.”
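To illustrate the pattern described here — connection details flowing through platform-managed configuration instead of being pasted into prompts — below is a minimal TypeScript sketch. It is not Vercel's or Snowflake's actual API; the environment variable names, the `SnowflakeConfig` type and the prompt string are assumptions used purely for illustration.

```typescript
// Illustrative sketch only: the env var names and SnowflakeConfig shape are
// hypothetical, not Vercel's or Snowflake's actual API.

interface SnowflakeConfig {
  account: string;
  user: string;
  privateKey: string; // injected by the deployment platform, never typed into a prompt
}

// Generated code reads connection details from environment variables that the
// platform manages, so access is governed by the deployment rather than by
// whatever a user happened to paste into a chat box.
function loadSnowflakeConfig(): SnowflakeConfig {
  const { SNOWFLAKE_ACCOUNT, SNOWFLAKE_USER, SNOWFLAKE_PRIVATE_KEY } = process.env;
  if (!SNOWFLAKE_ACCOUNT || !SNOWFLAKE_USER || !SNOWFLAKE_PRIVATE_KEY) {
    throw new Error("Snowflake credentials must be provided by the deployment environment");
  }
  return { account: SNOWFLAKE_ACCOUNT, user: SNOWFLAKE_USER, privateKey: SNOWFLAKE_PRIVATE_KEY };
}

// The prompt then carries only intent; credentials never appear in it, so they
// can't leak into model logs or an unmanaged tool.
const prompt = "Summarize Q4 pipeline from the governed Snowflake connection.";

export { loadSnowflakeConfig, prompt };
```

The design point is that the secret material never transits the model at all; it stays in the deployment environment, where existing access policies and audit trails apply.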
Generative UI vs. generative software
In addition to the new version of v0, Vercel has recently introduced a generative UI technology called json-render.
v0 is what Vercel calls generative software. This differs from the company’s json-render framework for a true generative UI. Vercel software engineer Chris Tate explained that v0 builds full-stack apps and agents, not just UIs or frontends. In contrast, json-render is a framework that enables AI to generate UI components directly at runtime by outputting JSON instead of code.
“The AI doesn’t write software,” Tate told VentureBeat. “It plugs directly into the rendering layer to create spontaneous, personalized interfaces on demand.”
The distinction matters for enterprise use cases. Teams use v0 when they need to build complete applications, custom components or production software.
They use json-render for dynamic, personalized UI elements within applications: dashboards that adapt to individual users, contextual widgets and interfaces that respond to changing data without code changes.
Both leverage the AI SDK infrastructure that Vercel has built for streaming and structured outputs.
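To make the "UI as JSON" idea concrete, here is a minimal TypeScript/React sketch of the concept Tate describes: the model emits a JSON tree, and the application maps it onto a fixed registry of vetted components at runtime. This is not json-render's actual API; the node types, registry and props below are hypothetical, purely to illustrate the approach.

```typescript
// Conceptual sketch, not the real json-render API: node types, registry and
// props are made up for illustration.

import { createElement, type ReactNode } from "react";

// Instead of writing component code, the model emits a JSON tree like this.
interface UINode {
  type: "card" | "metric" | "list";
  props?: Record<string, unknown>;
  children?: UINode[];
}

// The app owns a fixed registry of vetted components, so the model can only
// compose pieces the team has already approved.
const registry: Record<UINode["type"], (props: Record<string, unknown>, children: ReactNode[]) => ReactNode> = {
  card: (_props, children) => createElement("section", { className: "card" }, ...children),
  metric: (props) => createElement("strong", null, `${props.label}: ${props.value}`),
  list: (_props, children) =>
    createElement("ul", null, ...children.map((child, i) => createElement("li", { key: i }, child))),
};

// Runtime renderer: walk the JSON the model produced and map it to React elements.
function renderNode(node: UINode): ReactNode {
  const children = (node.children ?? []).map(renderNode);
  return registry[node.type](node.props ?? {}, children);
}

// Example: a personalized dashboard widget described entirely as data, which
// can change per user without shipping new component code.
const spec: UINode = {
  type: "card",
  children: [
    { type: "metric", props: { label: "Open deals", value: 12 } },
    { type: "list", children: [{ type: "metric", props: { label: "Next renewal", value: "Acme Corp" } }] },
  ],
};

export const widget = renderNode(spec);
```

The restriction to a registry is the governance hook: the AI composes approved building blocks rather than emitting arbitrary code into the rendering path.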
Three lessons enterprises learned from vibe coding adoption
As enterprises adopted vibe coding tools over the past two years, several patterns emerged about AI-generated code in production environments.
Lesson 1: Prototyping without production deployment creates false progress. Enterprises saw teams generate impressive demos in v0’s early versions, then hit a wall moving those demos to production. The problem wasn’t the quality of generated code. It was that prototypes lived in isolated environments disconnected from production infrastructure.
“While demos are easy to generate, I think most of the iteration that’s happening on these code bases is happening on real production apps,” Occhino said. “90% of what we need to do is make changes to an existing code base.”
Lesson 2: The software development lifecycle has already changed, whether enterprises planned for it or not. Domain experts are building software directly instead of writing product requirement documents (PRDs) for engineers to interpret. Product managers and marketers ship features without waiting for engineering sprints.
This shift means enterprises need tools that maintain code visibility and governance while enabling non-engineers to ship. The alternative is creating bottlenecks by forcing all AI-generated code through traditional development workflows.
Lesson 3: Blocking vibe coding tools doesn’t stop vibe coding. It just pushes the activity outside IT’s visibility. Enterprises that try to restrict AI-powered development find employees using tools anyway, creating the shadow IT problem at scale.
The practical implication is that enterprises should focus less on whether to allow vibe coding and more on ensuring it happens within infrastructure that can enforce existing security and deployment policies.
