With the vlog camera ZV-1 II, you can shoot everything from group selfies to sprawling cityscapes. The wide-angle 18 mm lens can capture a wider field of view than the human eye, so you can easily fit everyone or everything in the frame. Whether you’re indoors or out, the wide-angle lens turns everyday scenes into dynamic footage.
Versatile framing
The ZV-1 II lets you frame your shots exactly how you want, thanks to its wide-angle zoom lens. 18 mm is great for selfies, while 50 mm is ideal for portraits and snapshots. What’s more, the maximum apertures of F1.8 at the wide end and F4 at the telephoto end give a smooth background bokeh, adding all the atmosphere you need.
Creative colours
No experience is needed to make an impressive vlog with all the right visuals from the outset. My Image Style lets you simply tweak brightness and colour while viewing what you plan to shoot – all without having to wade through menus. For shots with more creative colour and mood, Creative Look offers ten presets, so simply select one that matches your vision.
Add a dramatic touch
Just tap the screen to turn everyday moments into cinematic imagery, all without post-production editing. With the Cinematic Vlog Setting, the camera automatically shoots in a Cinemascope aspect ratio at 24 fps — the film industry standard. You can also change the colour of your footage simply by selecting from five LOOKs and four MOODs for instant, professional results.
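To get a feel for what that Cinemascope crop means for standard 16:9 footage, here’s a quick worked example in Python – the 2.35:1 ratio and 1080p frame size are illustrative assumptions rather than Sony’s published figures:

```python
# Letterboxing a 16:9 frame to a ~2.35:1 "Cinemascope" picture.
# The 2.35:1 ratio and 1080p frame are illustrative assumptions,
# not Sony's published spec for the Cinematic Vlog Setting.
width, height = 1920, 1080                  # a standard 16:9 frame
target = 2.35                               # classic Cinemascope ratio
bar = round((height - width / target) / 2)  # black bar, top and bottom
picture = height - 2 * bar                  # visible picture height
print(f"{width}x{picture} picture (~{width / picture:.2f}:1), {bar}px bars")
# -> 1920x818 picture (~2.35:1), 131px bars
```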
Built-in mic for clear voice recording
In [Auto] mode, the new built-in Intelligent 3-Capsule Mic switches the direction of the built-in mic from [All Directions] to [Front] when the camera recognises a human face in the frame. Or you can switch manually to [Front] for selfies or [Rear] when shooting with narration. Attach the supplied windscreen to reduce wind noise and capture clearer audio outdoors.
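As a rough sketch of the switching behaviour described above – the names and the rule here are illustrative, not Sony’s firmware – the logic amounts to a few lines:

```python
# Toy sketch of the Intelligent 3-Capsule Mic's directional logic.
# Function and mode names are hypothetical; this is not Sony firmware.
def mic_direction(mode: str, face_in_frame: bool) -> str:
    """Pick a pickup pattern: "front", "rear" or "all"."""
    if mode != "auto":        # manual override wins
        return mode
    # In [Auto], narrow to the front when a face is recognised.
    return "front" if face_in_frame else "all"

assert mic_direction("auto", True) == "front"
assert mic_direction("auto", False) == "all"
assert mic_direction("rear", True) == "rear"   # narration while filming forward
```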
Creative freedom
Compact & easy to use
Effortlessly portable
Small and lightweight, the vlog camera ZV-1 II is designed to be taken anywhere – just like your smartphone. Weighing just 292 g, it lets you capture everyday life with ease and grab content on the go.
Simple, intuitive control
Even first-time vloggers can jump straight into shooting with the ZV-1 II; there’s no need to study complex settings. Touch function icons like recording and self-timer are displayed on the screen for easy control. These are also great for shooting selfies or in other situations when you can’t reach the physical buttons on the camera.
Focus on you
The ZV-1 II automatically makes sure you always stay in focus with Touch Tracking. It recognises human faces and eyes, so it keeps track of the main subject without wandering off. Just tap the subject on the screen to activate the feature. Eye AF can even be set to track animal faces and eyes.
Data usage responsibility
The face registration feature involves the use of biometric data. You are responsible for your collection and use of such data, and compliance with applicable local law. For more details, please visit the website below.
Powerful image stabilisation
Shaky footage can ruin the best of vlogs, but the ZV-1 II is built to let you capture great footage even while you’re walking and shooting handheld. Active Mode helps to minimise camera blur, giving you steadier shots without the need for any post-shoot editing.
Stand out with just a tap
The ZV-1 II is designed for easier content creation. Activate the Product Showcase Setting and the camera will automatically focus on the product you’re holding with no additional gestures needed. Or use Bokeh Switch to get a softly blurred background – simply select [Defocus] to make the subject stand out, or [Clear] to keep the entire image in focus.
Optimised for selfies
Selfies made simple
Let the ZV-1 II work for you to capture great group moments. When taking selfies in iAUTO mode with two or three people, the camera automatically adjusts its settings to keep everyone’s faces sharp and clear.
Always look your best
Look your best any time you’re shooting. Without any special settings, the ZV-1 II accurately captures your skin tone, ensuring a healthy and natural look. Face Priority AE automatically brightens your face, so there’s no need to worry about lighting either. There’s also a Soft Skin Effect, selectable between Off, Low, Mid and High, to adjust your skin smoothness.
Always know when you’re recording
Say goodbye to accidentally missing the action. When you press the MOVIE button or tap the record icon on the LCD screen, the recording lamp glows red and a red frame appears on the screen, letting you know instantly that you’re capturing what counts. The rotatable screen means you can also track recording while shooting from any angle.
Shooting grip for easy vlogging
Vlogging is simpler and more comfortable with the optional GP-VPT2BT wireless shooting grip, which enables you to grab more stable shots and also doubles as a tripod. The zoom lever and recording button are on the grip side, so you don’t have to stretch around the camera every time to reach key control buttons.
Connecting your camera and smartphone has never been easier thanks to the Creators’ App mobile application. Data transfers continue in the background even if the smartphone display sleeps or a different app is launched. You can also turn your smartphone or PC into a remote control to operate the camera from a distance.
Simple streaming with a single cable
Transform the ZV-1 II into a high-quality web camera just by connecting a compatible device via USB. The large 1.0-type image sensor and Creative Look presets ensure vibrant visuals.
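Assuming the camera enumerates as a standard UVC webcam once connected (an assumption; the copy above doesn’t spell out the transport details), a quick way to verify the feed on a computer is with OpenCV:

```python
# Quick check that a USB-connected camera is visible as a webcam.
# Assumes the ZV-1 II enumerates as a standard UVC device; the device
# index 0 is a guess and may differ on your machine.
import cv2

cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise SystemExit("No webcam found at index 0")

ok, frame = cap.read()
if ok:
    h, w = frame.shape[:2]
    print(f"Receiving {w}x{h} frames")
cap.release()
```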
Good morning! Let’s play Connections, the NYT’s clever word game that challenges you to group answers in various categories. It can be tough, so read on if you need clues.
What should you do once you’ve finished? Why, play some more word games of course. I’ve also got daily Strands hints and answers and Quordle hints and answers articles if you need help for those too, while Marc’s Wordle today page covers the original viral word game.
SPOILER WARNING: Information about NYT Connections today is below, so don’t read on if you don’t want to know the answers.
NYT Connections today (game #592) – today’s words
Today’s NYT Connections words are…
BETTER
BLANKET
SATCHEL
PAGAN
WIDEN
COOLER
WHIP
SMARTER
BASKET
ECLIPSE
BOMBER
UTENSILS
TOP
FEDORA
SURPASS
VIXEN
NYT Connections today (game #592) – hint #1 – group hints
What are some clues for today’s NYT Connections groups?
YELLOW: Superior output
GREEN: Alfresco dining
BLUE: As seen in the Temple of Doom
PURPLE: Sounds like White House residents
Need more clues?
We’re firmly in spoiler territory now, but read on if you want to know what the four theme answers are for today’s NYT Connections puzzles…
NYT Connections today (game #592) – hint #2 – group answers
What are the answers for today’s NYT Connections groups?
YELLOW: OUTDO
GREEN: PICNIC ACCESSORIES
BLUE: PARTS OF AN INDIANA JONES COSTUME
PURPLE: RHYMES OF U.S. PRESIDENT NAMES
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON’T WANT TO SEE THEM.
NYT Connections today (game #592) – the answers
The answers to today’s Connections, game #592, are…
YELLOW: OUTDO – BETTER, ECLIPSE, SURPASS, TOP
GREEN: PICNIC ACCESSORIES – BASKET, BLANKET, COOLER, UTENSILS
BLUE: PARTS OF AN INDIANA JONES COSTUME – BOMBER, FEDORA, SATCHEL, WHIP
PURPLE: RHYMES OF U.S. PRESIDENT NAMES – PAGAN, SMARTER, VIXEN, WIDEN
My rating: Hard
My score: 3 mistakes
Oh my gosh I found today’s Connections difficult.
Maybe if the RHYMES OF U.S. PRESIDENT NAMES had included Chump I would have got there, but this wasn’t the only group I was mentally grappling with.
On my third attempt I managed to link BOMBER, FEDORA, SATCHEL, and WHIP, but it wasn’t because I thought they had anything to do with PARTS OF AN INDIANA JONES COSTUME – if I’m honest, I’d forgotten his bag preference.
Cluelessly, I thought they were accessories named after a person, based on the incorrect assumption that Fedora was someone famous in the 1920s. In fact, the history of the fedora is much more interesting and culminates in a 2016 article that described the fedora hat as the world’s “most-hated fashion accessory”. Yes, that’s the same year a certain red cap rose to prominence.
Yesterday’s NYT Connections answers (Wednesday, 22 January, game #591)
GREEN: RESULTS OF SOME DIGGING – DITCH, HOLE, PIT, TRENCH
YELLOW: TYPES OF ACADEMIC COURSES – DISCUSSION, LAB, LECTURE, SEMINAR
NYT Connections is one of several increasingly popular word games made by the New York Times. It challenges you to find groups of four items that share something in common, and each group has a different difficulty level: green is easy, yellow a little harder, blue often quite tough and purple usually very difficult.
On the plus side, you don’t technically need to solve the final one, as you’ll be able to answer that one by a process of elimination. What’s more, you can make up to four mistakes, which gives you a little bit of breathing room.
It’s a little more involved than something like Wordle, however, and there are plenty of opportunities for the game to trip you up. For instance, watch out for homophones and other wordplay that could disguise the answers.
It’s playable for free via the NYT Games site on desktop or mobile.
Stockholm startup Neko Health has made a big bet on consumers wanting to learn about their state of health and how to prevent things going wrong. Now, investors are making a big bet on Neko.
The startup has raised a fresh $260 million in funding, a Series B that values Neko at $1.8 billion post-money, TechCrunch has learned exclusively.
Neko will be using the capital to break into new markets like the U.S.; continue developing its diagnostics, potentially with acquisitions; and open more clinics in response to demand. With its waitlist now at over 100,000 people – up from 40,000 just a few months ago – Neko has scanned and evaluated 10,000 patients to date in clinics in Stockholm and its newer market of London.
“It’s very clear that there’s incredible demand for a different way of thinking about health care,” Hjalmar Nilsonne, the CEO and co-founder, said in an interview. He spoke to TechCrunch over a video link from New York, where he is working on laying the groundwork for setting up clinics in the U.S. market.
The U.S. is a priority, he said, because right now it accounts for the most people on its waitlist outside of Europe. “Of course, we want to come to the U.S. We think there’s a lot we could contribute to the ecosystem here, made possible by this funding round,” he added.
Lightspeed Venture Partners, a new investor in the company, is leading this Series B, with General Catalyst, O.G. Venture Partners, Rosello, Lakestar and Atomico participating. The round follows a Series A of $65 million in 2023 from Lakestar, Atomico, General Catalyst and Prima Materia, the investment firm co-founded by Spotify’s Daniel Ek, who happens to be the other co-founder of Neko. Prima Materia also seeded Neko with its initial funding but is not an investor in this latest round.
The funding and Neko’s growth are coming at a time when demands are shifting in the world of healthcare.
Around the world, whether healthcare systems are state-backed or privatized, there’s been a rising focus on preventative healthcare to spot signs before they develop into problems, including to offset the costs of handling chronic and complex conditions in populations that are living longer than before.
Alongside that, there has been a massive injection of technology into the worlds of medicine and health: new devices, new insights, and applications powered by, for example, artificial intelligence are changing how doctors are interacting with patients, what they are able to diagnose, and what patients are looking for in a medical environment.
Not all of these advances are evolving seamlessly — very far from it — but they show few signs of going away, and Neko is playing into all of these changes.
The Neko Health experience involves a visit to a clinic — calm, futuristic, minimalist — where, for £300, a customer gets an hour-long exam based around proprietary hardware and software. That exam generates “millions of health data points,” Neko says.
Moles and other marks on your skin are detected and counted as part of a check for skin cancer; waist circumference, blood pressure, blood sugar, cholesterol and triglyceride levels, heart rate, grip strength and other parameters are measured and used to determine whether you are at risk of metabolic syndrome, stroke, heart attack, diabetes and more. The visit includes a consultation with a doctor and recommendations for follow-ups if needed.
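Neko doesn’t disclose how it weighs these measurements, but as a generic illustration of how such parameters can feed a screening rule, here’s the widely used ATP III definition of metabolic syndrome (three or more of five criteria) sketched in Python – emphatically not Neko’s model:

```python
# Illustration only: the standard ATP III screening rule for metabolic
# syndrome (3+ of 5 criteria). This is NOT Neko Health's model -- just
# an example of how measured parameters can feed a simple risk flag.
def metabolic_syndrome_flag(waist_cm, trig_mgdl, hdl_mgdl,
                            bp_sys, bp_dia, glucose_mgdl, male=True):
    criteria = [
        waist_cm > (102 if male else 88),   # central obesity
        trig_mgdl >= 150,                   # high triglycerides
        hdl_mgdl < (40 if male else 50),    # low HDL cholesterol
        bp_sys >= 130 or bp_dia >= 85,      # elevated blood pressure
        glucose_mgdl >= 100,                # elevated fasting glucose
    ]
    return sum(criteria) >= 3

print(metabolic_syndrome_flag(105, 160, 38, 128, 82, 95))  # True: 3 criteria met
```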
Those follow-ups might come shortly after the initial visit — for example, further monitoring of blood pressure or heart activity — or it might be another full appointment the following year. Nilsonne said that currently 80% of its customers have rebooked and paid in advance for appointments in a year’s time.
Considering Neko is a company that has staked its whole ethos on the power of data and advance planning, it had a fairly random start in life.
It was co-founded back in 2018 after Ek reached out to Nilsonne over Twitter to chat about the state of the healthcare market in response to a tweet of Nilsonne’s. Neither has a background in the field – Nilsonne’s previous startup was in climate tech – but through ongoing conversations, early ideas for Neko began to form.
It took six years to bring together a team and work out Neko’s vertically-integrated approach. Even so, Nilsonne said that Neko went into the market hoping for the best but unsure if their idea would resonate; now, according to the company, demand exceeds capacity.
Looking ahead, along with building more clinics to take in more users, Neko is focused on R&D around its medical hardware and software.
It’s starting from a fairly low-tech baseline because of the costs until recently of building and owning medical devices. “The average ECG machine in primary care is 15 years old, meaning the software is 15 years old,” Nilsonne said. “We have a completely different model where we’re vertically integrated, meaning we make these devices, we make the software, and we have the clinic.”
He added that Neko’s aim is to have updates on a yearly cadence, bringing in more parameters to measure, and likely different tiers of service at different price points.
“The body scan today is kind of the iPod moment for Neko,” he said. “The iPod was an iconic product that people loved, and that was exciting. But no one today is using an iPod. It enabled Apple to invest in this incredible paradigm of handheld computational devices. So we very much see this as the beginning of a journey where we’re trying to contribute, you know, incredibly affordable, high quality preventative diagnostics, and every year we’re going to be able to do more and more with less and less.”
The funding round, he said, will “allow us to double down and really increase our investments in making the product better, which is ultimately about solving some of the core problems in health care.”
It will also give Neko a chance to put more space between itself and others looking at preventative healthcare opportunities, such as Zoi in France and Aware in Germany. The capital could also set it apart from efforts from public health services, such as the Health Check provided by the NHS in the U.K., which covers many of the same areas that Neko does.
Some weeks ago, I heard from one of Neko’s early backers that some of the most insistent waitlisters were investors who wanted to check out the company first-hand for the health of their bodies and of their funds.
It seems that getting Lightspeed off the waitlist quickly yielded a strong result. As part of this funding round, Lightspeed partner Bejul Somaia will join Neko’s board.
Samsung has adopted the Content Credentials standard for AI-edited imagery
The Samsung Galaxy S25 range will be the first phones in the world to run the standard
The standard adds a tag and metadata to AI-edited images created on a Galaxy S25 device
Most of the attention at Samsung Galaxy Unpacked was obviously being devoted to the new phones – those in the Samsung Galaxy S25 range – but there was something quite important that slipped under the radar, and that’s the adoption of Content Credentials.
In 2024, the adoption of a standard for marking the creation of imagery and digital content was a hot topic, particularly due to the rise of generative AI and the wave of art theft that accompanied the training of large models. Tech companies began adopting their own metadata markers and watermarks to signify AI alterations, but a shared standard for verifying the authenticity of an image has often been skipped.
One of the front-runners for such a standard is Content Credentials, backed by the Content Authenticity Initiative (CAI). The tool is developed by Adobe, and the Initiative counts Microsoft, Getty Images and Nvidia among its members, to name a few.
With this announcement, Samsung has joined the Coalition for Content Provenance and Authenticity (C2PA), which unifies the work of the CAI and its Content Credentials standard with Project Origin, another organization combatting misinformation but anchored in a news ecosystem that can verify the authenticity of content.
“We are excited to share that Samsung will implement #ContentCredentials for AI-generated images on the #GalaxyS25!” the C2PA wrote on LinkedIn. “Samsung has committed to a consequential step in bringing transparency to the digital ecosystem.”
If you suspect that an image has been altered with AI, then you can drop it into a tool built by Adobe to check its authenticity.
Think of Content Credentials as a ledger of content information: what device an image was captured on, what program (or AI tool) it was altered with, even what settings were active when the original image was created.
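You can inspect that ledger yourself: Adobe publishes an open-source command-line tool, c2patool, that prints a file’s C2PA manifest as JSON. A small Python wrapper might look like the sketch below – the basic invocation is the tool’s documented usage, though output details may vary by version:

```python
# Read an image's Content Credentials via Adobe's open-source c2patool.
# Assumes c2patool is installed and on PATH; output format may vary by
# tool version, so treat this as a sketch rather than a hard contract.
import json
import subprocess

def read_content_credentials(path: str):
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0:
        return None                       # no manifest found, or tool error
    try:
        return json.loads(result.stdout)  # manifest store as JSON
    except json.JSONDecodeError:
        return None

manifest = read_content_credentials("photo.jpg")
print("Content Credentials found" if manifest else "No Content Credentials attached")
```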
With this standard in tow, AI-generated and AI-altered images produced on Samsung Galaxy S25 handsets will receive a metadata-based label, basically noting that AI has tampered with what you’re seeing. The ‘CR’ watermark will also be added to the image. While the S25 family is the very first set of phones to carry the metadata marking on images, it follows camera companies Nikon and Leica, which have also signed up to the standard.
The standard is, speaking broadly, a win for creatives looking to protect their work, but the obvious problem with any standard is a lack of enthusiasm. If not enough companies producing AI tools adopt standards that allow AI-altered content to be easily flagged, then such a system is worthless.
With more than 4,000 members under the wing of the Content Authenticity Initiative, here’s hoping that tools to flag the use of AI keep pace with the growing capabilities of the AI tools themselves.
Macbeth, William Shakespeare’s iconic play, is being reimagined as an interactive video game with a neo-noir vibe — and it’s being developed in part by the Royal Shakespeare Company. The game, titled Lili, is a “screen life thriller video game” where you’ll have access to a modern-day Lady Macbeth’s personal devices, according to a press release.
“Players will be immersed in a stylized, neo-noir vision of modern Iran, where surveillance and authoritarianism are part of daily life,” the release says. “The gameplay will feature a blend of live-action cinema within an interactive game format, giving players the chance to immerse themselves in the world of Lady Macbeth and make choices that influence her destiny.” It sounds kind of like a version of Macbeth inspired by Sam Barlow’s interactive thrillers.
The Royal Shakespeare Company is making the game in collaboration with iNK Stories, a New York-based indie studio and publisher that also made 1979 Revolution: Black Friday. It stars Zar Amir as “Lady Macbeth (Lili),” per the press release.
A software engineer has bought the website “OGOpenAI.com” and redirected it to DeepSeek, a Chinese AI lab that’s been making waves in the open source AI world lately.
Software engineer Ananay Arora tells TechCrunch that he bought the domain name for “less than a Chipotle meal,” and that he plans to sell it for more.
The move was an apparent nod to how DeepSeek releases cutting-edge open AI models, just as OpenAI did in its early years. DeepSeek’s models can be used offline and for free by any developer with the necessary hardware, similar to older OpenAI models like Point-E and Jukebox.
DeepSeek caught the attention of AI enthusiasts last week when it released an open version of its DeepSeek-R1 model, which the company claims performs better than OpenAI’s o1 on certain benchmarks. Outside of models such as Whisper, OpenAI rarely releases its flagship AI in an “open” format these days, drawing criticism from some in the AI industry. In fact, OpenAI’s reticence to release its most powerful models is cited in a lawsuit from Elon Musk, who claims that the startup isn’t staying true to its original nonprofit mission.
Arora says he was inspired by a now-deleted post on X from Perplexity’s CEO, Aravind Srinivas, comparing DeepSeek to OpenAI in its more “open” days. “I thought, hey, it would be cool to have [the] domain go to DeepSeek for fun,” Arora told TechCrunch via DM.
Anthropic’s CEO has promised several updates to Claude, including one that gives it memory.
This means the AI can remember things and bring them up in future conversations.
The memory feature is slated to arrive in the coming months, and it remains to be seen how it performs.
The Claude AI chatbot will receive major upgrades in the months ahead, including the ability to listen and respond by voice alone. Anthropic CEO Dario Amodei explained the plans to the Wall Street Journal at the World Economic Forum in Davos, including the voice mode and an upcoming memory feature.
Essentially, Claude is about to get a personality boost, allowing it to talk back and remember who you are. The two-way voice mode promises to let users speak to Claude and hear it respond, creating a more natural, hands-free conversation. Whether this simply makes Claude more accessible or will let it mimic a human on the phone remains to be seen, though.
Either way, Anthropic seems to be aiming for a hybrid between a traditional chatbot and voice assistants like Alexa or Siri, though presumably with all the benefits of its more advanced AI.
Claude’s upcoming memory feature will allow the chatbot to recall past interactions. For example, you could share your favorite book, and Claude will remember it the next time you chat. You could even discuss your passion for knitting sweaters and Claude will pick up the thread in your next conversation. While this memory function could lead to more personalized exchanges, it also raises questions about what happens when Claude mixes those memories with an occasional hallucination.
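As a generic illustration of how such a memory feature can work – this is a toy, not Anthropic’s implementation – a chatbot can store facts from earlier sessions and prepend them to new prompts:

```python
# Toy chatbot "memory": facts remembered from earlier sessions are
# prepended to the next prompt. Purely illustrative, not Anthropic's design.
memory: dict[str, str] = {}

def remember(key: str, value: str) -> None:
    memory[key] = value

def build_prompt(user_message: str) -> str:
    if not memory:
        return f"User: {user_message}"
    recalled = "\n".join(f"- {k}: {v}" for k, v in memory.items())
    return f"Known facts about the user:\n{recalled}\n\nUser: {user_message}"

remember("favorite book", "The Left Hand of Darkness")
remember("hobby", "knitting sweaters")
print(build_prompt("Any reading suggestions for my next project?"))
```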
Claude demand
Still, there’s no lack of interest in what Claude can do. Amodei mentioned that Anthropic has been overwhelmed by the surge in demand for AI over the past year, with the company’s compute capacity stretched to its limits in recent months.
Anthropic’s push for Claude’s upgrades is part of its effort to stay competitive in a market dominated by OpenAI and tech giants like Google. With OpenAI and Google integrating ChatGPT and Gemini into everything they can think of, Anthropic needs to find a way to stand out. By adding voice and memory to Claude’s repertoire, it hopes to offer an alternative that might lure away fans of ChatGPT and Gemini.
A voice-enabled, memory-enhanced AI chatbot like Claude could also position Anthropic as a leader, or at least a contender, in the trend of making AI chatbots seem more human. The aim seems to be to blur the line between a tool and a companion. And if you want people to use Claude to that extent, a voice and a memory are going to be essential.
Google has agreed to acquire a part of HTC’s extended reality (XR) business for $250 million, expanding its push into virtual and augmented reality hardware following the recent launch of its Android XR platform.
The deal involves transferring some of the HTC VIVE engineering staff to Google and granting non-exclusive intellectual property rights, according to the Taiwanese firm. HTC will retain rights to use and develop the technology.
Google said the acquisition will accelerate Android XR platform development across headsets and glasses. The move comes as tech giants race to establish dominance in XR technology, with Apple and Meta maintaining their lead in the virtual reality market.
Google and HTC will explore additional collaboration opportunities, they said.
It’s an ultra-wide zoom lens designed for full-frame cameras like the Canon EOS R8
Practically identical design to the RF 28-70mm F2.8 IS STM
A £1,249 list price – we’ll confirm US and Australia pricing asap
Canon has unveiled its latest ultra-wide angle zoom lens for its full-frame mirrorless cameras, the RF 16-28mm F2.8 IS STM, and I got a proper feel for it during a hands-on session hosted by Canon ahead of its launch.
It features a bright maximum F2.8 aperture across its entire 16-28mm range, and is a much more compact and affordable option for enthusiasts than Canon’s pro RF 15-35mm F2.8L IS USM lens. Consider the 16-28mm a sensible match for Canon’s beginner and mid-range full-frame cameras instead, such as the EOS R8.
Design-wise, the 16-28mm is a perfect match with the RF 28-70mm F2.8 IS STM lens – the pair share the same control layout and are almost identical in size, even if the 28-70mm lens is around 10 percent heavier.
The new lens is seemingly part of a move by Canon to deliver more accessible fast-aperture zooms that fit better with its smaller mirrorless bodies. The 16-28mm weighs just 15.7oz / 445g and costs £1,249 – much less than the comparable pro L-series lens.
Alongside the RF 28-70mm F2.8 lens – the two lenses are clearly designed to pair up. (Image credit: Tim Coleman)
Attached to the EOS R8. (Image credit: Tim Coleman)
The maximum F2.8 aperture is available whatever focal length you set the lens to. (Image credit: Tim Coleman)
The right fit for enthusiasts
Despite its lower price tag, the 16-28mm still feels reassuringly solid – the rugged lens is made in Japan and features a secure metal lens mount. You get a customizable control ring, autofocus / manual focus switch plus an optical stabilizer switch, and that’s the extent of the external controls.
When paired with a Canon camera that features in-body image stabilization, such as the EOS R6 Mark II, you get up to 8 stops of stabilization; the cheaper EOS R8 isn’t blessed with that feature, in which case the lens alone offers 5.5 stops.
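For a sense of what those stops mean in practice: each stop of stabilization roughly doubles the usable shutter time. Here’s a quick worked example, assuming the classic 1/focal-length handholding rule as the baseline:

```python
# What "stops" of stabilization mean in shutter-speed terms, using the
# rough 1/focal-length handholding rule as a baseline (an assumption;
# real-world results vary by shooter and subject).
def slowest_handheld_shutter(focal_mm: float, stops: float) -> float:
    baseline = 1.0 / focal_mm          # e.g. 1/16 s at 16mm
    return baseline * (2 ** stops)     # each stop doubles the usable time

print(f"{slowest_handheld_shutter(16, 5.5):.2f} s")  # lens alone: ~2.83 s
print(f"{slowest_handheld_shutter(16, 8.0):.2f} s")  # with IBIS:  ~16.00 s
```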
I tested the 16-28mm lens with an EOS R8 and the pair is a perfect match, as is the EOS R6 Mark II, which is only a little bigger.
It’s official – the 16-28mm is made in Japan. (Image credit: Tim Coleman)
The physical controls include a control ring, zoom ring, AF / MF switch plus optical stabilizer switch. (Image credit: Tim Coleman)
The lens packs away smaller with the zoom ring rotated to the off position. (Image credit: Tim Coleman)
At 16mm the lens is physically at its longest. (Image credit: Tim Coleman)
Zoom to 28mm and the lens barrel retracts a little. (Image credit: Tim Coleman)
I didn’t get too many opportunities to take pictures with the new lens during my brief hands-on, but I captured enough sample images in raw and JPEG format to get a good idea of the lens’s optical qualities and deficiencies.
For example, at the extreme wide angle 16mm setting and with the lens aperture wide open at F2.8, raw files demonstrate severe curvilinear distortion and vignetting. Look at the corresponding JPEG, which was captured simultaneously, and you can see just how much lens correction is being applied to get you clean JPEGs out of the camera (check out the gallery of sample images below).
An unprocessed raw file with the lens set to 16mm and F2.8. You can see severe vignetting in the corners and barrel distortion. (Image credit: Tim Coleman)
That exact same photo, but the processed JPEG version. See how much the camera has done to correct all those distortions. (Image credit: Tim Coleman)
Here I’m shooting a selfie at 28mm and F2.8. Barrel distortion is less obvious, although light fall-off is. (Image credit: Tim Coleman)
And here’s the same photo but the processed JPEG. The detail in sharply focused areas – my eyes, stubble and clothing – is super sharp. (Image credit: Tim Coleman)
Again, another uncorrected raw file with the lens set to 16mm and F2.8. (Image credit: Tim Coleman)
And here’s the processed JPEG. (Image credit: Tim Coleman)
28mm F2.8, unedited raw file. (Image credit: Tim Coleman)
Once again, the JPEG version of the image with 28mm F2.8 lens settings. Much cleaner. (Image credit: Tim Coleman)
The detail in this JPEG image, shot at 16mm, is super sharp everywhere in the frame. (Image credit: Tim Coleman)
This image was taken with the lens set to 28mm and the aperture to f/8. Optically these are the optimum settings for the lens, and overall the image quality majorly impresses. (Image credit: Tim Coleman)
Those lens distortions really are quite severe, but when you look at the JPEG output, all is forgiven – even with such heavy processing taking place to correct curvilinear distortion and vignetting, detail is consistently sharp from the center to the very edges and corners of the frame, while light fall off in the corners is mostly dealt with.
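Canon’s correction profiles are proprietary and embedded in the raw metadata, but conceptually the in-camera processing resembles a radial distortion model plus a vignetting gain map. Here’s a rough OpenCV sketch with invented coefficients, just to show the shape of the operation:

```python
# Rough sketch of the kind of correction applied in-camera: radial
# (barrel) distortion removal plus a vignetting gain map. Coefficients
# are invented for illustration; Canon's actual profiles are proprietary.
import cv2
import numpy as np

img = cv2.imread("raw_frame.jpg")
h, w = img.shape[:2]

# Simple pinhole model with made-up barrel distortion coefficients.
K = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float32)
dist = np.array([-0.25, 0.05, 0, 0, 0], dtype=np.float32)  # k1, k2, p1, p2, k3
undistorted = cv2.undistort(img, K, dist)

# Crude vignetting compensation: boost gain with distance from centre.
ys, xs = np.indices((h, w), dtype=np.float32)
r = np.hypot(xs - w / 2, ys - h / 2) / np.hypot(w / 2, h / 2)
gain = (1.0 + 0.6 * r ** 2)[..., None]     # stronger boost in the corners
corrected = np.clip(undistorted * gain, 0, 255).astype(np.uint8)
cv2.imwrite("corrected.jpg", corrected)
```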
I’ll go out on a limb and suggest the target audience for this lens will be less concerned with these lens distortions, so long as it’s possible to get the end results you like. My first impressions are that you can: I’ve grabbed some sharp selfies and urban landscapes with decent control over depth of field, plus enjoyed the extra-wide perspective that makes vlogging a whole lot easier.
A worthy addition to the Canon RF-mount family?
I expect most photographers and filmmakers will primarily use the extreme ends of the 16-28mm’s zoom range: 16mm and 28mm. The former is particularly handy for video work thanks to its ultra-wide perspective, while the range as a whole is versatile for landscape and architecture photography.
That zoom range is hardly extensive, however, and I’m not sure if it’s a lens that particularly excites me, even if it does make a sensible pairing with the RF 28-70mm F2.8 for enthusiasts.
It is much cheaper than a comparable L-series lens, but I’d hardly call a £1,249 lens cheap. Also, why not just pick up the RF 16mm F2.8 STM and the RF 28mm F2.8 prime lenses instead? These are Canon’s smallest lenses for full-frame cameras and the pair combined costs half the price of the 16-28mm F2.8.
As capable as the 16-28mm appears to be on first impressions – it’s a super sharp lens with a versatile maximum aperture – I’m simply not convinced how much extra it brings to the RF-mount table, or whether there’s enough of a case for it for most people.
Scale AI is facing its third lawsuit over alleged labor practices in just over a month, this time from workers claiming they suffered psychological trauma from reviewing disturbing content without adequate safeguards.
Scale, which was valued at $13.8 billion last year, relies on workers it categorizes as contractors to do tasks like rating AI model responses.
Earlier this month, a former worker sued alleging she was effectively paid below the minimum wage and misclassified as a contractor. A complaint alleging similar issues was also filed in December 2024.
This latest complaint, filed January 17 in the Northern District of California, is a class action that focuses on the psychological harms allegedly suffered by six people who worked on Scale’s platform, Outlier.
The plaintiffs claim they were forced to write disturbing prompts about violence and abuse — including child abuse — without proper psychological support, and that they suffered retaliation when they sought mental health counseling. They say they were misled about the job’s nature during hiring and ended up with mental health issues like PTSD due to their work. They are seeking the creation of a medical monitoring program along with new safety standards, plus unspecified damages and attorney fees.
One of the plaintiffs, Steve McKinney, is the lead plaintiff in that separate December 2024 complaint against Scale. The same law firm, Clarkson Law Firm of Malibu, California, is representing plaintiffs in both complaints.
Clarkson Law Firm previously filed a class action suit against OpenAI and Microsoft over allegedly using stolen data — a suit that was dismissed after being criticized by a district judge for its length and content. Referencing that case, Joe Osborne, a spokesperson for Scale AI, criticized Clarkson Law Firm and said Scale plans “to defend ourselves vigorously.”
“Clarkson Law Firm has previously — and unsuccessfully — gone after innovative tech companies with legal claims that were summarily dismissed in court. A federal court judge found that one of their previous complaints was ‘needlessly long’ and contained ‘largely irrelevant, distracting, or redundant information,’” Osborne told TechCrunch.
Osborne said that Scale complies with all laws and regulations and has “numerous safeguards in place” to protect its contributors like the ability to opt-out at any time, advanced notice of sensitive content, and access to health and wellness programs. Osborne added that Scale does not take on projects that may include child sexual abuse material.
In response, Glenn Danas, partner at Clarkson Law Firm, told TechCrunch that Scale AI has been “forcing workers to view gruesome and violent content to train these AI models” and has failed to ensure a safe workplace.
“We must hold these big tech companies like Scale AI accountable or workers will continue to be exploited to train this unregulated technology for profit,” Danas said.
A new ‘Circle to Search’ trick is available on Samsung’s latest Galaxy phones
It lets you sing or hum songs that you want to identify
The feature worked well in our early hands-on demos
Sure, Shazam and the Google Assistant, or even Gemini, can help you identify a song that’s playing in a coffee shop or while you’re out and about. But what about that tune you have stuck in your head that you’re desperate to put a name to?
Suffice it to say, that’s not a problem I have for anything by Springsteen, but it does happen for other songs, and Samsung’s latest and greatest – the Galaxy S25, S25 Plus, and S25 Ultra – might just be able to cure this. It’s courtesy of the latest expansion of Google’s Circle to Search on devices.
Launched on the Galaxy S24 last year and since expanded to other devices like Google’s own family of Pixel phones, Circle to Search lets you long-press at the bottom of the screen and then circle something on screen to figure out what it is or find out more.
For instance, it could be a fun hat within a TikTok or Instagram Reel video, a snazzy button-down, or more info on an upcoming concert or a location like San Jose – where Samsung’s Galaxy Unpacked took place.
Circle to Search for songs
Now, though, when you long-press the home button – or engage the assistant in another way – you’ll see a music note icon.
From there, you can just start singing, and Google will tell you it’s listening. My colleague, TechRadar Editor-at-Large Lance Ulanoff, and I then hummed two tracks: “Hot To Go” by Chappell Roan, which the Galaxy S25 Ultra took a few tries to identify properly, and “Fly Me To The Moon” (a classic), which it got on its first try.
While Lance did have to hum a good bit, it did in fact figure out the song stuck in our heads, and this could make the latest facet of Circle to Search a pretty handy function. It will, of course, also do the job of Shazam and listen to whatever is playing when you select it via the microphone built into your device.
Further, you can use it to circle a video on screen and figure out what was playing – as you can see in the hands-on embed below, it was able to do this for a TikTok. That ultimately doesn’t seem quite as helpful given a video on TikTok – or an Instagram Reel – will note the audio it is using. But this could be particularly useful for a long YouTube video that uses a variety of background music or if you’re streaming a title and can’t figure out the song.
Google’s latest tool expansion for Circle to Search will be available from day one on the Galaxy S25, Galaxy S25 Plus, and Galaxy S25 Ultra, but it’s worth pointing out that the search giant – turned AI giant – has been teasing this feature for a bit, and some even found it hiding in existing code. After our demo of it on the S25 Ultra, we had a hunch it would arrive elsewhere and it should be arriving on other devices with Circle to Search.
As for when it will arrive on the Galaxy S24, Z Flip 6, or Z Fold 6, that remains to be seen, and we’re asking the same question about Samsung’s other new Galaxy AI features. And if you’re keen to learn more about the Galaxy S25 family, check out our hands-on and our Galaxy S25 live blog for the event.
Samsung is adding an entirely new way to Circle to Search – and it could help you figure out that song that’s stuck in your head. (Video credit: TechRadar)