Anyone who’s scrolled social media lately knows that AI is everywhere. But we aren’t always great at spotting it when we see it. That’s a big problem, and our frustrations with AI are growing.
AI slop has infected every platform, from soulless images to bizarre videos and superficially literate text. The vast majority of US adults who use social media (94%) believe they encounter content that was created or altered by AI, but only 44% of US adults say they’re confident they can tell real photos and videos from AI-generated ones, according to an exclusive CNET survey.
There are a lot of different ways people are fighting back against AI content. Some solutions are focused on better labels for AI-created content, since it's harder than ever to trust our eyes. Of the 2,443 respondents who use social media, half (51%) believe we need better AI labels online. Others (21%) believe there should be a total ban on AI-generated content on social media. Only a small group (11%) of respondents say they find AI content useful, informative or entertaining.
AI isn’t going anywhere, and it’s fundamentally reshaping the internet and our relationship with it. Our survey shows that we still have a long way to go to reckon with it.
Key findings
Most US adults who use social media (94%) believe they encounter AI content on social media, yet far fewer (44%) can confidently distinguish between real and fake images and videos.
Many US adults (72%) said they take action to determine if an image or video is real, but some do nothing at all; inaction is most common among Boomers (36%) and Gen Xers (29%).
Half of US adults (51%) believe AI-generated and edited content needs better labeling.
One in five (21%) believe AI content should be prohibited on social media, with no exceptions.
US adults don’t feel they can spot AI media
Seeing is no longer believing in the age of AI. Tools like OpenAI’s Sora video generator and Google’s Nano Banana image model can create hyperrealistic media, with chatbots smoothly assembling swaths of text that sound like a real person wrote them.
So it’s understandable that a quarter (25%) of US adults say they aren’t confident in their ability to distinguish real images and videos from AI-generated ones. Older generations, including Boomers (40%) and Gen X (28%), are the least confident. If folks don’t have a ton of knowledge or exposure to AI, they’re likely to feel unsure about their ability to accurately spot AI.
People take action to verify content in different ways
AI’s ability to mimic real life makes it even more important to verify what we’re seeing online. Nearly three in four US adults (72%) said they take some form of action to determine whether an image or video is real when it piques their suspicions, with Gen Z being the most likely (84%) of the age groups to do so. The most obvious — and popular — method is closely inspecting the images and videos for visual cues or artifacts. Over half of US adults (60%) do this.
But AI innovation is a double-edged sword; models have improved rapidly, eliminating the previous errors we used to rely on to spot AI-generated content. The em dash was never a reliable sign of AI, but extra fingers in images and continuity errors in videos were once prominent red flags. Newer AI models usually don’t make those pedestrian mistakes. So we all have to work a little bit harder to determine what’s real and what’s fake.
You can look for discrepancies and labels to identify AI content. (Image: Cole Kan/CNET/Getty Images)
As visual indicators of AI disappear, other forms of verifying content are increasingly important. The next two most common methods are checking for labels or disclosures (30%) and searching for the content elsewhere online (25%), such as on news sites or through reverse image searches. Only 5% of respondents reported using a deepfake detection tool or website.
But 25% of US adults don’t do anything to determine if the content they’re seeing online is real. That lack of action is highest among Boomers (36%) and those in Gen X (29%). This is worrisome — we’ve already seen that AI is an effective tool for abuse and fraud. Understanding the origins of a post or piece of content is an important first step to navigating the internet, where anything could be falsified.
Half of US adults want better AI labels
Many people are working on solutions to deal with the onslaught of AI slop, and labeling is a major area of opportunity. Labeling typically relies on social media users to disclose that a post was made with the help of AI. Platforms can also apply labels behind the scenes, but automated detection is unreliable, which leads to haphazard results. That's likely why 51% of US adults believe that we need better labeling on AI content, including deepfakes. Support was strongest among Millennials and Gen Z, at 56% and 55%, respectively.
Very few (11%) found AI content useful, informative or entertaining. (Image: Cole Kan/CNET/Getty Images)
Other solutions aim to control the flood of AI content shared on social media. All of the major platforms allow AI-generated content, as long as it doesn’t violate their general content guidelines — nothing illegal or abusive, for example. But some platforms have introduced tools to limit the amount of AI-generated content you see in your feeds; Pinterest rolled out its filters last year, while TikTok is still testing some of its own. The idea is to give every person the ability to permit or exclude AI-generated content from their feeds.
But 21% of respondents believe that AI content should be prohibited on social media altogether, no exceptions allowed. That number is highest among Gen Z at 25%. When asked if AI content should instead be allowed but strictly regulated, 36% said yes. These numbers may be explained by the fact that only 11% find that AI content provides meaningful value (that it's entertaining, informative or useful), while 28% say it provides little to no value.
How to limit AI content and spot potential deepfakes
Your best defense against being fooled by AI is to be eagle-eyed and trust your gut. If something is too weird, too shiny or too good to be true, it probably is. But there are other steps you can take, like using a deepfake detection tool. There are many options; I recommend starting with the Content Authenticity Initiative's tool, since it works with several different file types.
You can also check out the account that shared the post for red flags. Many times, AI slop is shared by mass slop producers, and you’ll easily be able to see that in their feeds. They’ll be full of weird videos that don’t seem to have any continuity or similarities between them. You can also check to see if anyone you know is following them or if that account isn’t following anyone else (that’s a red flag). Spam posts or scammy links are also indications that the account isn’t legit.
If you want to limit the AI content you see in your social feeds, check out our guides for turning off or muting Meta AI in Instagram and Facebook and filtering out AI posts on Pinterest. If you do encounter slop, you can mark the post as something you’re not interested in, which should indicate to the algorithm that you don’t want to see more like it. Outside of social media, you can disable Apple Intelligence, the AI in Pixel and Galaxy phones and Gemini in Google Search, Gmail and Docs.
Even if you do all this and still get occasionally fooled by AI, don’t feel too bad about it. There’s only so much we can do as individuals to fight the gushing tide of AI slop. We’re all likely to get it wrong sometimes. Until we have a universal system to effectively detect AI, we have to rely on the tools we have and our ability to educate each other on what we can do now.
Methodology
CNET commissioned YouGov Plc to conduct the survey. All figures, unless otherwise stated, are from YouGov Plc. The total sample size was 2,530 adults, of which 2,443 use social media. Fieldwork was undertaken Feb. 3 to 5, 2026. The survey was carried out online. The figures have been weighted and are representative of all US adults (aged 18 plus).
IBM shares plunged nearly 13% on Monday after Anthropic published a blog post arguing that its Claude Code tool could automate much of the complex analysis involved in modernizing COBOL. The decades-old programming language still underpins an estimated 95% of ATM transactions in the United States and runs on the kind of mainframe systems IBM has sold for generations.
Anthropic said the shrinking pool of developers who understand COBOL had long made modernization cost-prohibitive, and that AI could now flip that equation by mapping dependencies and documenting workflows across thousands of lines of legacy code. The sell-off deepened a rough 2026 for IBM, whose shares are now down more than 22% year to date.
Google has finally released Android 17 Beta 1 after wrapping up its previous testing phase. The release is live as of February 13, 2026. There are several performance improvements, along with updates to foldables and tablets. Additionally, there are a number of visual changes to the Pixel Launcher user interface. With this release, Google is emphasizing long-term development to pave the way for the next Android release.
One of the biggest highlights of Android 17 is Google’s stronger push toward large-screen optimization. Developers are now required to properly adapt their apps for foldables, tablets, and desktop-style modes. Orientation changes and resizable window support are no longer optional. This should lead to a smoother, more consistent experience on larger devices without layout issues.
Furthermore, Android 17 Beta 1 brings a slimmer home screen search bar with a cleaner look. The shortcut now sits inside the bar and can be customized with options like Gemini Live, Translate, or Song Search. Users can also remove the At a Glance widget. Minor tweaks include a refreshed brightness icon and clearer access to the volume panel.
Performance Improvements in Android 17
Google has rolled out various system-level enhancements to make devices more efficient. The most notable change is the introduction of a generational garbage collection system, which reduces CPU load by optimizing memory cleanup across object generations rather than scanning all memory at once.
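Android's runtime (ART) has its own implementation, but the generational idea itself is easy to observe elsewhere: Python's garbage collector happens to be generational too, so a small standard-library experiment can illustrate why the scheme keeps collection cheap. This is only a conceptual sketch, not anything specific to ART.

```python
import gc

# Python's collector is generational: new container objects start in
# generation 0, and anything that survives a collection is promoted to an
# older generation that is scanned far less often. Keeping most of the
# work in the small, young generation is what cuts CPU load.
gc.collect()                        # start from a clean slate
before = gc.get_count()             # (gen0, gen1, gen2) counters

junk = [[] for _ in range(1000)]    # a burst of short-lived allocations
after = gc.get_count()

# The young generation absorbs the burst; older generations barely move,
# so the collector rarely has to walk long-lived memory.
print("before:", before, "after:", after)
```

The same logic motivates Android 17's change: most objects die young, so scanning only the young generation most of the time saves CPU without letting memory grow unbounded.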
Moreover, Android 17 improves app memory management to ensure better utilization of system resources. The update also includes notification-related optimizations that lower memory consumption. Although these upgrades run quietly in the background, they improve speed, stability, and overall device performance.
Media and Camera Enhancements
The update introduces advanced tools for media and camera apps. Users will notice smoother transitions between camera modes and generally more solid performance. Google is also working on providing users with a more unified listening experience across different applications and devices.
How to install Android 17 Beta 1 on your Google Pixel
Enroll your Pixel in the Android Beta Program (google.com/android/beta) if it isn't already.
On your phone, go to Settings > System > Software updates.
Check for updates and install Android 17 Beta 1.
As with any beta software, users should expect occasional bugs and instability. It’s best suited for developers or enthusiasts comfortable with early builds rather than primary daily drivers.
Walk into any school and you will find teachers using classroom technology in very different ways. One teacher builds interactive lessons with embedded videos and real-time polls. Down the hall, another uses technology more selectively, focusing on core features that support daily instruction. Both are effective educators. Both deserve classroom technology that works for them — and their students.
The challenge isn’t that teachers need to change how they work; it’s that most classroom technology is designed with only one pathway in mind. When tools offer multiple entry points instead, they can meet teachers where they are while supporting a wide range of student needs.
Recently, EdSurge spoke with three educators who use ViewSonic’s interactive display technology in distinctly different ways: Rebecca Ganger, technology coach and Chromebook coordinator, who also teaches high school students to repair devices and sponsors her district’s middle school Technology Club; Elena Clemente, technology trainer with 29 years of teaching experience in early elementary grades; and Brendan Powell, elementary STEM teacher. Their experiences illustrate what becomes possible when technology adapts to people rather than demanding that people adapt to it.
EdSurge: Why is it important that classroom technology offers multiple ways to engage?
Powell: Students need an engaging system to help them improve their understanding, and it makes learning more fun. Interactive technology helps a lot with coding, so my students can work through problems with me and are more engaged when they actually get to do the examples. Giving students choices helps them understand different concepts and piques their interest.
Clemente: Students learn in different ways, and teachers bring different approaches to their classrooms. While some students may prefer the interactive tools already displayed, others might prefer to choose which tool to use to demonstrate how to solve a math problem. The same goes for teachers. Some may prefer to use ready-made slides, while others prefer to create on the canvas. By offering choices, we allow both students and teachers to use technology in ways that make learning engaging.
Image Credit: ViewSonic
What makes technology feel approachable rather than intimidating for teachers at different comfort levels?
Clemente: As I have led several professional development sessions for teachers, I know that some want only the basics, such as writing on the canvas or projecting slides. Others have created engaging lessons that bring learning to life. All teachers are able to learn more.
I have found that it is best to demonstrate how to use a tool on the interactive panel, have teachers practice and then discuss how they can use it in their lessons. When teachers take that learning back to their classrooms and apply it in a lesson, the tool feels more approachable.
Ganger: Often, new technology requires you to learn so many things just to be able to use the basics and get started. Being able to use parts of the software and then incorporate more as you become familiar and comfortable is a huge plus. You can start with just a little bit of instruction and then learn more to incorporate additional tools into your lessons as you’re ready. You can use it at your comfort level, and it is also very user-friendly for student participation at the board.
What changes occur when students interact directly with classroom displays?
Powell: When students use the display in my classroom, they are more willing to talk to each other about the process and explain their ideas more clearly.
Ganger: They become more focused on the activity and are excited to participate. Students are so accustomed to auditory and visual sources being their primary ways of obtaining information. Having the opportunity to interact with technology fits into their natural way of learning.
Clemente: One of the big changes I have seen, or rather heard, is the amount of conversation that takes place. Students are able to express their thinking out loud while building speaking and listening skills. Students take pride in being able to share and navigate the interactive panel.
How do you keep students actively involved during interactive lessons?
Ganger: I personally enjoy adding a variety of interactive tools. I incorporate sounds, videos and links to other sites all within my presentation. I also enjoy using game boards with subject-specific questions as review activities. Varying the activities keeps things fresh and interesting for students.
Clemente: One way I keep students actively involved is by having them use their [individual] whiteboards to participate while I am projecting. Students know that they are accountable and that I am looking to call on them to share good examples and demonstrate their learning. I also use partner talks so that students can share what they are learning and gain different perspectives. Students love being called up to engage with the interactive panel, so I call them up in groups. They line up and take turns, or sometimes they work as a team and collaboratively solve the problem.
When it works well, how does technology change your teaching?
Clemente: When technology works well, it makes my job as a classroom teacher easier. I am able to easily share material, provide visually appealing interactive slides and engage with my students using hands-on learning activities that build their technical skills. As a technology trainer, I use technology to demonstrate how teaching can come to life, creating engaging lessons that have a positive impact on student learning.
Ganger: It frees up time typically spent lecturing in front of the room, allowing more one-on-one interaction with students. It provides immediate feedback and allows for easy differentiation of material. Being able to reach all types of learning styles with interactive boards and software is a game-changer.
Powell: The technology that works well in my room has changed how my students access information and made learning more flexible for all of them. One thing I like to say in my room is that technology can help us learn new skills and ways of thinking that will benefit us in the long run. Technology is always evolving, so it helps to have my students involved with me as I’m learning as well.
OpenAI is broadening how it helps large organizations put artificial intelligence into real use. The company announced a new initiative, Frontier Alliances, teaming up with four major consulting firms (Boston Consulting Group (BCG), McKinsey & Company, Accenture, and Capgemini) to help enterprises move beyond pilot AI projects and embed intelligent systems deeply into business workflows.
The announcement, published on OpenAI’s own website, lays out the reasoning behind the push: having powerful AI models isn’t the main bottleneck anymore.
Instead, companies need help designing the strategy, integrating the technology across systems and data, redesigning workflows, and managing organizational change so that AI can actually deliver value at scale.
Central to this effort is Frontier, OpenAI's enterprise platform for building, deploying, and managing AI agents: systems that act like "AI coworkers," performing tasks across software tools, extracting context from business data, and handling workflows end-to-end.
These agents are meant to go beyond simple chat or isolated automation, helping with customer support, sales processes, software development tasks, and more.
In its official press release, OpenAI described several key points about the Frontier Alliances:
The program pairs OpenAI’s Forward Deployed Engineering (FDE) teams with consultants from BCG, McKinsey, Accenture, and Capgemini to help enterprise customers adopt AI reliably and at scale.
Each consulting partner will build dedicated practice groups certified on OpenAI technology, combining technical expertise with deep industry and transformation experience.
The alliances cover both strategy and operational execution, from planning AI adoption to integrating Frontier with core systems and training internal teams.
Leaders from each consulting firm feature prominently in the announcement, stressing that teams need more than just tools: they need governance, change management, and end-to-end support to embed AI into daily operations.
This marks a clear strategic shift for OpenAI. Earlier this year, the company introduced Frontier as a platform designed to give AI agents shared context and capabilities that go beyond isolated demos or narrow use cases.
But real-world deployments require more than technology alone. Large enterprises often struggle with data silos, outdated systems, and the internal alignment needed to scale new technology.
The Frontier Alliances are meant to bridge that gap.
Reuters notes that this move brings OpenAI closer to traditional enterprise software players and differentiates its enterprise offering from simple model licensing by leaning into operational support and integration.
The consulting partners bring decades of experience in transformation and change management, helping customers make AI part of the everyday workflow rather than a one-off experiment.
OpenAI’s approach reflects broader industry trends. Enterprises have spent recent years experimenting with generative AI tools, but many have yet to turn early pilots into sustained production use.
By combining Frontier’s agent platform with consultancy know-how, OpenAI hopes to accelerate adoption and deliver measurable business impact more quickly.
Competition in enterprise AI services remains intense.
Companies like Anthropic, Microsoft, and Google are also targeting corporate customers with their own AI platforms and partnerships.
For OpenAI, the Frontier Alliances are a way to leverage trusted business networks and implementation experience, giving its platform a stronger path into large-scale deployment.
AI is revolutionizing the cybersecurity landscape. From accelerating threat detection to enabling real-time automated responses, artificial intelligence is reshaping how organizations defend against increasingly sophisticated attacks. But with these advancements come new and complex risks: AI systems themselves can be exploited, manipulated, or biased, creating fresh vulnerabilities.
In this session, we’ll explore how AI is being applied in real-world cybersecurity scenarios—from anomaly detection and behavioral analytics to predictive threat modeling. We’ll also confront the challenges that come with it, including adversarial AI, data bias, and the ethical dilemmas of autonomous decision-making.
Looking ahead, we’ll examine the future of intelligent cyber defense and what it takes to stay ahead of evolving threats. Join us to learn how to harness AI responsibly and effectively—balancing innovation with security, and automation with accountability.
Temporal co-founders Samar Abbas and Maxim Fateev have been tackling the same distributed systems problem since their days at Amazon, Microsoft, and Uber. But the AI boom has put the problem “on steroids” as agents move to production, according to Abbas — and investors have taken notice.
Temporal last week announced a $300 million Series D round led by Andreessen Horowitz, pushing its valuation to $5 billion — up from $2.5 billion in October.
Temporal’s revenue increased more than 380% year-over-year, reflecting demand for infrastructure services from companies using AI agents that are taking on more responsibilities.
“There is a massive platform shift happening,” Abbas told GeekWire. “And there is a whole layer of infrastructure being developed right now.”
Temporal’s pitch is something it calls “durable execution,” a new category Abbas says is about giving developers a simpler programming model for long-running, distributed workflows. Instead of wiring together queues, databases, retry mechanisms, and timers to handle failures, engineers write their logic as normal code and Temporal makes it durable behind the scenes.
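Temporal's SDKs are far richer than this, but the core of durable execution (checkpoint every completed step so a crashed workflow resumes where it left off instead of restarting) can be sketched in plain Python. The `durable_run` function and step names below are invented for illustration and are not Temporal's API:

```python
import json
import os
import tempfile

def durable_run(steps, state_path):
    """Run named steps in order, checkpointing each result to disk.

    If the process dies mid-workflow, rerunning with the same state_path
    skips every step that already completed, so each step executes at
    most once. A toy sketch of the idea, not Temporal's actual API.
    """
    done = {}
    if os.path.exists(state_path):
        with open(state_path) as f:
            done = json.load(f)           # resume from the last checkpoint
    for name, fn in steps:
        if name in done:
            continue                      # already completed on a prior run
        done[name] = fn()                 # run the step...
        with open(state_path, "w") as f:
            json.dump(done, f)            # ...and checkpoint immediately
    return done

# Simulate two runs of the same workflow: the second finds both steps
# already checkpointed and performs no new work.
path = os.path.join(tempfile.mkdtemp(), "workflow.json")
calls = []
steps = [
    ("fetch", lambda: calls.append("fetch") or 1),
    ("transform", lambda: calls.append("transform") or 2),
]
durable_run(steps, path)
result = durable_run(steps, path)
print(calls, result)
```

The point of the abstraction is that the retry, persistence, and resume logic lives in the runtime rather than being rewired by hand for every workflow, which is what Temporal sells at production scale.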
Abbas and Fateev launched Temporal in 2019, after they helped build an open-source orchestration engine called Cadence during their time at Uber. The tool was used by companies including HashiCorp, LinkedIn, Airbnb, Coinbase, and others.
“Both of us have been obsessed about this problem space,” Abbas said, describing Temporal as “literally the fourth or fifth time we are building a similar system.”
During the cloud era, Abbas said, Temporal became a “reliability backbone” for developers building mission-critical applications. Now, as AI models get smarter and agents hit production, the company is seeing huge scale.
“We are kind of becoming the core piece of infrastructure which is powering the AI agentic wave,” Abbas said.
Temporal’s customer base ranges from OpenAI, which uses the platform for image generation, to Replit, which uses Temporal to orchestrate coding agents over extended sessions.
“As long-running agents become a primary driver of enterprise value, the execution layer beneath them becomes indispensable,” investors with Andreessen Horowitz wrote in a blog post. “Temporal wasn’t built in reaction to generative AI; it was built to make complex systems durable. But the agentic era has made that need undeniable.”
Asked about a potential AI bubble and broader hype, Abbas pointed to customers like Abridge in healthcare, where doctors can focus on patients instead of note-taking. He also noted transformation across legal workflows, coding agents, customer support, and research.
“There is real value being delivered to real users,” he said.
He envisions a future where “every human on the planet can be called a software developer” and the cost of building software keeps falling, driving demand for a reliable execution backbone.
Temporal is built as a remote-first company, with around 375 employees, 62 of them in the Seattle area. Abbas and Fateev have been based in the region for decades, and many early employees are here as well.
Abbas, who was previously CTO (he swapped roles with Fateev in 2024), said the software infrastructure expertise in Seattle is a good match for the trends Temporal is riding. "Seattle has the right ingredients of talent," he said. "We'll be doubling down and growing in the Seattle area."
As for advice to other founders riding the AI wave, Abbas said it’s about getting clarity on how you deliver value and avoiding all other distractions. “Just know who your users are — are they able to drive value from the product you are building?” He said Temporal is laser-focused on that strategy — and it seems to be working.
According to AdWeek, the price for a 30-second commercial during Super Bowl LX has soared to $8 million, after NBC opened in the summer by offering spots for $7 million. As AdWeek notes, “due to demand, the company has already reached its cap for the number of spots that were available for advertisers to buy during the upfront season.”
$8 million for 30 seconds sometimes means turning a niche product into a national phenomenon. The 30 seconds purchased by Ring went the other way. If you want to see how $8 million can be used to promote mass surveillance enabled by consumer products, here you go:
Sure, it looks pretty innocuous. And what could be better than turning Ring and Flock Safety’s network of cameras into a digital proxy for posting “LOST DOG” signs all over the neighborhood? Well, as it turns out, pretty much everyone saw how problematic this offering was, especially considering what’s already known about Ring, Flock Safety, and both companies’ rather cavalier attitude towards privacy and other aspects of the Fourth Amendment.
To begin with, the “Search Party” feature that allows people to access recordings and images captured by other people’s cameras is already on, which likely comes as a surprise to owners of these devices. Here’s what The Verge’s Jennifer Tuohy discovered last October, shortly after Ring announced its partnership with Flock Safety — a company best known for allowing cops to hunt down people seeking abortions and/or allowing federal officers to perform nationwide searches for whoever they might be looking for (which, of course, would be anyone looking kinda like an immigrant).
[I]t turns out that Search Party is enabled by default. In an email to customers this week, Siminoff wrote that the feature is rolling out to Ring outdoor cameras in November and noted, “You can always turn off Search Party.”
I checked my cameras this morning, and they were all automatically set to enable Search Party. And I’m not alone; Ring users on Reddit have also reported that their cameras have been enabled for Search Party.
This under-reported “feature” was exposed by Ring’s Super Bowl ad, which resulted in enough backlash that Flock Safety no longer has a Ring to wear. Back to Jennifer Tuohy and The Verge:
In a statement published on Ring’s blog and provided to The Verge ahead of publication, the company said: “Following a comprehensive review, we determined the planned Flock Safety integration would require significantly more time and resources than anticipated. We therefore made the joint decision to cancel the integration and continue with our current partners … The integration never launched, so no Ring customer videos were ever sent to Flock Safety.”
While that last sentence may be true, it appears sharing was on by default when it came to Ring’s own cameras. That Flock Safety never got a chance to participate is good to know, but “Search Party” has apparently been active since its implementation last year, even if it was limited to Ring devices.
And while Ring claims the Search Party feature can’t be used to search for “human biometrics,” that’s hardly comforting when it appears Ring definitely wants to add more of this kind of thing to its existing cameras.
On top of this, the company recently launched a new facial recognition feature, Familiar Faces. Combined with Search Party, the technological leap to using neighborhood cameras to search for people through a mass-surveillance network suddenly seems very small.
Ring insists this is not another mass surveillance tool, but rather something that attempts to recognize who’s at any user’s door when sending alerts, in order to differentiate friends and family members from strangers who might be within camera range. Again, there’s some utility to this offering, but the tech lends itself to surveillance abuses, especially when law enforcement may only be a subpoena away from accessing images and recordings captured by privately-owned devices.
Finally, the statement given by Ring only states that this won’t be happening right now, which is a wise choice considering its unpopularity at the moment. But that doesn’t mean Ring and Flock won’t seek to consummate this marriage of surveillance tech, albeit in a more private fashion that doesn’t involve alarming hundreds of millions of sports viewers simultaneously.
In a world where more and more of life happens online, social proof matters. Likes, shares, reviews, and follower counts signal what people think, and those signals build trust and invite participation. But many marketers misread these numbers, treating short-term spikes as what matters most while overlooking the steady growth that lasts. To get the most out of social proof, marketers need a clear plan for measurement, attribution, and experimentation.
Social Proof Metrics
Social proof is about more than numbers that look good; it reflects how people perceive your credibility and reputation. The feedback varies, though. Likes, comments, and follower counts each offer useful clues, but you have to look at the whole picture to make the best choices.
Conversion rate: Tracks whether social proof prompts desired actions, such as signing up, downloading, or buying.
Retention metrics: Show whether initial interest turns into regular use or repeat visits over time.
Sentiment analysis: Reveals whether the reactions surrounding your social proof are positive or negative.
By looking at these numbers together, marketers can distinguish quick jumps in activity from real engagement. Tools such as Stormlikes can help them see what their audience truly cares about.
Attribution Challenges in Social Proof
One of the biggest challenges in using social proof is knowing where to assign credit. A campaign can produce a short burst of attention, but it's important to find out whether those spikes persist. Attribution problems arise when marketers look only at surface numbers without linking them to larger business goals.
Last-click bias: Crediting only the final interaction can make a social proof tactic look more effective than it really is.
Channel overlap: Organic and paid campaigns often intersect, making their individual effects hard to separate.
Short-term spikes: A brief boost, whether from paid follower services or a viral post, may not reflect long-term growth.
Marketers need solid analytics to identify which actions actually drive purchases and repeat visits, rather than just inflating vanity metrics.
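The last-click bias above is easiest to see next to an alternative model. This sketch contrasts last-click attribution with a simple linear multi-touch model over a hypothetical customer journey (channel names are invented for illustration):

```python
def last_click(touchpoints: list[str]) -> dict[str, float]:
    # All credit goes to the final interaction before conversion.
    return {touchpoints[-1]: 1.0}

def linear(touchpoints: list[str]) -> dict[str, float]:
    # Credit is split evenly across every interaction in the journey.
    share = 1.0 / len(touchpoints)
    credit: dict[str, float] = {}
    for t in touchpoints:
        credit[t] = credit.get(t, 0.0) + share
    return credit

journey = ["organic_post", "influencer_mention", "paid_ad"]
print(last_click(journey))  # the paid ad gets 100% of the credit
print(linear(journey))      # each channel gets an equal share
```

Under last-click, the organic post and influencer mention that started the journey earn nothing; a multi-touch model surfaces their contribution, which is exactly the distortion the list above warns about.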
Experimental Approaches to Measure Authentic Uplift
Controlled experiments are the only reliable way to measure the real effect of social proof on behavior. They let marketers learn what works and make decisions based on data rather than intuition.
A/B testing: Compare content with social proof against content without it to see the difference in behavior.
Time-based experiments: Introduce social proof gradually, watching both short-term changes and longer-term trends.
Geo or segment tests: Apply social proof to specific regions or audience segments to isolate its effect.
Paired with clear KPIs, these experimental designs let you distinguish short-term buzz from real growth.
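For the A/B test in the list above, a standard significance check is a two-proportion z-test on conversion counts. A minimal sketch with made-up numbers (a real analysis would also verify sample-size assumptions and pre-register the metric):

```python
import math

def z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-statistic: is variant B's conversion rate
    different from variant A's beyond what chance would explain?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant B shows testimonials (social proof); variant A does not.
z = z_test(conv_a=200, n_a=5000, conv_b=260, n_b=5000)
print(round(z, 2))  # |z| > 1.96 means significant at the 5% level
```

Here the uplift from 4% to 5.2% clears the 1.96 threshold, so the effect is unlikely to be noise; a smaller spike over the same traffic might not be.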
KPIs to Track for True Social Proof
To make social proof work, marketers need both quantitative and qualitative KPIs. No single metric tells the whole story.
Quality of engagement: Not every like carries the same weight; comments, shares, and mentions signal deeper interest.
Follower growth rate: A steady increase in followers can say more than a quick jump.
Referral traffic: Shows whether social proof drives visitors to take meaningful actions on your key pages.
Customer lifetime value (CLV): Ties social proof campaigns to long-term business results.
Influencer amplification: Measures whether high-profile endorsers actually increase audience trust in the brand.
Together, these KPIs reveal how social proof actually performs, helping marketers refine campaigns and achieve lasting results.
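Two of the KPIs above can be sketched concretely. The engagement weights here are an illustrative assumption, not an industry standard — the point is only that deeper interactions should count for more than passive likes:

```python
def engagement_quality(likes: int, comments: int, shares: int) -> float:
    # Hypothetical weights: shares and comments signal deeper interest
    # than a passive like.
    return likes * 1 + comments * 3 + shares * 5

def growth_rate(start_followers: int, end_followers: int) -> float:
    """Fractional follower growth over a period."""
    return (end_followers - start_followers) / start_followers if start_followers else 0.0

print(engagement_quality(likes=500, comments=40, shares=12))
print(round(growth_rate(10_000, 10_450), 3))  # 4.5% growth this period
```

Comparing the weighted score across posts, or the growth rate across periods, gives a steadier signal than raw like counts.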
Ethical Considerations
Ethics matter when experimenting with social proof. Inflated numbers and fake likes or shares erode trust in your brand the moment they are discovered. Some best practices:
Transparency: Clearly disclose any paid partnerships or testing.
Gradual scaling: Roll out tests at a small scale to avoid manufacturing artificial hype.
Complementary strategy: Pair social proof with high-quality content and authentic messaging.
Ethical testing keeps growth sustainable and keeps your work aligned with what your audience expects and trusts.
Social proof can strengthen your brand, but you can't judge its effect by surface numbers alone. A platform like Stormlikes may help when you run tests, but only if you use it responsibly and measure the results carefully. Understanding the data behind social proof lets marketers build campaigns that sustain engagement and enhance brand reputation over the long run.
Watch ITVX when outside the UK with NordVPN (exclusive free gift)
Airs Monday, 23 February
The Love Island All Stars season 3 finale airs on Monday, 23 February, so expect more twists before we find out whether frontrunners Sean Stone and Lucinda Strafford will be crowned champions. And did you know that some viewers can watch Love Island for free with this streaming hack?
Here’s the hack: In the UK, all episodes of Love Island All Stars season 3 are available on ITVX. And guess what, it’s a totally free service.
Yep, you could binge the entire latest season of the dating show, including Saturday’s final episode of 2026, without paying a thing!
Outside the UK? Use a reliable VPN to access your free ITVX stream from anywhere.
How to watch the Love Island All Stars final for free
Set your VPN to a UK location and you'll find that you can sign up to ITVX.
It's clearly a popular workaround for accessing Love Island All Stars without paying. If you fancy trying out the (frankly awesome) NordVPN, we've got a great free gift for you below…
How to watch from abroad (free gift)
NordVPN is our best VPN (we actually have our own in-house expert, Mike, who tests VPNs 24/7 and he rates NordVPN top for price, features, security, etc).
We also find Nord works best for streaming – allowing you to access your domestic streaming services when abroad.
You can sign up in minutes and start watching Love Island All Stars free…
Quick start: Using a VPN to watch Love Island All Stars final free
Once you’ve signed up with your VPN:
1. Open the NordVPN app.
2. Connect to a server based in the UK (London, etc).
3. Fire up ITVX. If that doesn’t work, try it in Google Chrome’s Incognito mode and you should be off to the races.
4. Watch the Love Island All Stars 2026 final at no cost.
In conclusion
It’s been a rollercoaster season of Love Island All Stars and with the bumper 95-minute final episode fast approaching on Monday, February 23, the drama shows no signs of slowing down.
After plenty of romance, dumpings and the chaos of 'Hurricane Belle', and with plenty of couples still in the villa, there's more heartbreak to come before all is said and done. UK audiences are loving it: this season has already amassed over 53 million streams on ITVX.
US audiences can watch on Peacock, of course, but Brits away from home can still stream the entire season completely free with a good VPN.
If you're awaiting the chaotic conclusion of Love Island All Stars season 3, this might be the smartest – and cheapest – way to do it.
You may also be interested in…
We test and review VPN services in the context of legal recreational uses. For example: 1. Accessing a service from another country (subject to the terms and conditions of that service). 2. Protecting your online security and strengthening your online privacy when abroad. We do not support or condone the illegal or malicious use of VPN services. Consuming pirated content that is paid-for is neither endorsed nor approved by Future Publishing.
Despite AI’s progress in building complex software, the ubiquitous PDF remains something of a grand challenge — a format Adobe developed in the early 1990s to preserve the precise visual appearance of documents. PDFs consist of character codes, coordinates, and rendering instructions rather than logically ordered text, and even state-of-the-art models asked to extract information from them will summarize instead, confuse footnotes with body text, or outright hallucinate contents, The Verge writes.
Companies like Reducto are now tackling the problem by segmenting pages into components — headers, tables, charts — before routing each to specialized parsing models, an approach borrowed from computer vision techniques used in self-driving vehicles. Researchers at Hugging Face recently found roughly 1.3 billion PDFs sitting in Common Crawl alone, and the Allen Institute for AI has noted that PDFs could provide trillions of novel, high-quality training tokens from government reports, textbooks, and academic papers — the kind of data AI developers are increasingly desperate for.
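The segment-then-route idea can be sketched in miniature. This is not Reducto's actual system — the block types, parser functions, and sample page below are all hypothetical — but it shows the shape of the approach: classify each region of a page, then dispatch it to a parser specialized for that region type.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Block:
    kind: str     # e.g. "header", "table", "paragraph" from a layout model
    content: str  # raw extracted payload for that page region

def parse_header(b: Block) -> str:
    return f"# {b.content.strip()}"

def parse_table(b: Block) -> str:
    # A real pipeline would hand this region to a table-structure model;
    # here we just tag it so downstream consumers know what it is.
    return f"[TABLE] {b.content.strip()}"

def parse_paragraph(b: Block) -> str:
    return b.content.strip()

# Route each segmented region to the parser for its type; unknown
# kinds fall back to plain-text handling.
PARSERS: dict[str, Callable[[Block], str]] = {
    "header": parse_header,
    "table": parse_table,
    "paragraph": parse_paragraph,
}

def assemble(blocks: list[Block]) -> str:
    """Reassemble parsed regions in reading order."""
    return "\n".join(PARSERS.get(b.kind, parse_paragraph)(b) for b in blocks)

page = [
    Block("header", "Quarterly Report"),
    Block("paragraph", "Revenue grew 12% year over year."),
    Block("table", "Q1 | Q2 | Q3"),
]
print(assemble(page))
```

The hard part in practice is the segmentation itself — recovering reading order and region types from coordinates and rendering instructions — which is why the computer-vision framing borrowed from self-driving systems is apt.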