Although not as reviled as nails on a chalkboard, the sound of adhesive tape being peeled is probably at least as distinctive. With every millimeter of tape pulled from the roll seemingly screaming in protest, some have wondered just why the process is noisy enough to be heard from across an open-plan office. Recently [Er Qiang Li] et al. had their paper on a likely explanation published in Physical Review E, in which they examine the supersonic air pulses at the core of this phenomenon.
The shockwaves produced by peeling tape, captured on Schlieren imaging. (Credit: Er Qiang Li et al., 2026)
Using rolls of adhesive tape and two microphones synchronized with two high-speed cameras in a Schlieren imaging setup, they gathered experimental data on this stick-slip mechanism. Incidentally, in addition to this auditory effect, adhesive tape is also known for triboluminescence and even for generating X-rays, making it quite the source of scientific demonstrations, even when it's not being used to create graphene.
What they deduced from the recorded data was that the transverse fractures which suddenly appear after the extended stick phase hold a vacuum until the fracture completes during the brief slip phase, at which point the vacuum collapses very suddenly. This produces pressure pulses of 9,600 Pa and clearly visible shock fronts in the Schlieren images.
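To put that 9,600 Pa figure in perspective, it can be expressed as a sound pressure level relative to the standard 20 µPa hearing threshold. This is only a back-of-envelope comparison (a shock overpressure is not a steady acoustic tone), but it shows why the event is so audible:

```python
import math

# Back-of-envelope: express the reported 9,600 Pa pressure pulse as a
# sound pressure level (SPL) relative to the 20 uPa hearing threshold.
# This treats the shock overpressure as if it were an acoustic amplitude,
# which is only a rough comparison, not a measured loudness.
p = 9600.0       # Pa, peak pressure reported in the paper
p_ref = 20e-6    # Pa, standard SPL reference pressure

spl_db = 20 * math.log10(p / p_ref)
print(f"{spl_db:.0f} dB re 20 uPa")  # ~174 dB at the source
```

The pulse decays rapidly with distance, so nobody hears 174 dB at their desk, but it explains how even the attenuated sound carries across a room.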
Now that we know why peeling adhesive tape from its roll is so noisy, it won't get any quieter, but at least we can add another fascinating science fact to its roll of achievements.
Samsung’s next generation of foldable phones could bring some changes to charging, though not all of them might be what fans are hoping for. According to recent certification listings spotted via SammyGuru, upcoming devices like the Galaxy Z Fold 8 and a new “Wide Fold” variant have appeared on China’s 3C database, hinting at potential updates to charging capabilities.
These listings typically reveal wired charging specs ahead of launch, making them an early indicator of what to expect. But here’s the catch: the “upgrade” might not be as big as it sounds.
What do the leaks actually reveal?
Two upcoming devices, SM-F9710 and SM-F9760, are believed to be the Chinese variants of the Galaxy Z Fold 8 and a new “Galaxy Z Wide Fold.” These listings show support for 15V at 3A charging, which translates to 45W wired charging. If accurate, that would mark a noticeable jump over previous Fold models, which have typically been limited to 25W wired charging.
However, a separate listing for what’s believed to be the Galaxy Z Flip 8 shows 9V at 2.77A (~25W) charging, essentially unchanged from its predecessor. So while the Fold lineup may finally see a boost, the Flip series appears to be sticking with the same charging speeds for now.
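For reference, the quoted wattages follow directly from the listed voltage and current (P = V × I). A quick sketch, where the device labels are the leak's guesses rather than confirmed names:

```python
# Wired charging power implied by the 3C certification listings (P = V * I).
# Voltage/current figures are those reported above; the device names
# attached to each model number are speculation from the leak, not official.
listings = {
    "SM-F9710 (believed Z Fold 8)": (15.0, 3.0),    # 15 V at 3 A
    "SM-F9760 (believed Wide Fold)": (15.0, 3.0),   # 15 V at 3 A
    "believed Z Flip 8": (9.0, 2.77),               # 9 V at 2.77 A
}

for device, (volts, amps) in listings.items():
    watts = volts * amps
    print(f"{device}: {volts} V x {amps} A = {watts:.1f} W")
```

The 15 V / 3 A profile works out to 45W, while 9 V / 2.77 A lands at roughly 25W, matching the figures cited above.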
How big of an upgrade is this?
For the Fold lineup, this is actually a meaningful upgrade. Samsung has stuck with 25W charging for years, so moving to 45W would finally bring it closer to its Galaxy S Ultra devices and noticeably cut down charging times. That said, these numbers only apply to wired charging, as 3C listings don’t reveal wireless speeds.
For buyers, this is a welcome but uneven improvement. The Fold 8 and Wide Fold could see a solid boost, while the Flip 8 may remain unchanged, creating a clear divide in the lineup. It's a step in the right direction, but not quite the full upgrade many were hoping for, especially when rivals like OnePlus and other Chinese brands already go well beyond 100W.
Elon Musk made a game-changing announcement hours ago when he revealed plans for Tesla's Terafab during a live event, taking its work on vehicles and robots literally out of this world. The initiative brings together SpaceX and xAI to create the world's largest chip factory. The sheer scale of the operation is mind-boggling: Terafab will be capable of producing 1 trillion watts' worth of finished chips every year, all under one gigantic roof housing logic circuits, memory storage, and final packaging.
All of this matters because the plan depends on a reliable way to generate solar energy in space and beam it back to Earth, and Terafab is built specifically to support that. We're talking about launching some 100 million tons of capture equipment into orbit every year. Once in orbit, solar-powered satellites will handle the AI heavy lifting, with millions of Tesla Optimus robots on hand to erect and maintain those structures far above Earth.
Those Optimus robots are a significant undertaking in their own right, requiring between 100 and 200 billion watts of chips to function. Factor in the satellites and the scale of demand becomes clear: trillions of watts of chips that no existing manufacturer can supply, at least not yet. According to projections, that shortage will persist until 2030.
That is where Terafab comes in: it is designed to bridge that gap, with enough capacity to overcome the hurdles that have been holding back both ground-based robot fleets and processing power in orbit. To build it, the construction team will use established launch techniques to transport the enormous cargo into place, and robots already in development will take on assembly tasks too dangerous for humans to perform routinely. The result should be a consistent supply of chips to meet rising requirements on Earth and beyond.
ELON MUSK: “We’re starting off with an advanced technology fab here in Austin, and I’d like to thank @GregAbbott_TX and the state of Texas for the support.
So in the advanced technology fab, we will have all of the equipment necessary to make a chip of any kind logical memory,… pic.twitter.com/kQ1r5pCgcn
The driving factor behind all of this is a strong desire to explore the universe, not just envision what’s out there, but to experience it firsthand. As one of the speakers put it, “understanding comes only from direct experience out there in the universe,” and Terafab is the first step in translating that idea into something concrete, something that anyone can track, from the start of creation to the end of delivery.
The decision, made public on Thursday, concludes that Apple’s latest implementation of pulse-oximetry functionality falls outside the scope of Masimo’s asserted rights. The full ITC commission will now review the judge’s ruling and decide whether to adopt it – a step that will determine whether the redesigned watches remain protected…
The 2026 C# Course Bundle offers 8 courses that cover everything C#. You’ll master the fundamentals, explore object-oriented programming, and start building your own apps in no time. It’s on sale for $40.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
‘We should regard it as a privilege to be stepping stones to higher things’: How Arthur C Clarke predicted the rise of AGI and the looming demise of humanity back in 1964
While debate over the timeline – or even the potential – for artificial general intelligence (AGI) rages on in 2026, one futurist may have predicted the breakthrough more than 60 years ago.
Noted British science fiction writer and futurist Arthur C. Clarke touted the arrival of AGI during an interview at the 1964 World’s Fair in New York City.
Speaking to the BBC at the time, Clarke offered sweeping predictions covering everything from “replicator” tools that can “make an exact copy of anything” (3D printing, perhaps?) to the creation of “intelligent and useful servants among the other animals on this planet”.
Great apes, dolphins, and whales were all noted as potential “servants” in this regard, according to Clarke. Suffice it to say, this prediction hasn’t materialized. What does stand out, however, are his predictions about the future of intelligent machines.
“The most intelligent inhabitants of that future world won’t be men or monkeys,” he said. “They’ll be machines, the remote descendants of today’s computers.”
“Present-day electronic brains are complete morons, but this will not be true in another generation. They will start to think, and eventually, they will completely out-think their makers.”
Clarke pondered whether this prospect was “depressing”, but noted that advances in technology on this front represent the next evolutionary step in humanity’s journey.
“We superseded the Cro-Magnon and Neanderthal men, and we presume we’re an improvement,” he added.
“We should regard it as a privilege to be stepping stones to higher things. I suspect that organic evolution has about come to its end, and we are now at the beginning of inorganic or mechanical evolution, which will be thousands of times swifter.”
The AGI conundrum
The debate over whether AGI is even attainable has raged for some time. While typically confined to the realms of science fiction, the advent of generative AI in late 2022 brought the topic back to the fore.
Notably, debate over the actual definition of AGI is a key sticking point for many in the industry, and society more broadly. By Google’s definition, for example, AGI refers to:
“The hypothetical intelligence of a machine that possesses the ability to understand or learn any intellectual task that a human being can. It is a type of artificial intelligence (AI) that aims to mimic the cognitive abilities of the human brain.”
Taking this into account, it’s safe to say that humanity hasn’t reached AGI quite yet, nor is it anywhere close. But major industry players such as OpenAI insist that AGI is their ultimate goal.
In a 2025 blog post, OpenAI CEO Sam Altman reflected on the company’s pursuit of this so-far elusive milestone, suggesting that real progress is being made.
“We are now confident we know how to build AGI as we have traditionally understood it,” Altman wrote.
During a September 2025 interview at the WELT AI Summit, Altman once again banged the drum for an imminent AGI breakthrough, claiming that AI will surpass human intelligence by 2030.
It’s worth noting that OpenAI’s own definition of AGI differs from that of the aforementioned Google – a fact that underscores the conflicting outlook on this subject.
OpenAI defines AGI as a “highly autonomous system that outperforms humans at most economically valuable work”. That definition might, at least in some circles, be a rather low bar to set – especially given advances in AI over the last 18 months.
The path to AGI is becoming clearer
The advent of agentic AI suggests that progress by OpenAI’s definition is being made to some extent. Rather than traditional AI “assistants” rolled out by big tech providers during the early days of the generative AI boom, agents are capable of autonomously conducting tasks on behalf of human workers.
That marked a step change in how enterprises and consumers alike engage with the technology, and it has wide-reaching implications for the future of work. Areas such as customer service, for example, have been firmly in the crosshairs of agentic AI providers, with these roles identified as prime candidates for automation.
In other professions, such as software development, AI is already outperforming human workers in areas such as coding.
To some, these advances might point toward humanity reaching the tipping point on AGI, but a key factor in whether AGI can be declared lies in generality.
A specialist AI tool or agent aimed at conducting one particular task isn’t a marker of AGI; it merely shows that the tool was trained with that task in mind.
Being able to switch between tasks and carry them out at the same level of efficiency, however, is, according to Google. Core characteristics of AGI by the tech giant’s definition include “generalization ability”.
“AGI can transfer knowledge and skills learned in one domain to another, enabling it to adapt to new and unseen situations effectively,” the company notes.
Common cause in big tech
Altman isn’t the only leading industry figure convinced that AGI is achievable and looming around the corner. Industry leaders such as Dario Amodei and Elon Musk have also touted its arrival in the near future.
What these figures all have in common, however, is that their long-term roadmap is based on achieving this goal, and it’s becoming increasingly important in deals between industry players.
Clarke may have loosely predicted a future of intelligent machines capable of human-level thinking, but what he likely couldn’t have predicted is exactly how much was at stake from a financial perspective.
Hannspree Hybri monitor uses ambient light to significantly reduce energy consumption
Reflective display design aims to mimic paper-like readability and comfort
Automatic switching enables backlight use in low ambient light conditions
The Hannspree Hybri monitor attempts to merge paper-like readability with modern display performance, claiming an 80% reduction in energy use through innovative use of ambient light.
At illumination levels above 1,000 lux (common in offices, classrooms, and outdoor-adjacent spaces), the monitor reflects surrounding light instead of relying solely on a backlight.
This reflective approach is designed to mimic the visual comfort of paper, offering high contrast and clarity while lowering eye strain.
Hybrid operation adapts automatically
Its advanced TLCD architecture, coupled with micro-perforated backlight control, supports hybrid operation.
This allows the monitor to switch automatically to its backlight when ambient light falls below 500 lux.
A built-in sensor adjusts brightness in ‘smart mode’, aiming to maintain eye protection and consistent visibility in fluctuating light conditions.
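The two different thresholds (reflective above 1,000 lux, backlight below 500 lux) suggest hysteresis-style switching, which would prevent the panel from flickering between modes when ambient light hovers near a single cutoff. The controller below is a hypothetical sketch of that logic; only the threshold values come from Hannspree's published figures:

```python
# Hypothetical hysteresis controller for the Hybri's hybrid mode.
# The two lux thresholds come from the article; the control logic
# itself is an assumption about how such switching typically works.
REFLECTIVE_ON = 1000   # lux: at or above this, reflect ambient light
BACKLIGHT_ON = 500     # lux: at or below this, use the backlight

def next_mode(current_mode: str, ambient_lux: float) -> str:
    """Return 'reflective' or 'backlight'. Readings in the 500-1000 lux
    dead band keep the current mode, avoiding rapid toggling."""
    if ambient_lux >= REFLECTIVE_ON:
        return "reflective"
    if ambient_lux <= BACKLIGHT_ON:
        return "backlight"
    return current_mode  # dead band: hold the existing mode

# Simulated afternoon: light rises, then fades again.
mode = "backlight"
for lux in [300, 800, 1200, 700, 400]:
    mode = next_mode(mode, lux)
    print(lux, mode)
```

Note how 800 lux keeps the backlight on while 700 lux keeps reflective mode on: the outcome depends on the direction the light level is moving, which is exactly what the gap between the two thresholds buys.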
Measuring 23.8 inches, the Hybri features the ecoVISION Paper Display, which reduces harmful blue light exposure and provides a flicker-free, anti-glare experience.
The display supports 16.7 million colors at a resolution of 1920 x 1080, with a refresh rate of 75Hz and a typical response time of 5ms.
Connectivity includes HDMI, DisplayPort, VGA, and USB Type-C with up to 65W power delivery, while the ergonomic stand offers tilt, swivel, pivot, and height adjustments.
“This is more than just a new product. Hybri sets a new standard for the display industry, where well-being, efficiency, and performance finally come together,” said Martin Kent, Territory Manager, HANNspree.
“It’s a new vision for how we should interact with screens. Paper-like comfort, superior eye care, and full multimedia performance in one device is the future of healthier, smarter digital work and life.”
This device achieves energy savings primarily through its reliance on natural light, consuming as little as 5.2W under bright conditions.
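Hannspree doesn't state which baseline the 80% figure is measured against, but as a back-of-envelope check, a 5.2W draw representing an 80% reduction implies a conventional draw of about 26W, a plausible figure for a 23.8-inch 1080p panel:

```python
# Sanity check on the claimed savings: if 5.2 W is an 80% reduction,
# the implied baseline draw is 5.2 / (1 - 0.8). The baseline is not
# stated in Hannspree's materials, so this is only an inference.
low_power_w = 5.2      # W, claimed draw under bright ambient light
reduction = 0.80       # claimed fractional energy saving

implied_baseline_w = low_power_w / (1 - reduction)
print(f"Implied baseline: {implied_baseline_w:.0f} W")  # ~26 W
```

That consistency makes the headline claim at least internally coherent, though real-world savings will depend on how often ambient light actually stays above the reflective-mode threshold.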
The monitor’s design emphasizes eye care, claiming zero blue light in Eye-care mode, flicker-free operation through DC dimming, sunlight readability, and anti-glare surfaces.
It also features four preset modes that optimize viewing for different tasks, from coding to reading and general productivity.
Despite these claims, the effectiveness of paper-like displays in maintaining clarity in varied lighting remains context dependent.
While the Hybri monitor addresses energy consumption and eye strain, the reliance on reflective light may limit brightness consistency in dim environments.
Professionals or students accustomed to traditional backlit business monitors might notice differences in color vibrancy and motion performance, particularly in low-light conditions.
The hybrid approach offers a compromise, although long-term benefits in both energy savings and visual comfort will require real-world evaluation.
Reddit may soon ask users to prove they’re human, and it might involve your face. During a TBPN podcast, Reddit’s CEO, Steve Huffman, confirmed that the platform is exploring new identity verification methods, including using Face ID or Touch ID-style authentication, to tackle its growing bot problem.
RDDT requiring Face ID was not something I had on my bingo card but something has got to be done about all the fake / botted content — I just don’t know how to sell face-scanning to redditors or even lurkers. https://t.co/7e7K3Di4ip
The idea is simple: as AI-generated accounts become more convincing, Reddit wants stronger ways to confirm that users are real people and not bots pretending to be one.
Why is Reddit considering Face ID-style verification?
Unfortunately, bots are getting too good. Huffman has previously emphasized keeping the platform “human,” and this move fits right into that strategy. AI-generated content and automated accounts are becoming harder to detect, making moderation more challenging and threatening the authenticity of discussions.
As such, verification methods like Face ID or biometric checks could act as a quick way to confirm a real person is behind an account, without requiring traditional ID uploads. But of course, it’s not that simple.
So… are we really scanning faces now?
Reddit isn’t going full sci-fi just yet. The company is still “weighing” its options, which could mean optional verification for certain features, regions, or accounts rather than forcing everyone to scan their face. We’ve already seen a preview of this in places like the UK, where Reddit uses selfies or ID checks for age verification.
The next step could make things feel a lot more seamless and a bit more invasive. Instead of uploading IDs, Reddit may lean on device-level tools like Face ID to confirm you’re human, turning verification into something that happens in the background rather than a full process. Of course, that’s where things get messy.
Biometric checks raise big questions around privacy, data security, and consent, and users aren’t exactly thrilled about handing over their faces to prove they’re not bots. Reddit may be solving one problem, but it opens up another: how much verification is too much, especially on a platform where anonymity is kind of the whole point?
According to Business Insider, the issue came up during a January Google DeepMind town hall, where VP of Global Affairs Tom Lue said the company was “leaning more” into national security work.
Researchers are still studying samples of Ryugu collected by the Japanese Aerospace Exploration Agency’s Hayabusa2 mission. After the first papers focused on the composition of the recovered material, a Japanese team has now found a “complete” set of genetic bases belonging to both DNA and RNA.