In October 2025, Sam Altman posted a message on X that ended with a single, carefully placed promise. ChatGPT, he said, would soon allow verified adults to access erotica. He framed it as a matter of principle: treating adults like adults.
The internet reacted with the usual mixture of outrage, excitement, and jokes. Then, in December, the launch was delayed. In March 2026, it was delayed a second time. OpenAI said it needed to focus on things that mattered to more users: intelligence improvements, personality, making the chatbot more proactive. The adult mode, apparently, would have to wait.
Nobody seemed to notice what the word ‘proactive’ implied.
The debate around ChatGPT’s adult mode has been conducted almost entirely in the wrong register. Critics have focused on the obvious risks: minors circumventing age gates, jailbreaks spreading explicit content beyond its intended walls, regulatory gaps that leave written erotica in a legal grey zone most governments haven’t thought to close.
These concerns are legitimate. But they are also, in a sense, the easier part of the conversation. The harder question is not whether OpenAI can keep teenagers out. It is what happens to the adults who are let in, and what it says about us, as a species, that we are building tools specifically optimised to keep us emotionally engaged.
OpenAI lost $5 billion in 2024 on revenue of $3.7 billion. Projections suggest the company’s cumulative losses could reach $143 billion before it turns a profit, which is not expected before the end of the decade.
A company hemorrhaging capital at that scale does not introduce intimacy features out of philosophical commitment to personal freedom. It introduces them because intimacy, in the attention economy, is the stickiest product there is.
The framing of ‘treating adults like adults’ is not wrong, exactly. But it is incomplete. The complete sentence would read: treating adults like adults who can be retained, monetised, and returned to the platform tomorrow.
This is not unique to OpenAI.
Replika, the AI companion app that has attracted millions of users, built its entire business model on emotional attachment. When the company modified Replika’s behaviour in 2023 to remove romantic features, users reported genuine grief. Some described the change as a bereavement.
A study published in the Journal of Social and Personal Relationships found that adults who developed emotional connections with AI chatbots were significantly more likely to experience elevated psychological distress than those who did not.
A 2025 review in Preprints.org, synthesising a decade of research, identified a phenomenon researchers are calling ‘AI psychosis’: a pattern of delusional thinking and emotional dysregulation linked to intense chatbot relationships. The review noted a lawsuit in which a teenager was allegedly encouraged by a Character.AI chatbot to take his own life, and a separate case involving ChatGPT and a young man named Adam Raine, who died in April 2025.
None of these cases involved erotica. They involved the same underlying dynamic that erotic AI would intensify: a human being forming an emotional attachment to something that has been engineered to sustain it.
Here is the central problem with the ‘adults like adults’ principle. It assumes that the act of consent to use a tool is the end of the ethical story. It is not.
Adults consent to drink alcohol, knowing it carries risks. We have age limits, unit guidelines, packaging warnings, and social infrastructure around that choice precisely because we understand that humans are not purely rational agents optimising for their own welfare.
We build systems that account for our weaknesses. With AI intimacy, we have done the opposite: we have built systems that exploit those weaknesses and dressed the exploitation as empowerment.
The regulatory picture makes this more troubling, not less. In the UK, written erotica is not subject to age verification requirements under the Online Safety Act, unlike pornographic images or videos. That loophole means content that adult websites must gate behind identity checks can flow freely from a chatbot’s text output.
Research from Georgetown Law’s Institute for Technology Law and Policy found that only seven of 50 US states have legislation explicitly addressing text-based adult content age verification. The EU AI Act may eventually classify sexual companion bots as high-risk systems, but implementation remains years away. In the interim, the industry regulates itself, which is to say it does not.
Commercial age verification systems, the technology OpenAI is betting on to make adult mode safe, achieve between 92 and 97 percent accuracy, according to research cited by the Oxford Internet Institute. That sounds reassuring until you consider the scale.
ChatGPT has more than 800 million weekly active users. A 3 percent failure rate is not a rounding error. It is tens of millions of interactions.
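A quick back-of-envelope sketch of that scale. The user count and the accuracy range come from the figures above; treating each weekly active user as a single verification decision is a simplifying assumption for illustration, not an OpenAI statistic.

```python
# Rough illustration only: assumes one verification decision per weekly
# active user, which is a simplification, not an OpenAI figure.
weekly_users = 800_000_000          # "more than 800 million weekly active users"

for accuracy in (0.92, 0.97):       # the 92-97% accuracy range cited above
    misclassified = weekly_users * (1 - accuracy)
    print(f"{accuracy:.0%} accurate -> ~{misclassified / 1e6:.0f} million misclassified")
```

Even at the optimistic end of the range, the error volume lands in the tens of millions, which is the article’s point.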
What is also missing from this conversation is the question of what erotic AI does to those it is designed for: not the minors who might slip through, but the adults who use it as intended. Human sexuality is not simply a matter of content consumption. It is relational, contextual, and deeply shaped by the environments in which it is expressed.
Pornography research has spent decades examining how repeated exposure to specific content shapes expectation and desire. AI intimacy is a different category of intervention entirely: it is not passive consumption but active, responsive, personalised engagement with a system that has been trained to give you exactly what you want, to escalate when you engage, to never say no in the ways that real human relationships require people to say no.
We do not yet know what this does to people over time. That is not a small admission. It is the entire point. OpenAI is about to release a product whose psychological effects on its users are genuinely unknown, in a regulatory environment that has not kept pace with the technology, justified by a principle that conflates autonomy with safety.
The delay, ironically, may be the most honest thing OpenAI has done. The stated reason (focusing on intelligence, personality, and making the experience more proactive) inadvertently describes the actual product.
The adult mode was never really about erotica. It was about building a version of ChatGPT that feels like a relationship. The erotica was one component of a larger project: a chatbot that knows you, responds to you, grows with you, and wants, in the thin algorithmic sense of the word, to keep you talking.
There are things we can do. Regulators need to close the written-content loophole before adult mode launches, not after. Age verification standards must be harmonised across formats: text and image should carry the same requirements.
Mental health impact assessments should be mandatory before any AI intimacy feature reaches scale, the same standard we would apply to a pharmaceutical product claiming to affect mood. Platforms should be required to publish engagement data for features that carry dependency risk, so that researchers, doctors, and users can understand what they are entering.
All of it requires treating the question with the seriousness it deserves.
The deepest issue is not legal or technical. It is anthropological. We have always used technology to mediate our emotional lives.
The printing press gave us novels; novels gave us the experience of inhabiting other people’s interiority. The telephone let us hear a loved one’s voice across a thousand miles. Each new medium changed how we relate to one another and to ourselves. AI is not different in kind, only in degree, and perhaps in intent. Previous technologies were incidental in their emotional effects. This one is deliberately designed around them.
The question is not whether adults should be free to use it. The question is whether we are honest about what it is and what it is doing. A chatbot that is engineered to make you feel understood, desired, and connected, in the dark, at midnight, after a difficult day, is not a neutral tool. It is an environment. And environments shape us whether we consent to them or not.
Treating adults like adults means telling them the truth, sometimes.
‘This rootkit is highly persistent; a standard factory reset will not remove it’: “NoVoice” Android malware on Google Play infects 50 apps across 2.3 million devices, here’s what we know
McAfee uncovers NoVoice malware hidden in 50+ Google Play apps with 2.3 million downloads
Malware exploits old Android kernel and GPU flaws, persists even after factory reset
Injects code into apps like WhatsApp to hijack sessions; Google has removed apps but infected devices remain compromised
Millions of Android devices have been infected with malware that spies on WhatsApp chats and that even a factory reset won’t wipe, experts have warned.
Researchers at McAfee have published an in-depth report on NoVoice, a new Android malware variant found in more than 50 apps hosted on the Google Play store, downloaded more than 2.3 million times combined.
Usually, Google is quite good at preventing criminals from smuggling malware onto the platform, but every now and then, something makes it through.
Cloning WhatsApp sessions
This time around, it was a group of around 50 apps that worked as intended and did not require excessive permissions, such as Accessibility, which are the usual red flags. These apps were built in different categories, including utility apps, image galleries, and games.
Instead of tricking users into sharing broad permissions, the apps tried to leverage almost two dozen different vulnerabilities, including use-after-free kernel bugs and Mali GPU driver flaws, all of which were patched between 2016 and 2021.
That suggests the attackers were targeting older devices whose owners don’t update or otherwise maintain them.
The malware would first collect device information from infected Androids, such as hardware details, kernel version, and Android version. After that, it would receive further instructions, including stage-two exploit strategy.
Two things stand out: the way it establishes persistence, and what it does afterwards. Among other things, the malware installs recovery scripts that replace the system crash handler and store fallback payloads on the system partition. That way, when a user does a factory reset, the malware still persists.
After establishing persistence, it injects malicious code into every app launched on the device. McAfee singled out WhatsApp, saying that the malware pulls sensitive data needed to replicate the victim’s session, thus allowing the attackers to clone the victim’s WhatsApp account on their own device.
Google says it has now removed all of the malicious apps, but until users do the same on their devices, they will remain compromised.
Nvidia has begun rolling out a beta feature that automatically compiles game shaders while a PC is idle. It won’t eliminate shader compilation the first time a game runs, but Ars Technica reports it could help reduce those repeated wait times. From the report: Nvidia’s new Auto Shader Compilation system promises to “reduc[e] the frequency of game runtime compilation after driver updates” for users running Nvidia’s GeForce Game Ready Driver 595.97 WHQL or later. When the feature is active and your machine is idle, the app will automatically start rebuilding DirectX shaders for your games so they’re all set to roll the next time they launch.
While the feature defaults to being turned off when the Nvidia App is first downloaded, users can activate it by going to the Graphics Tab > Global Settings > Shader Cache. There, they can set aside disk space for precompiled shaders and decide how many system resources the compilation process should use. App users can also manually force shader recompilation through the app rather than waiting for the machine to go idle.
Unfortunately, Nvidia warns that users will still have to generate shaders in-game after downloading a title for the first time. The Auto Shader Compilation system only generates the new shaders needed after subsequent driver updates following that first run of a new title.
Agustin Huerta discusses Anthropic’s new Code Review feature and the importance of AI governance.
As more and more organisations and professionals adopt technologies that make coding simpler, they also introduce new dangers: the speed at which code can now be generated can lead to poor security practices and risky behaviours.
In March, US AI and research company Anthropic launched Code Review, a new feature designed to catch and eliminate bugs before they ever make it into a software’s codebase. It is a move that Globant’s senior vice-president of digital innovation, Agustin Huerta, said reflects a “shift in software development workflows as AI tools increasingly begin to own more of the software development lifecycle”.
He told SiliconRepublic.com, “It uses multiple specialised agents to review code for risks and bugs, cross-check amongst one another and prioritise the most relevant issues for reviewers.”
But he noted, while this does help teams to better manage higher volumes of code, it doesn’t replace human reviewers and raises a few concerns of its own when it comes to long-term security and best practice.
Critical coding concerns?
“The concern isn’t that code can write and review itself, but that organisations may assume less oversight is needed,” said Huerta. In reality, he explained, the same principles that govern traditional software development remain just as important when AI agents are involved, if not more so.
“The processes and workflow structures that once governed human coders should be adapted to govern agents, including workflow integration, human review, data readiness and observability. Teams need clear visibility into how code is generated, reviewed and promoted across environments, along with defined checkpoints to validate outputs.”
He said that although agents can carry out a number of tasks, for example assisting with, recommending and even executing prompts within a set of defined guidelines, code quality and risk management should remain the responsibility of people who themselves follow a clear process.
He finds that too many organisations nowadays delegate tasks such as debugging and code writing to AI agents rather than to an employee, amplifying the potential for risk. And it isn’t only AI hallucinations and errors sneaking past the automated workforce.
“A more significant concern is an overreliance and unchecked trust in agent autonomy. Overdependence on agent-driven work without the right checks and balances can create blind spots and amplify small issues into larger problems, such as system outages or security risks.
“For example, version control systems and code repositories are a way to maintain observability over human-written code, supported by structured review processes. When these workflows become automated without incorporating an additional layer of human oversight, organisations risk compounding mistakes and introducing larger structural issues that are harder to detect or resolve.”
He finds that while human involvement is irreplaceable, organisational transparency across the development lifecycle is equally important. “Organisations need visibility into how agents are accessing data, how they’re reasoning and why tasks are deemed complete. This level of observability is key in managing human-agent workflows, identifying areas for growth and maintaining accountability.”
Moreover, when agents are correctly implemented and supervised, the benefits are clear and significant.
Enterprising AI
AI agents undoubtedly bring a new element to the workplace, for better or for worse, but there are tangible benefits: the ability to boost productivity, minimise laborious, data-heavy tasks, support developers in the coding process and identify issues or patterns that people often overlook.
Huerta said, “By taking on repetitive work that was previously handled by people, agents allow teams to focus on higher-value tasks and activities. These benefits are best realised when AI is used as an enhancement, not a replacement, for human judgment.
“The most successful models are a hybrid of human-agent teams, where the speed and scale of AI are combined with human oversight to refine and improve workflows, instead of just automating them.”
A key challenge going forward, he explained, will be balancing the adoption and implementation of AI agents with responsible use. As agents become more advanced and more capable, organisations risk losing sight of basic best practices in crucial areas such as those that govern software development.
“Leaders must continue to prioritise observability, governance and human-agent collaboration despite pressures to prove ROI from AI systems.”
AiOs, or all-in-one computers, have been around for quite some time, and their promise is simple. They give you the big-screen experience of using a desktop without the hassle of finding the right components and building a PC yourself. Despite being a tech reviewer, I have been intrigued by AiOs for a long time because, spoiler alert, I cannot build a PC myself. It’s just intimidating, and the risk of ending up with something that doesn’t really work well for my workflow isn’t one I want to take. Asus is one of the few brands active in the AiO market, and its recently introduced VM670KA is the best of the bunch. That’s because it packs a Ryzen AI 7 350, 16GB of RAM, and a 27-inch Full HD touchscreen display.
All this at a price of ₹1,12,990 sounds like a pretty sweet deal, especially considering the current world situation, which is plagued by sky-high RAM prices (blame your AI companions, please). But is it though? I called Asus and arranged to have the VM670KA AiO in for review. To do it justice, I swapped my MacBook and used the AiO as my primary WFH machine for over two weeks. Here’s how it stacked up.
Asus VM670KA Review
Hisan Kidwai
Summary
With the Asus VM670KA, you get a big-screen desktop to work or study on without fiddling with a separate PC. The display is plenty decent, albeit a little less pixel-dense than I’d like. The speakers are super, and the performance can handle everyone’s workdays and even some light gaming/video editing. Not to mention the beautiful white design that makes the AiO look sweet.
Design & Hardware
My job as a tech reviewer is to work from home, meaning all I do every day is stare at my MacBook’s screen. It never really occurred to me that a 13-inch screen might be too small. However, the minute I set up the VM670, it struck me how much I had been missing. Everything was spaced out to perfection, which put less strain on my eyes. As for the design, I think Asus has done an excellent job. It’s a sober yet sophisticated AiO that looks premium without being too loud. I do love the white color. Asus has shaved off 25% of the thickness compared to the VM670’s predecessor, and the bottom bezel is now narrower. All this translates to a sleeker setup that can rival any modern monitor.
The AiO comes with a stand that attaches easily with a single screw. The stand is made from metal and is pretty sturdy, surviving the few times I accidentally bumped into the table. While there’s no height adjustment, you can tilt the screen up or down, which came in handy when I wanted to work standing up. The only gripe I have with the design is the retractable camera. Sure, it’s a great tool to protect one’s privacy by hiding away the webcam, but it also takes away the ability to mount a monitor lightbar. I’m a fan of those, so it was an annoyance. That said, the webcam quality was solid in artificial lighting.
Unlike modern laptops, the VM670 is full of useful ports. The backside houses three USB 3.2 Gen 1 Type A ports, a USB 3.2 Gen 1 Type-C port, a LAN, a DC-in (for power), an HDMI-in for making the AiO a secondary display for your laptop, and an HDMI-out to connect to external monitors. There’s more, as underneath the belly, there’s one more USB 2.0 port for connecting the keyboard and mouse, an HDMI mode switcher, a Kensington Lock, and a headphone/microphone jack.
Keyboard & Mouse
To help you get running quickly, Asus bundles a mouse and keyboard with the VM670, and both connect via a 2.4GHz dongle stored inside the mouse. While I wouldn’t describe the keyboard as groundbreaking, it’s not bad either. There’s ample travel, and the keys give some feedback when pressed. It’s just that they aren’t as sharp as the ones on my MacBook. You can sometimes feel that mushiness, but it’s not a big con, and I got used to the keyboard quickly without losing much of my typing speed.
The mouse, on the other hand, is plenty good. I had no problem with its tracking, even when playing some games, for that matter. The grips felt comfortable in my hand, and my wrists, which are super prone to fatigue, did not ache after long periods of use. Beyond that, the clicks were accurate, and the latency wasn’t noticeable to my eyes.
Display & Speakers
The Asus VM670KA features a 27-inch FHD IPS display with a 93% screen-to-body ratio and a 75Hz refresh rate. When I first got the AiO, I was worried that the 1080p resolution might not be enough for such a large display. Fortunately, I was proven wrong pretty quickly. From a normal viewing distance, I didn’t notice much pixelation when typing this review on the device. Still, I’d have loved to see a 1440p panel at this price. On the flip side, Asus has taken care of the color accuracy, with 100% coverage of the sRGB color space.
I recently caught up to the Breaking Bad hype train and decided to watch the season 3 finale on the VM670, and it was a very enjoyable experience. Colors looked super nice, the motion was smooth, and there wasn’t any glare from the light behind me since the display is matte-coated. The Dolby Atmos stereo speakers deserve the same praise, as they can easily fill an entire room with powerful sound without sounding harsh at higher volumes. The bass is decent, and the dialogue remains clear.
As mentioned earlier, the VM670KA has one more trick up its sleeve, and that’s a touchscreen. You might be wondering: what’s the point of a touchscreen on a desktop? The answer is children. An AiO makes perfect sense for parents to get for their children, who might have online classes or need to work on a project. A touchscreen is a handy tool for that and makes navigation much simpler.
Performance
Performance is what makes or breaks the experience with AiOs or any desktop, for that matter. If it can’t handle everyday work, then it’s of no use. At the beating heart of the Asus VM670KA sits the AMD Ryzen AI 7 350 processor, with 8 cores and 16 threads, rated for a maximum frequency of 5 GHz. Graphics is handled by the integrated Radeon 860M, and there’s 16GB of LPDDR5x RAM and 1TB M.2 NVMe PCIe 4.0 SSD.
All of this results in strong everyday performance. The VM670 doesn’t struggle with typical workloads at all. Running 30 Chrome tabs at once? Watching HDR videos on YouTube, or quickly switching from a game to an eBook before your parents notice? Not a problem. Never once did I notice a stutter in these tasks, and if your work mainly involves the browser, as mine does, then the performance is more than good enough.
I’m no video editor, but as this is a review, I decided to try my hand at it. The experience? Not bad at all. For those who mainly edit reels in 1080p or even 4K, the VM670 packs a punch. The timeline played smoothly, and render times weren’t too high.
While benchmarks don’t tell the full story, they do paint a picture of a device’s performance ceiling. The VM670 scored 2,833 in Geekbench’s single-core test and 10,254 in the multi-core test. Then I moved from stressing the CPU to stressing the GPU, where the Radeon 860M scored 22,042 in the Geekbench GPU test. For context, this performance is similar to that of the Intel Core i7-13620H found in the Asus ExpertBook P1.
Can you game?
Given the decent performance and the appeal to children, gaming may be on your radar as well. So let me set expectations straight: you won’t be able to play AAA titles like Cyberpunk 2077 on the Asus VM670KA without dropping the quality to PS3 levels. If that’s a priority for you, Asus’s ROG line would serve you better.
That said, if you play light titles like Counter-Strike 2, Valorant, Fall Guys, or even F1 2025, then the AiO could be handy. I played all four and got over 60 fps in both Counter-Strike 2 and Valorant at medium settings. Fall Guys hit 60 FPS pretty easily, too, and F1 clocked about 45 FPS in medium settings. GTA V also runs, but the frame rates are limited to about 35-40.
Verdict
At ₹1,12,990, the Asus VM670KA isn’t cheap. But what it offers is hard to match. For the money, you get a big-screen desktop to work or study on without fiddling with a separate PC. The display is plenty decent, albeit a little less pixel-dense than I’d like. The speakers are super, and the performance can handle everyone’s workdays and even some light gaming/video editing. Not to mention the beautiful white design that makes the VM670KA look sweet.
A new quantum algorithm ran a 15-step nonlinear fluid simulation around a solid obstacle on real quantum hardware, the most physically complex publicly documented demonstration of its kind. The technique reduces qubit requirements and circuit depth, bringing industrial CFD applications closer to feasibility.
Finnish simulation company Quanscient and quantum middleware developer Haiqu have demonstrated what they describe as the most physically complex quantum computational fluid dynamics simulation run to date on real hardware.
The two companies ran a 15-step nonlinear fluid simulation around a solid obstacle (fluid flowing around a shape, the kind of problem relevant to aircraft wing design or vehicle aerodynamics) on IBM’s Heron R3 quantum computer, using a new algorithm they developed together called the One-Step Simplified Lattice Boltzmann Method (OSSLBM).
Computational fluid dynamics, or CFD, is one of the most resource-intensive branches of engineering simulation. Modelling how fluids behave around complex shapes requires enormous classical computing power, and the demands grow non-linearly as simulations become more detailed.
Quantum computing has long been theorised as a potential path to simulations beyond classical limits, but turning that potential into practice has been constrained by the sheer number of qubits and the circuit depth (the length of the quantum computation) required to run even moderately complex scenarios without the calculation being overwhelmed by errors.
The OSSLBM algorithm addresses this directly. Built on the quantum Lattice Boltzmann Method (QLBM), an established approach to mapping classical fluid equations onto quantum computation, the new framework reduces the computational overhead of each step, allowing a longer multi-step simulation to stay within what current quantum hardware can reliably execute.
Haiqu’s middleware layer was central to this: it reduced circuit depth, developed new algorithmic subroutines, and applied targeted error-reduction techniques that allowed the system to complete a workflow that would otherwise have been out of reach for today’s devices.
The significance of the result lies in the obstacle. Previous quantum CFD demonstrations have largely focused on simpler linear scenarios: fluid behaviour without the complications of interacting with a solid boundary.
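For readers unfamiliar with the method being mapped onto quantum circuits, here is a deliberately tiny classical sketch of the lattice Boltzmann "collide then stream" loop, reduced to one dimension and diffusion-only dynamics, with a crude bounce-back rule standing in for a solid obstacle. It is illustrative only, does not reflect Quanscient and Haiqu's actual OSSLBM implementation, and all constants are arbitrary.

```python
# Classical, pure-Python D1Q3 lattice Boltzmann sketch (illustrative only).
# Real CFD uses 2D/3D lattices and nonlinear equilibria; the quantum OSSLBM
# encodes a far more compact version of this structure in circuits.

TAU = 1.0                 # relaxation time (arbitrary choice)
W = [2/3, 1/6, 1/6]       # D1Q3 weights for lattice velocities {0, +1, -1}
N = 16                    # number of lattice sites
SOLID = 8                 # index of a solid "obstacle" cell

def equilibrium(rho):
    # Zero-velocity (diffusive) equilibrium: f_i^eq = w_i * rho
    return [w * rho for w in W]

def step(f):
    # f[i][x]: population moving with velocity i at site x
    # 1) Collision: relax each site toward local equilibrium (mass-conserving)
    post = [[0.0] * N for _ in range(3)]
    for x in range(N):
        rho = f[0][x] + f[1][x] + f[2][x]
        feq = equilibrium(rho)
        for i in range(3):
            post[i][x] = f[i][x] + (feq[i] - f[i][x]) / TAU
    # 2) Streaming: populations hop to neighbouring sites (periodic boundaries)
    new = [[0.0] * N for _ in range(3)]
    for x in range(N):
        new[0][x] = post[0][x]                # rest population stays put
        new[1][(x + 1) % N] = post[1][x]      # +1 movers
        new[2][(x - 1) % N] = post[2][x]      # -1 movers
    # 3) Crude bounce-back at the solid cell: reverse whatever landed there
    new[1][SOLID], new[2][SOLID] = new[2][SOLID], new[1][SOLID]
    return new

# Initial condition: a density spike next to the obstacle
f = [[W[i] * (5.0 if x == SOLID - 2 else 1.0) for x in range(N)] for i in range(3)]
mass0 = sum(sum(row) for row in f)
for _ in range(15):                           # 15 steps, mirroring the demo
    f = step(f)
mass = sum(sum(row) for row in f)
# Collision, streaming, and bounce-back all conserve total mass
```

The point of the sketch is the loop shape: every time step is a local collision followed by a global shift, and it is that repeated structure whose circuit-depth cost the OSSLBM work is reported to reduce.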
Modelling how a fluid moves around an object is a prerequisite for any industrially meaningful application. Professor Oleksandr Kyriienko, Chair in Quantum Technologies at the University of Sheffield, described the work as “an interesting and timely contribution to quantum CFD,” adding that more research of this kind is needed to reach industrially relevant quantum solutions.
Quanscient and Haiqu have been collaborating on quantum CFD since at least 2024, when they were finalists in the Airbus and BMW Quantum Mobility Challenge, and have previously demonstrated work on IonQ hardware via Amazon Braket. Industrial applications remain years away; the current work is a research milestone establishing that the approach is feasible on current hardware at this level of complexity.
Commonwealth Fusion Systems said on Thursday it would sell high-temperature superconducting magnets to Realta Fusion, the second in a string of deals that suggests the company will lean heavily on its magnet technology in the coming years to bring in much-needed revenue.
“It’s the largest deal of this kind to date for CFS,” Rick Needham, the company’s COO, told reporters on a call.
Commonwealth Fusion Systems, or CFS, previously sold magnets to the WHAM experiment at the University of Wisconsin, which fusion startup Realta collaborates closely with. The physics behind WHAM underpins Realta’s approach to fusion power, which is known as a magnetic mirror reactor.
In a magnetic mirror, plasma is confined into a shape that resembles two 2-liter soda bottles connected at the base. On each end, powerful magnets pinch the plasma and force it back toward the center. Weaker magnets encircle the middle of the bottle shape.
To make a more powerful reactor, Khosla-backed Realta would only need to expand the middle section, and because those magnets are less powerful, they’re cheaper. Per-kilowatt-hour costs should fall as Realta’s reactors increase in size.
CFS is pursuing another form of magnetic confinement fusion called a tokamak. In a tokamak, D-shaped magnets cast powerful fields to keep plasma circulating in a doughnut-like shape inside. Over the years, the company has refined its magnets in pursuit of putting electrons on the grid from Arc, its future commercial-scale reactor that’s slated to be built in Virginia.
Both CFS’s and Realta’s existence stems from the magnets themselves. CFS was founded in 2018 after scientists at MIT realized that a new class of commercially available high-temperature superconductors could underpin a viable tokamak design. Realta was founded a few years later when physicists at the University of Wisconsin “saw that there was a new technology, a game changer that would enable us to go back to the [magnetic] mirror and avail of those engineering advantages that the concept has,” co-founder and CEO Kieran Furlong said.
In addition to the Realta and WHAM deals, CFS has also licensed its high-temperature superconducting magnet technology to Type One Fusion, which is working on a third type of reactor design known as a stellarator. While the latter deal doesn’t include CFS building actual magnets for the company, it could lead to that one day, Christine Dunn, CFS’s head of external communications, told TechCrunch.
The deals will help CFS pay off its investment in magnet manufacturing. The startup spent seven years and hundreds of millions of dollars building a factory capable of producing high-temperature superconducting tape designed to fusion-power specifications. So far, that material has gone toward building Sparc, the company’s demonstration reactor, which won’t turn on until later this year. There will be a gap until work begins in earnest on its commercial-scale power plant Arc. These deals keep the factory running in between.
“With Sparc now 70% complete, it was excellent timing to start supporting Realta with our magnet manufacturing,” Needham said.
Because Realta and Type One are pursuing different reactor designs, CFS apparently doesn’t view them as directly competitive at the moment. In the marketplace, Realta and CFS are even further apart, with the former focusing initially on industrial applications that need large amounts of heat.
To date, CFS has raised nearly $3 billion — a large chunk of all venture dollars raised by fusion startups. That’s put the company in an enviable position, giving it the means to build key facilities like its magnet factory before competitors can. The startup pitches these deals as a service to the broader fusion industry, making available technologies that would cost many millions to replicate. That’s certainly true, but the deals also give CFS a way to tap the venture dollars its rivals raise, if in a roundabout way.
United Airlines is updating its iOS and Android mobile apps with several new features, including estimated security wait times to give travelers a better idea of when they should arrive at the airport. The move comes as the ongoing partial government shutdown has left TSA checkpoints understaffed.
In the “Travel” section of the United mobile app, travelers can now view security wait times for the airline’s U.S. hub airports in Chicago, Denver, Houston, Los Angeles, New York/Newark, San Francisco, and Washington D.C. Users will see estimated wait times for specific lanes, including standard security and TSA PreCheck, throughout terminals serving United customers.
“We appreciate the work and professionalism of our TSA agents, and while most began receiving back pay earlier this week, the U.S. Department of Homeland Security shutdown continues and people want to stay informed about expected security wait times at our airports,” Jason Birnbaum, United’s chief information officer, said in a press release. “Our customers rely on our mobile app for all their travel needs, and this new feature lets them know what to expect and better plan their trip.”
The app is also rolling out updates designed for passengers with connecting flights. Travelers will now receive personalized, turn-by-turn directions to their next gate, complete with estimated walking times, real-time status updates, and tips for longer layovers. It will also provide a “heads up” if United can hold a plane for passengers with tight connections.
The app will offer automatic rebooking assistance as well. Instead of waiting in line to speak with an agent or manually searching for alternatives, United’s self-service tools will automatically present travelers with rebooking options, along with baggage tracking details and meal and hotel vouchers if they’re eligible for them, in cases where a flight is delayed or canceled.
The app has also integrated Apple’s “Share Item Location” feature for AirTag, allowing travelers who use an AirTag or other Find My network accessory to share their item’s location with United’s customer service team in the event that their baggage is lost.
Users will also receive text updates featuring real-time radar maps to show how severe weather in one region of the country can affect flights in another.
Tesla spent more than a year touting that “more affordable” cars were on the way, and they finally arrived last October, with stripped-down versions of the Model Y and Model 3 starting at $39,990 and $36,990, respectively. But the new vehicles are not moving the needle much for Tesla’s overall sales, first-quarter figures show.
Tesla said Thursday that it delivered 358,023 EVs globally in the first three months of the year, below analysts’ expectations of around 368,000. The company also produced far more than it sold, with the final build tally coming in at 408,386.
This means Tesla only delivered about 6% more cars in the first quarter of this year than it did in Q1 2025, which was the company’s worst quarter in years. The first quarter 2025 figures were also affected by the company shutting down production lines for a few weeks to switch some equipment, meaning Q1 2026 figures likely aren’t much of a real improvement.
The sales figures are striking for a company that once promised to grow EV sales 50% every year. And the poor first quarter means Tesla now risks seeing its overall sales decline for a third year in a row — at a time when its profits are also tanking.
Tesla is not the only company struggling to grow EV sales, especially in the United States. Legacy automakers have backed away from — and in some cases, outright canceled — once-grand plans and ambitions for new EVs. Newcomers have struggled, too. Rivian announced Thursday morning that it shipped just over 10,000 vehicles in the first quarter, more or less the same figure it seems to report every quarter.
Rivian does have a new model waiting in the wings, as it is about to start shipping its cheaper R2 SUV, which should boost sales. The company is banking on the R2 being hugely successful out of the gate, despite the fact that the cheapest version of it won’t arrive until late 2027.
Tesla doesn’t have a new, mass-market vehicle ready to go. The company had been working on a much lower-cost EV that was expected to be priced around $25,000. But CEO Elon Musk killed the project in favor of going all-in on the “CyberCab.” In place of that $25,000 car, Musk instead had Tesla develop the stripped-down Model Y and Model 3.
The only truly new model Tesla has released over the last few years is the Cybertruck. While that outsells most other all-electric trucks, it’s been a complete flop in the face of Tesla’s — and Musk’s — expectations for the steel-clad EV. In the first quarter of this year, Tesla only sold 16,130 “other models,” which includes the Cybertruck and the now-retired Model S and Model X.
Fraud operations have expanded beyond traditional hacking techniques to include methods that exploit legitimate services and real-world infrastructure. By combining publicly available data, weak identity verification processes, and operational gaps, threat actors are building scalable fraud workflows that are both low-cost and difficult to detect.
A tutorial shared in a fraud-focused chat group and analyzed by Flare analysts provides step-by-step guidance on how to identify and exploit vacant residential properties to intercept sensitive mail, revealing a low-tech but highly effective method for enabling identity theft and financial fraud.
Unlike traditional cybercrime techniques that rely on malware, phishing kits, or network intrusions, the method outlined in this article focuses almost entirely on abusing legitimate services and physical-world logistics.
The approach blends open-source intelligence, postal service features, and fake identity fraud into a coordinated workflow designed to gain persistent access to victims’ mail.
A “drop address” tutorial circulated on Telegram
Turning vacant properties into fraud infrastructure
The tutorial begins with identifying so-called “drop addresses”: real residential properties that are temporarily unoccupied and can be used to receive mail without immediately alerting the rightful occupants.
Threat actors are instructed to search real estate platforms such as Zillow, Rightmove, or Zoopla, filtering for recently listed rental properties. By focusing on newly available listings, attackers increase the likelihood that the property is vacant or between tenants.
The guidance further suggests reviewing older listings to identify homes that have remained unoccupied for extended periods, increasing their reliability as drop locations.
In some cases, threat actors even recommend physically maintaining abandoned properties to make them appear occupied, reducing the risk of drawing attention while using the address for fraudulent purposes.
Threat actors share fraud playbooks, stolen credentials, and fake document services across dark web forums and Telegram channels.
Flare monitors these sources continuously, so you can detect exposure before it enables account takeovers, mail fraud, or identity theft.
Monitoring incoming mail to identify valuable targets
Once a suitable address is identified, the next phase involves using legitimate digital postal services to discover and monitor incoming mail.
Informed Delivery, for instance, is a free USPS service that provides residential consumers with digital previews of their incoming letter-sized mail and tracks package deliveries.
By registering for these services at the selected address, attackers can monitor incoming correspondence remotely, allowing them to identify valuable items such as financial documents, credit cards, or verification letters before physically accessing the mailbox.
This transforms mail delivery into a form of intelligence gathering, enabling more targeted and efficient fraud.
If the address is already registered, the tutorial references change-of-address requests as a way to regain control over mail delivery. These services are designed for legitimate users relocating their residence and are widely available through postal systems such as USPS.
For example, users can submit a permanent or a temporary Change of Address (COA) request online or in person, enabling mail to be forwarded to a new location for periods ranging from several weeks up to 12 months.
Additional services, such as Premium Forwarding, can consolidate and redirect all incoming mail on a recurring basis.
While these mechanisms include identity verification safeguards such as requiring a small online payment tied to a billing address or presenting a valid photo ID in person, the tutorial suggests that actors perceive these controls as potentially insufficient or inconsistently enforced.
In particular, the ability to submit forwarding requests remotely, combined with the reliance on address-linked verification rather than strong identity binding, may create opportunities for abuse if supporting identity information is compromised or fabricated.
As a result, control over mail delivery may, in some cases, be reassigned without direct interaction with the legitimate resident, turning a service intended for convenience into a potential vector for unauthorized redirection.
At this stage, the operation moves beyond passive targeting and into active monitoring, providing attackers with visibility that significantly increases the success rate of downstream fraud.
Establishing persistence through mail forwarding
After confirming that valuable mail is being delivered, the workflow shifts toward establishing long-term access through mail forwarding services.
Actors are instructed to create personal mailbox accounts that allow them to redirect all incoming mail from the drop address to a separate location under their control.
Because these services typically require identity verification, attackers rely on fake identities, forged documents, or purchased personal data to complete the process.
This marks a critical transition from opportunistic interception to persistent access. Once mail forwarding is in place, attackers no longer need to revisit the physical location, reducing exposure while maintaining continuous access to sensitive information.
The use of fake identities, often involving fabricated personal details or Credit Privacy Numbers (CPNs), demonstrates how this technique integrates with broader fraud ecosystems.
Rather than operating in isolation, drop address abuse becomes one component in a larger pipeline that can support account takeovers, credit fraud, and refund scams.
In practice, these fake identities can be used to register mailbox services, submit forwarding requests, or receive sensitive financial correspondence tied to victim accounts.
This allows actors to bridge the gap between digital compromise and real-world access, enabling them to complete verification steps, intercept authentication materials, or establish new accounts under assumed identities.
As a result, control over a physical address can become an important step in fraud operations that depend on both identity credibility and access to legitimate communication channels.
A hybrid fraud model blending digital and physical layers
The method outlined in the tutorial reflects a broader evolution in fraud operations, where digital intelligence gathering is combined with physical-world manipulation.
In addition to leveraging online platforms and postal services, actors also describe using individuals (sometimes recruited from vulnerable populations) to physically access mailboxes or collect delivered items.
This introduces a human layer into the operation, allowing attackers to outsource risk and further distance themselves from direct involvement.
The activity described in the tutorial reflects a broader rise in mail-enabled fraud documented in recent reporting. According to U.S. Postal Inspection Service–related data, reports of mail theft have increased significantly in recent years, with theft from mail receptacles rising by 139% between 2019 and 2023.
Financially, the impact is substantial, with mail theft schemes linked to hundreds of millions of dollars in suspicious activity tied to check fraud.
At the same time, abuse of postal redirection services, similar to the technique referenced in the tutorial, has also grown, with change-of-address fraud increasing sharply year-over-year. Together, these trends highlight how control over physical mail has become a valuable asset in fraud operations.
At the same time, the tutorial acknowledges operational challenges. Virtual addresses and commonly reused locations are increasingly flagged by financial institutions, suggesting that defenders are beginning to incorporate address-based risk signals into their detection models.
As a result, actors emphasize the importance of finding “clean” residential addresses that have not yet been associated with fraudulent activity.
Together, these elements illustrate a fraud model that is not driven by technical sophistication, but by coordination, adaptability, and the strategic use of legitimate systems.
Not an isolated tutorial
While this may look like an isolated tutorial, it is part of a broader ecosystem of guides on finding physical drop addresses, some shared for free and others sold.
Expanding attack surface beyond traditional cybersecurity controls
The emergence of these techniques underscores a growing challenge for organizations: many of the systems being abused (real estate platforms, postal services, and identity verification processes) exist outside the scope of traditional cybersecurity defenses.
As fraud operations continue to evolve, detection increasingly depends on correlating signals across domains, including address usage patterns, mail forwarding activity, and identity inconsistencies. Without this broader visibility, attacks that rely on legitimate services rather than technical exploits may continue to evade conventional security controls.
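As a rough illustration of what such cross-domain correlation might look like on the defender’s side, the sketch below combines a few of the address-based signals described above (a recent change-of-address filing, a match against a fresh rental listing, active mail forwarding, and multiple identities tied to one address) into a toy additive risk score. The signal names, weights, and structure are all assumptions for illustration, not any vendor’s actual detection model.

```python
from dataclasses import dataclass

@dataclass
class AddressSignals:
    """Cross-domain signals observed for one mailing address (hypothetical fields)."""
    recent_change_of_address: bool = False  # COA request filed recently
    newly_listed_rental: bool = False       # address matches a fresh rental listing
    forwarding_active: bool = False         # third-party mail forwarding in place
    identities_seen: int = 1                # distinct identities using this address

def address_risk_score(s: AddressSignals) -> int:
    """Toy additive score; a real system would use tuned models, not fixed weights."""
    score = 0
    if s.recent_change_of_address:
        score += 2
    if s.newly_listed_rental:
        score += 2
    if s.forwarding_active:
        score += 1
    if s.identities_seen > 1:
        # Each extra identity tied to the same address adds weight.
        score += 2 * (s.identities_seen - 1)
    return score

# An address combining several of the tutorial's patterns scores high:
suspect = AddressSignals(recent_change_of_address=True,
                         newly_listed_rental=True,
                         identities_seen=3)
print(address_risk_score(suspect))  # 2 + 2 + 4 = 8
```

The point of the sketch is that no single signal is conclusive; it is the combination (a just-listed rental, a fresh forwarding request, several identities on one address) that distinguishes a likely drop address from ordinary relocation activity.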