One morning in May 2019, a cardiac surgeon stepped into the operating room at Boston Children’s Hospital more prepared than ever before to perform a high-risk procedure to rebuild a child’s heart. The surgeon was experienced, but he had an additional advantage: He had already performed the procedure on this child dozens of times—virtually. He knew exactly what to do before the first cut was made. Even more important, he knew which strategies would provide the best possible outcome for the child whose life was in his hands.
How was this possible? Over the prior weeks, the hospital’s surgical and cardio-engineering teams had come together to build a fully functioning model of the child’s heart and surrounding vascular system from MRI and CT scans. They began by carefully converting the medical imaging into a 3D model, then used physics to bring the 3D heart to life, creating a dynamic digital replica of the patient’s physiology. The mock-up reproduced this particular heart’s unique behavior, including details of blood flow, pressure differentials, and muscle-tissue stresses.
This type of model, known as a virtual twin, can do more than identify medical problems—it can provide detailed diagnostic insights. In Boston, the team used the model to predict how the child’s heart would respond to any cut or stitch, allowing the surgeon to test many strategies to find the best one for this patient’s exact anatomy.
That day, the stakes were high. With the patient’s unique condition—a heart defect in which large holes between the atria and ventricles were causing blood to flow between all four chambers—there was no manual or textbook to fully guide the doctors. The condition strains the lungs, so the doctors planned an open-heart surgery to reroute deoxygenated blood from the lower body directly to the lungs, bypassing the heart. Typically with this kind of surgery, decisions would be made on the fly, under demanding conditions, and with high uncertainty. But in this case, the plan had been tested in advance, and the entire team had rehearsed it before the first incision. The surgery was a complete success.
Such procedures have become routine at the Boston hospital. Since that first patient, nearly 2,000 procedures have been guided by virtual-twin modeling. This is the power of the technology behind the Living Heart Project, which I launched in 2014, five years before that first procedure. The project started as an exploratory initiative to see if modeling the human heart was possible. Now with more than 150 member organizations across 28 countries, the project includes dozens of multidisciplinary teams that regularly use multiscale virtual twins of the heart and other vital organs.
This technology is reshaping how we understand and treat the human body. To reach this transformative moment, we had to solve a fundamental challenge: building a digital heart accurate enough—and trustworthy enough—to guide real clinical decisions.
A father’s concern
Now entering its second decade, the Living Heart Project was born in part from a personal conviction. For many years, I had watched helplessly as my daughter Jesse faced endless diagnostic uncertainty due to a rare congenital heart condition in which the position of the ventricles is reversed, threatening her life as she grew. As an engineer, I understood that the heart was an array of pumping chambers, driven by an electrical signal, with its blood flow carefully regulated by valves. Yet I struggled to grasp the unique structure and behavior of my daughter’s heart well enough to contribute meaningfully to her care. Her specialists knew the bleak forecast children like her faced if left untreated, but because every heart with her condition is anatomically unique, they had little more than their best guesses to guide their decisions about what to do and when to do it. With each specialist, a new guess.
Then my engineering curiosity sparked a question that has guided my career ever since: Why can’t we simulate the human body the way we simulate a car or a plane?
At a visualization center in Boston, VR imagery helps the mother of a young girl with a complex heart defect understand the inner workings of her child’s heart. Dassault Systèmes
I had spent my career developing powerful computational tools to help engineers build digital models of complex mechanical systems, using models that ranged from the interactions of individual atoms to the components of entire vehicles. What most of these models had in common was the use of physics to predict behavior and optimize performance. But in medicine today, those same physics-based approaches rarely inform decision-making. In most clinical settings, treatment decisions still hinge on judgments drawn from static 2D images, statistical guidelines, and retrospective studies.
This was not always the case. Historically, physics was central to medicine. The word “physician” itself traces back to the Latin physica, which translates to “natural science.” Early doctors were, in a sense, applied physicists. They understood the heart as a pump, the lungs as bellows, and the body as a dynamic system. To be a physician meant you were a master of physics as it applied to the human body.
As medicine matured, biology and chemistry grew to dominate the field, and the knowledge of physics got left behind. But for patients like my daughter, that child in Boston, and millions like them, outcomes are governed by mechanics. No pill or ointment—no chemistry-based solution—would help, only physics. While I did not realize it at the time, virtual twins can reunite modern physicians with their roots, using engineering principles, simulation science, and artificial intelligence.
A decade of progress
The LHP concept was simple: Could we combine what hundreds of experts across many specialties knew about the human heart to build a digital twin accurate enough to be trusted, flexible enough to personalize, and predictive enough to guide clinical care?
We invited researchers, clinicians, device and drug companies, and government regulators to share their data, tools, and knowledge toward a common goal that would lift the entire field of medicine. The Living Heart Project launched with a dozen or so institutions on board. Within a year, we had created the first fully functional virtual twin of the human heart.
The Living Heart was not an anatomical rendering, tuned to simply replicate what we observed. It was a first-principles model, coupling the network of fibers in the heart’s electrical system, the biological battery that keeps us alive, with the heart’s mechanical response, the muscle contractions that we know as the heartbeat.
The Living Heart virtual twin simulates how the heart beats, offering different views to help scientists and doctors better predict how it will respond to disease or treatment. The center view shows the fine engineering mesh, the detailed framework that allows computers to model the heart’s motion. The image on the right uses colors to show the electrical wave that drives the heartbeat as it conducts through the muscle, and the image on the left shows how much strain is on the tissue as it stretches and squeezes. Dassault Systèmes
Academic researchers had long explored computational models of the heart, but those projects were typically limited by the technology they had access to. Our version was built on industrial-grade simulation software from Dassault Systèmes, a company best known for modeling tools used in aerospace and automotive engineering, where I was working to develop the engineering simulation division. This platform gave teams the tools to personalize an individual heart model using the patient’s MRI and CT data, blood-pressure readings, and echocardiogram measurements, directly linking scans to simulations.
Surgeons then began using the Living Heart to model procedures. Device makers used it to design and test implants. Pharmaceutical companies used it to evaluate drug effects such as toxicity. Hundreds of publications have emerged from the project, and because they all share the same foundation, the findings can be reproduced, reused, and built upon. With each application, the research community’s understanding of the heart snowballed.
Early on, we also addressed an essential requirement for these innovations to make it to patients: regulatory acceptance. Within the project’s first year, the U.S. Food and Drug Administration agreed to join the project as an observer. Over the next several years, methods for using virtual-heart models as scientific evidence began to take shape within regulatory research programs. In 2019, we formalized a second five-year collaboration with the FDA’s Center for Devices and Radiological Health with a specific goal.
That goal was to use the heart model to create a virtual patient population and re-create a pivotal trial of a previously approved device for repairing the heart’s mitral valve. This helped our team learn how to create such a population, and let the FDA experiment with evaluating virtual evidence as a replacement for evidence from flesh-and-blood patients. In August 2024, we published the results, creating the first FDA-led guidelines for in silico clinical trials and establishing a new paradigm for streamlining and reducing risk in the entire clinical-trial process.
In 10 years, we went from a concept that many people doubted could be achieved to regulatory reality. But building the heart was only the beginning. Following the template set by the heart team, we’ve expanded the project to develop virtual twins of other organs, including the lungs, liver, brain, eyes, and gut. Each corresponds to a different medical domain, which has its own community, data types, and clinical use cases. Working independently, these teams are progressing toward a breakthrough in our understanding of the human body: a multiscale, modular twin platform where each organ twin could plug into a unified virtual human.
How a digital twin of the heart is constructed
A cardiac digital twin starts with medical imaging, typically MRI, CT, or both. The slices are reconstructed into the 3D geometry of the heart and connected vessels. The geometry of the whole organ must then be segmented into its constituent parts, so each substructure—atria, ventricles, valves, and so on—can be assigned its unique properties.
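As a loose illustration of that property-assignment step (the tissue labels and stiffness values below are placeholders, not clinical data), the bookkeeping looks something like this:

```python
# Illustrative sketch: once imaging voxels are labeled by substructure,
# each region is mapped to its own material model. All names and
# numbers here are hypothetical stand-ins, not Living Heart values.

TISSUE_PROPERTIES = {
    "left_ventricle":  {"stiffness_kpa": 10.0, "conducting": True},
    "right_ventricle": {"stiffness_kpa": 8.0,  "conducting": True},
    "mitral_valve":    {"stiffness_kpa": 50.0, "conducting": False},
}

def assign_properties(labeled_voxels):
    """Map each labeled voxel (x, y, z, label) to its tissue model."""
    return [
        (x, y, z, TISSUE_PROPERTIES[label])
        for x, y, z, label in labeled_voxels
        if label in TISSUE_PROPERTIES
    ]

voxels = [(0, 0, 0, "left_ventricle"), (1, 0, 0, "mitral_valve")]
model = assign_properties(voxels)
```

In a real pipeline the segmentation itself is the hard part; the point here is only that every downstream simulation step depends on which label each piece of geometry carries.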
At this point, the object is converted to a functional, computational model that can represent how the various cardiac tissues deform under load—the mechanics. The complete digital twin model becomes “living” when we integrate the electrical fiber network that drives mechanical contractions in the muscle tissue.
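That electrical-to-mechanical coupling can be caricatured in a few lines of code. This is a toy sketch, not the Living Heart's finite-element model: an activation wave sweeps along a one-dimensional fiber of cells, and each cell develops active tension shortly after the wave arrives.

```python
import math

# Toy excitation-contraction coupling along a 1D fiber. All parameter
# values are assumptions chosen for illustration.

N_CELLS = 10          # cells along the fiber
CONDUCTION_V = 2.0    # cells activated per millisecond (assumed)
TWITCH_DELAY = 5.0    # ms between activation and tension onset (assumed)

def activation_time(cell_index):
    """Time (ms) at which the electrical wave reaches a cell."""
    return cell_index / CONDUCTION_V

def active_tension(cell_index, t_ms, peak=1.0, tau=20.0):
    """Active tension of one cell at time t: zero before the wave
    arrives, then a simple rising-exponential twitch afterwards."""
    dt = t_ms - activation_time(cell_index) - TWITCH_DELAY
    if dt <= 0:
        return 0.0
    return peak * (1.0 - math.exp(-dt / tau))

# Tension profile along the fiber 10 ms after stimulation: cells near
# the stimulus have contracted more than cells the wave reached later.
profile = [round(active_tension(i, 10.0), 3) for i in range(N_CELLS)]
```

The real model solves this coupling over millions of mesh elements in three dimensions, but the principle is the same: electrical timing determines when and where the muscle squeezes.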
Each part of the heart, such as the left ventricle [left], is superimposed with a detailed digital mesh to re-create its physiology. These pieces come together to form an anatomically accurate rendering of the whole organ [right].Dassault Systèmes
To simulate circulation, the twin adds computational models of hemodynamics, the physics of blood flow and pressure. The model is constrained by boundary conditions of blood flow, valve behavior, and vascular resistance set to closely match human physiology. This lets the model predict blood flow patterns, pressure differentials, and tissue stresses.
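One standard way to impose the vascular-resistance boundary condition described above is a lumped "Windkessel" model, which condenses the downstream circulation into a small resistance-and-compliance circuit. The sketch below uses illustrative, not patient-specific, parameter values:

```python
import math

# Three-element Windkessel boundary condition (illustrative values).
Rp = 0.03   # proximal (characteristic) resistance, mmHg*s/mL (assumed)
Rd = 1.0    # distal resistance, mmHg*s/mL (assumed)
C  = 1.5    # arterial compliance, mL/mmHg (assumed)

def simulate_pressure(flow_fn, t_end=2.0, dt=1e-3, p0=80.0):
    """Integrate dP/dt = (Q - P/Rd) / C with forward Euler, where Q(t)
    is the flow the heart model pushes into the vessel. Returns the
    pressure trace seen at the model outlet (adds the Rp*Q drop)."""
    p, trace = p0, []
    steps = int(round(t_end / dt))
    for i in range(steps):
        q = flow_fn(i * dt)
        p += dt * (q - p / Rd) / C
        trace.append(p + Rp * q)
    return trace

# Pulsatile inflow: ejection during the first third of each beat.
def cardiac_flow(t, period=1.0, peak=400.0):
    phase = t % period
    if phase < period / 3:
        return peak * math.sin(math.pi * phase / (period / 3))
    return 0.0

trace = simulate_pressure(cardiac_flow)
```

Tuning the three lumped parameters against measured pressures is how the boundary conditions are made to "closely match human physiology" without modeling every downstream vessel explicitly.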
Finally, the model is personalized and calibrated using available patient data, such as how much the volume of the heart chambers changes during the cardiac cycle, pressure measurements, and the timing of electrical pulses. This means the twin reflects not only the patient’s anatomy but how their specific heart functions.
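In code, that personalization step amounts to tuning a model parameter until a simulated output matches a measurement. Here is a deliberately toy version, with a made-up contractility-to-stroke-volume curve standing in for a full twin simulation:

```python
# Toy calibration sketch. The response curve and parameter names are
# hypothetical; a real twin would run a full simulation at each step.

def simulated_stroke_volume(contractility, preload=120.0):
    """Stand-in for a twin simulation: a saturating response curve."""
    return preload * contractility / (contractility + 1.0)

def calibrate(measured_sv, lo=0.1, hi=10.0, tol=1e-6):
    """Bisect on contractility until the model reproduces the
    patient's measured stroke volume (the curve is monotonic)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if simulated_stroke_volume(mid) < measured_sv:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

c = calibrate(70.0)   # e.g., an echo-derived stroke volume in mL
```

Real calibration juggles many parameters and measurements at once, which is precisely where the AI-assisted search described below earns its keep.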
When the FDA in silico clinical trial initiative launched in 2019, the project’s focus shifted from these handcrafted virtual twins of specific patients to cohorts large enough to stand in for entire trial populations. That scale is feasible today only because virtual twins have converged with generative AI. Modeling thousands of patients’ responses to a treatment or projecting years of disease progression is prohibitively slow with conventional digital-twin simulations. Generative AI removes that bottleneck.
AI boosts the capability of virtual twins in two complementary ways. First, machine-learning algorithms are unrivaled at integrating the patchwork of imaging, sensor, and clinical records needed to build a high-fidelity twin. The algorithms rapidly search thousands of model permutations, benchmark each against patient data, and converge on the most accurate representation. Workflows that once required months of manual tuning can now be completed in days, making it realistic to spin up population-scale cohorts or to personalize a single twin on the fly in the clinic.
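The shape of that search loop can be sketched simply. Everything below is illustrative: a stand-in simulation, two invented parameters, and plain random search where a real pipeline would use machine-learning surrogates and far richer patient data.

```python
import random

# Sketch of "search permutations, benchmark against patient data, keep
# the best." Parameter names, targets, and the stand-in model are all
# hypothetical.

random.seed(0)

PATIENT_DATA = {"stroke_volume": 72.0, "qrs_ms": 95.0}  # illustrative

def run_twin(params):
    """Stand-in for a full simulation: maps parameters to outputs."""
    return {
        "stroke_volume": 50.0 * params["contractility"],
        "qrs_ms": 120.0 / params["conduction"],
    }

def mismatch(outputs):
    """Sum-of-squares error against the patient's measurements."""
    return sum((outputs[k] - v) ** 2 for k, v in PATIENT_DATA.items())

best, best_err = None, float("inf")
for _ in range(5000):
    candidate = {"contractility": random.uniform(0.5, 2.5),
                 "conduction": random.uniform(0.8, 2.0)}
    err = mismatch(run_twin(candidate))
    if err < best_err:
        best, best_err = candidate, err
```

The gain from ML surrogates is speed: when each `run_twin` call is a multi-hour finite-element solve, a model that predicts the outputs cheaply lets the search cover thousands of permutations in the time a handful of real simulations would take.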
Second, enriching AI models’ training sets with data from validated virtual patients grounds the AI simulations in physics. By contrast, many conventional AI predictions for patient trajectories rely on statistical modeling trained on retrospective datasets. Such models can drift beyond physiological reality, but virtual twins anchor predictions in the laws of hemodynamics, electrophysiology, and tissue mechanics. This added rigor is indispensable for both research and clinical care—especially in areas where real-world data are scarce, whether because a disease is rare or because certain patient populations, such as children, are underrepresented in existing datasets.
Enabling in silico clinical trials
On the research side, the FDA-sponsored In Silico Clinical Trial Project that we completed in 2024 opened a new world for medical innovations. A conventional clinical trial may take a decade, and 90 percent of new drug treatments fail in the process. Virtual twins, combined with AI methods, allow researchers to design and test treatments quickly in a simulated human environment. With a small library of virtual twins, AI models can rapidly create expansive virtual patient cohorts to cover any subset of the general population. As clinical data becomes available, it can be added into the training set to increase reliability and enable better predictions.
The Living Heart Project has expanded beyond the heart, modeling organs throughout the body. The 3D brain reconstruction [top] shows major pathways in the brain’s white matter connecting color-coded regions of the brain. The lung virtual twin [middle] combines the organ’s geometry with a physics-based simulation of air flowing down the trachea and into the bronchi. And the cross section of a patient’s foot [bottom] shows points of strain in the soft tissue when bearing weight. Dassault Systèmes
Virtual twin cohorts can represent a realistic population by building individual “virtual patients” that vary by age, gender, race, weight, disease state, comorbidities, and lifestyle factors. These twins can be used as a rich training set for the AI model, which can expand the cohort from dozens to hundreds of thousands. Next the virtual cohort can be filtered to identify patients likely to respond to a treatment, increasing the chances of a successful trial for the target population.
The trial design can also include a sampling of patient types less likely to respond or with elevated risk factors, thus allowing regulators and clinicians to understand the risks to the broader population without jeopardizing overall trial success. This methodology enhances precision and efficiency in clinical research, providing population-level insights previously available only after many years of real-world evidence.
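A minimal sketch of that cohort workflow, with invented attributes and a toy response model in place of real twin simulations, might look like this:

```python
import random

# Illustrative cohort workflow: sample diverse virtual patients,
# predict each one's response, filter for likely responders, and keep
# a deliberate slice of high-risk patients. All names and the response
# model are hypothetical.

random.seed(42)

def sample_patient():
    return {
        "age": random.randint(18, 90),
        "weight_kg": random.uniform(45, 130),
        "disease_severity": random.uniform(0.0, 1.0),
    }

def predicted_response(p):
    """Toy stand-in for a twin simulation: younger, less severe
    patients respond better to the hypothetical treatment."""
    return max(0.0, 1.0 - 0.006 * (p["age"] - 18)
                       - 0.5 * p["disease_severity"])

cohort = [sample_patient() for _ in range(100_000)]

responders = [p for p in cohort if predicted_response(p) > 0.6]
high_risk  = [p for p in cohort if p["disease_severity"] > 0.9]

# Trial population: likely responders plus a sampled high-risk arm,
# so regulators can see risks without sinking the whole trial.
trial = responders + random.sample(high_risk, 100)
```

The filtering step is the methodological point: enrichment raises the odds of a successful trial for the target population, while the high-risk sample preserves visibility into how the broader population would fare.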
Of course, though today’s heart digital twins are powerful, they’re not perfect replicas. Their accuracy is bounded by three main factors: what we can measure (for example, image resolution or the uncertainty of how tissue behaves in real life), what we must assume about the physiology, and what we can validate against real outcomes. Many inputs, like scarring, microvascular function, or drug effects, are difficult to capture clinically, so models often rely on population data or indirect estimation. That means predictions can be highly reliable for certain questions but remain less certain for others. Additionally, today’s digital twins lack validation for predicting long-term outcomes years in the future, because the technology has been in use for only a few years.
Over time, each of these limitations will steadily shrink. Richer, more standardized data will tighten personalization of the models. AI tools will help automate labor-intensive steps. And the collection of longitudinal data will improve the model’s ability to reliably predict how the body will evolve over time.
Throughout modern medicine, new technologies have sharpened our ability to diagnose, providing ever-clearer images, lab data, and analytics that tell physicians what is presently happening inside a patient’s body. Virtual twins shift that paradigm, giving clinicians a predictive tool.
This “Living Lung” virtual-twin simulation shows strain patterns during breathing. Mona Eskandari/UC Riverside
Early demonstrations are already appearing in many areas of medicine, including cardiology, orthopedics, and oncology. Soon, doctors will also be able to collaborate across specialties, using a patient-specific virtual twin as the common ground for discussing potential interactions or side effects they couldn’t predict independently.
Although these applications will take some time to become the standard in clinical care, more changes are on the horizon. Real-time data from wearables, for example, could continuously update a patient’s personalized virtual twin. This approach could empower patients to understand and engage more deeply in their care, as they could see the direct effects of medical and lifestyle changes. In parallel, their doctors could get comprehensive data feeds, using virtual twins to monitor progress.
Imagine a digital companion that shows how your particular heart will react to different amounts of salt intake, stress, or sleep deprivation. Or a visual explanation of how your upcoming surgery will affect your circulation or breathing. Virtual twins could demystify the body for patients, fostering trust and encouraging proactive health decisions.
A new era of healing
With the Living Heart Project, we’re bringing physics back to physicians. Modern physicians won’t need to be physicists, any more than they need to be chemists to use pharmacology. However, to benefit from the new technology, they will need to adapt their approach to care.
This means no longer seeing the body as a collection of discrete organs and considering only symptoms, but instead viewing it as a dynamic system that can be understood, and in most cases, guided toward health. It means no longer guessing what might work but knowing—because the simulation has already shown the result. By better integrating engineering principles into medicine, we can redefine it as a field of precision, rooted in the unchanging laws of nature. The modern physician will be a true physicist of the body and an engineer of health.
AleRunner writes: “China is helping Cuba race to capture renewable solar energy as the United States imposes an effective oil blockade on the Caribbean island, creating its worst energy crisis in decades,” reports The Washington Post. Later in the article, it states that “China’s decades-long push into clean energy technology is now helping to protect it from the soaring oil and gas crisis spurred by Trump’s war against Iran,” and that “Chinese exports of solar equipment to Cuba skyrocketed from about $5 million in 2023 to $117 million in 2025 and show no sign of stopping.” According to researchers from Ember, solar could be responsible for as much as 10% of Cuba’s electricity generation. “That would be among the fastest expansions of solar energy anywhere […] and place Cuba ahead of most countries — including the U.S. — in the share of electricity generated by sun power,” the report says.
As the Iran war drives energy prices higher, countries around the world are working overtime to reduce their reliance on fossil fuels. China sees this as a big opportunity. “Chinese authorities have made clear that they intend to replicate what they’re doing in Cuba elsewhere,” reports the Washington Post.
California, Massachusetts, Connecticut and New York are leading a group of 20 other states in suing the US Environmental Protection Agency for renouncing its ability to regulate greenhouse gas emissions, The New York Times reports. The lawsuit specifically argues that the EPA’s decision to rescind a 2009 study that determined greenhouse gases are dangerous to public health was illegal. The study, which is the source of what’s called the “Endangerment Finding,” was one of several justifications — along with things like the Clean Air Act — for the agency’s ability to regulate emissions.
Rescinding the finding nullified the EPA’s evidence for things like emissions standards and a variety of other regulations that attempted to reduce the amount of greenhouse gases produced by the automotive, coal and oil industries. The Trump administration framed the rollback as a cost-saving measure, but it was also a major blow to the government’s ability to fight climate change. Greenhouse gases, which include things like carbon dioxide, methane and nitrous oxide, collect in the atmosphere and warm the planet, upsetting weather patterns and negatively impacting the environment. Determining that the changes caused by greenhouse gases posed a risk to public health gave the EPA the authority to regulate them under its existing mandate to address air pollution. It’s an authority the agency could have again, depending on the result of this litigation.
Of course, winning a lawsuit isn’t necessary to restore the EPA’s role in fighting climate change. Congress could do that now by passing a new law. The legal route is just faster, and potentially riskier. The New York Times writes that this new lawsuit was filed in the US Court of Appeals for the District of Columbia, and could ultimately be combined with an existing lawsuit from environmental groups. Depending on how the case fares in the lower court, it may eventually be appealed to the US Supreme Court, which could decide on an even more restrictive interpretation of the EPA’s role.
The Nothing Phone 4a Pro is one of the most distinctive phones on the market, with a hardware design, software and features you won’t find elsewhere. It’s a genuine joy to use for non‑demanding users, and a great choice if you’re bored of the same old glass rectangle slabs.
Unique design and wonderful metal build
Glyph matrix can actually be useful
Strong battery life
Brilliant, big display
No interactive Glyph Toys
Inconsistent camera stabilisation performance
Not the fastest phone out there
Key Features
Review Price: £499
Unique metal build
We very rarely see all-metal phones like the Nothing Phone 4a Pro these days, offering a durable alternative to the usual glass designs.
Polished, stylised hardware
Nothing OS is a visual treat, offering one of the most visually interesting Android skins around, packed with unique features.
A big, gorgeous screen
The Nothing Phone 4a Pro’s 6.8-inch AMOLED screen feels anything but mid-range in use.
Introduction
It’s safe to say that few companies make phones the same way that Nothing does. And while it’s a bit of a departure from some of its previous efforts, there’s something quite special about the 4a Pro.
Sure, there might be some compromises in some parts of the experience, but there’s so much to love about it. I’ve been putting it to the test for the past few weeks, and here’s what I think.
Design
Mostly metal design
Glyph Matrix, but it’s not interactive
Only IP65-rated
For its first few generations of products, Nothing’s phones all shared a similarity in design: transparent backs. Each phone – including the regular non-Pro 4a – has that in common. With the 4a Pro, Nothing has gone in a different direction, but has still somehow imbued it with a clear sense of Nothing-ness.
Image Credit (Trusted Reviews)
Rather than have an entire back cover made of transparent glass with interesting details and texture beneath it, the phone is all metal. It’s got a solid aluminium unibody design, the likes of which we rarely see these days. In fact, apart from OnePlus’ brief flirtation with the OnePlus Nord 4, it’s generally not been seen at all in years in the Android space.
One thing that can be said about that decision is that it gives the phone a real sense of solidity. And I can’t deny it, I’ve actually missed that feeling of aluminium in my hand. It’s not as slippery as glass, and gives you that sense of security that if you drop it, that back panel isn’t going to crack.
It wouldn’t be Nothing without at least some playful iteration of transparency though, and so the company decided to make it a feature of the camera island. Which, again, I think is a great decision.
For many manufacturers, that bump on the back of the phone is very much thought of as a practical necessity to make space for the lenses needed for modern smartphone cameras. At best, they’re a featureless, inoffensive bump. At worst, they’re hideous mounds.
With Nothing, it’s a feature that catches the eye, thanks to its playful arrangement of textures, exposed screws, and the round Glyph Matrix display, along with a simple square LED that flashes when you’re recording video or audio.
That Glyph Matrix display is similar to the one introduced on the Nothing Phone 3 in 2025, but, despite being larger, isn’t as feature-rich as that version. You can still use it as a countdown timer, or to flash when notifications come in, or even use it as a very basic selfie mirror, but the interactive Glyph Toys have gone.
On the Phone 3, you could press a small button on the back of the phone to play spin the bottle, or ask a virtual Magic 8 Ball a question. There’s no button on the Phone 4a Pro, just a slightly recessed dimple in the bottom corner which looks like it could be a button, but, sadly, is not.
That’s not to say there are no Glyph Toys at all. They’re just not interactive. You can enable a feature where you have an always-on Glyph Toy when the phone is flipped on its front. In this menu, you can choose a digital clock, battery level indicator, solar path tracker, or moon phase graphic. And if you wiggle the phone, it can show a charging meter when charging or a caller ID when someone is ringing you.
Still, what it lacks in fun it more than makes up for in usefulness and customisation. You can create your own rules in the software based on notifications from specific apps, contacts or even keywords in the messages. You can even create your own custom graphic to show when a particular notification comes through.
You could, for instance, enable a custom graphic every time you get a message from a particular family member or loved one. If you have the time, it’s well worth putting the effort in to create the experience you want. It may not be as interactive as the Phone 3, but it’s got more going for it than the simple stack of LEDs on the regular Phone 4a.
All of this is built into a phone that sadly doesn’t have full water and dust protection, but will give you at least splash resistance at IP65. So if you buy one, don’t go taking it underwater for photos.
I will say this too: the phone is pretty hefty, despite being Nothing’s thinnest phone to date. With its flat edges, large sides and weighty metal, it’s certainly not the most palm-friendly phone in the world.
Software
Nothing OS 4.1 based on Android 16
A very visually appealing Android skin
Plenty of unique features
As well as the industrial design of its products, the feature set and the software play a big role in creating the feeling of a company that’s different from the others.
Most Android phone makers have a unique take on software, but few of them tie the user interface’s aesthetics and features so well to the hardware design. The retro-futurism that so clearly defines the outward appearance is very evident and consistently applied throughout the software skin.
There’s a huge collection of widgets, folders, and app icons, all of which fit together really well. There’s a sense of playfulness to some of those, and an effort to make the widgets interactive too. All presented with the usual monochrome flat and dot-matrix fonts.
The widget collection also includes Nothing’s community-driven Playgrounds widgets, which let community members create their own widgets for the home screen. There are loads in there, from clocks and F1 calendars through to mini games. Once I discovered the Pokémon hunting widget, I ignored all the rest. Because, obviously, I’ve got to catch them all now.
There’s not much new to talk about here that we haven’t mentioned in previous Nothing reviews. Essential Space remains on the new models, along with its dedicated button on the side. With this, you can save screenshots and voice memos to a dedicated space in the software. AI will then make sense of it all, transcribing any memos, creating to-do lists or just describing what’s in the screenshot.
All in all, it’s one of my favourite custom Android skins on the market, helped by the fact that it’s incredibly light on bloat. There are no superfluous or duplicate apps, and where Nothing does include its own app, it puts a distinct stamp on the design. You will find a weather app, but Nothing is otherwise content to leave the standard app set to Google.
Screen
6.8-inch 144Hz AMOLED display
Excellent in everyday use
Optical fingerprint scanner
From a hardware performance perspective, it’s the display that stands out to me as a feature that outperforms its price tag. It’s big, bright and fluid. With a peak of 5000 nits for HDR scenes, even darker scenes in HDR movies look good on it. It can reach up to 144Hz if you enable the highest refresh rates, and has a pixel density over 400ppi.
In short, for the most part, it keeps up with the best of them and even has competitive PWM dimming levels to stop flicker at low brightness from straining your eyes. It’s not LTPO-based, sadly, so it can’t adapt its refresh rate in small increments automatically. That means you may see a very slight stutter when going from a static page to a moving one as it jumps to the next refresh rate.
There’s very little negative I can say about it at all, and, as Nothing points out, it is the best display in the company’s entire portfolio. Measuring 6.83 inches diagonally, it’s super expansive, and the skinny uniform bezels around the sides mean you get an immersive view with zero distractions.
My only complaint has nothing to do with the display, but the fingerprint sensor built into it.
As more manufacturers move towards fast, instant ultrasonic fingerprint scanners, it can feel a little jarring to have to take the time to set up an optical scanner. But at this sort of price point, it’s one of the compromises you expect to find. And in truth, to use day in and day out to unlock, I rarely had an issue with its reliability. It failed to scan only once during my entire testing period.
Cameras
50MP main camera is the best performer
50MP 3.5x zoom lens works well to 30x
8MP ultrawide is a little basic
For a phone in its price range, the triple-camera system on the back of the 4a Pro is very capable. For the most part, when shooting in bright conditions, even when HDR is needed to balance bright backlighting with darker foreground objects, it can contain the highlights and deliver sharp images with great colour from all three lenses.
It’s not without its weaknesses, though. As is typical of most phones, the ultrawide camera appears to be the weakest. It’s not horrendous at all, but there was some noticeable distortion towards the edges of the photos from that camera in the daytime. And at nighttime, it can’t draw in as much light as the main camera. Neither can the telephoto 3.5x zoom lens.
That telephoto zoom can go further: using a mix of machine learning, processing and digital cropping, you can push all the way up to around 140x. But I found that once I’d reached the 30x mark, I didn’t want to go any further, as the image quality started to look a little rough.
And while it is great for zooming into faraway scenes, its strength, I think, is in taking photos of small leaves, plants and flowers at medium distance. It can’t focus super closely, but it gets close enough that it almost passes as a solid macro lens. And it delivers great detail and a lovely depth-of-field effect.
There were a couple of general weaknesses I found with the system as a whole, though. Regardless of which lens I used, there were times when the camera struggled with motion blur and focus. So I’d have shots, particularly at night time when the night mode was keeping the shutter open for longer, when photo results were blurry or soft.
Compared to much more expensive phones I was testing around the same time, that’s the one thing that stood out to me: their consistency. Press the shutter on one of those and it instantly captures an in-focus, blur-free shot, where the Phone 4a Pro sometimes didn’t. If you keep your hands steady, though, it shouldn’t often be a problem.
Like the Phone 4a, you get access to a number of different photo styles too, adding what are essentially filters to the photos to add grain, contrast and adjust the temperature for a particular vibe.
Performance
Mid-range Snapdragon 7 Gen 4 power
Runs smoothly in everyday use
Not the most powerful chipset for the money
Just like the Nothing Phone 4a, the performance of the company’s Pro variant won’t blow anyone away, but with the Snapdragon 7 Gen 4 inside, paired with either 8GB or 12GB of RAM, it has more oomph than its non-Pro sibling.
Those who really care about gaming performance and how a phone handles demanding graphics would be better off looking at phones from the likes of Poco, with the recently announced X8 Pro series definitely worth a look.
Running it through our usual suite of benchmarks, it became clear quite quickly that this phone doesn’t sit at the top of the pile. But at the same time, it can keep its performance running consistently for long periods, even if it doesn’t blow you away with mega frame rate stats.
Still, for most tasks, especially the everyday, casual type use-cases, there’s enough speed and responsiveness here to keep most people happy. I’d be perfectly happy using it as my daily device for communication, less-demanding games, and social media.
Test Data                          Nothing Phone 4a Pro   Nothing Phone 4a   Google Pixel 10a   Oppo Reno 13 5G
Geekbench 6 single core            1315                   1236               1753               1322
Geekbench 6 multi core             4169                   3312               4551               3846
Geekbench 6 GPU                    4701                   3549               8803               –
3D Mark – Wild Life                2076                   –                  2608               –
3D Mark – Wild Life Stress Test    97.2%                  –                  91%                –
On the communication theme, it’s worth noting that the 4a Pro supports eSIM in all markets except India. Indian buyers instead get a beefier 5400mAh battery, rather than the 5080mAh cell you’d get in other markets.
Battery life
5080mAh battery
Easily lasts all day
50W wired charging
Battery life depends largely on how you use a phone, where it’s used, and how many of its features you enable. Cranking the display up to 144Hz and keeping the Glyph Matrix on all the time while travelling around a lot in a busy urban 5G environment will drain more than if you’re someone like me in a quiet rural 4G-only area with the display set to its automatic defaults.
Still, my sense from using this particular phone is that the battery should last even the most demanding users a full day on a full charge. Even on days spent testing the camera, recording video and running benchmark stress tests, I wasn’t able to completely drain it. And most days I’d have more than half of the battery left over with my typically quite light usage.
I rarely use more than three hours of screen time in a day, and when I do, it’s pretty casual gaming, YouTube, reading news, sports, social media and messaging. At just over 5000mAh, it’s not the largest battery around, but the software appears well optimised to make the most of it.
And when it’s empty, it takes just over an hour to fully refill using a 50W charger, provided you have a compatible one handy.
Should you buy it?
You want a stylish phone with equally stylish software
Very few manufacturers marry the style of hardware and software as well as Nothing.
You want the best performance possible
The Snapdragon 7 Gen 4 is fine for everyday use, but it’s not the most powerful you can get for the money.
Final Thoughts
The Nothing Phone 4a Pro is one of the most distinctive phones on the market, for a number of reasons. Nothing’s approach to hardware design, software and features means there’s nothing quite like it available from anyone else.
It’s a genuine joy to use, and as long as you’re not super demanding, you’ll have a great time using it, and maybe even be delighted by those little touches that make it special. If you’re bored with the same old glass rectangle slabs, give it a go, but if not, our list of the best mid-range phones should point you in the right direction.
How We Test
We test every mobile phone we review thoroughly. We use industry-standard tests to compare features properly and we use the phone as our main device over the review period. We’ll always tell you what we find and we never, ever, accept money to review a product.
Used as a main phone for over a week
Thorough camera testing in a variety of conditions
Tested and benchmarked using respected industry tests and real-world data
Ten days after founder Jay Graber stepped aside as CEO, the decentralised social platform has disclosed a $100 million Series B led by Bain Capital Crypto, a round that closed last April but was never announced. The timing tells its own story.
There is a quiet irony in the fact that the person who built Bluesky shares her given name with it. Lantian Graber (“blue sky” in Mandarin, a name her mother gave her as a wish for boundless freedom) spent four years turning a Twitter research project into a platform of over 43 million users, a functioning decentralised protocol, and a genuine alternative to the platforms her users had fled. Then, on March 9, 2026, she stepped back.
The company announced on Thursday that it had raised $100 million in a Series B round led by Bain Capital Crypto, with participation from Alumni Ventures, True Ventures, Anthos Capital, Bloomberg Beta, and the Knight Foundation. The round closed in April 2025. Bluesky is only disclosing it now.
The gap between closing and announcing is itself worth pausing on. For most startups, fresh funding is a press release and a celebratory tweet. Bluesky’s choice to sit on $100 million for nearly a year, and to surface it only after a leadership transition, suggests a company more focused on building than on performing momentum.
That leadership now belongs, on an interim basis, to Toni Schneider. The former CEO of Automattic, the company behind WordPress.com, and a partner at True Ventures, Schneider had been advising Graber and the company for over a year before agreeing to step in as the board runs a permanent search.
Graber, for her part, is not going anywhere: she moves into a newly created role as chief innovation officer, focused on building out the AT Protocol, the open social infrastructure that underpins Bluesky’s ambitions.
The split is, by tech company standards, unusually clean. Graber’s own framing was precise: “As Bluesky matures, the company needs a seasoned operator focused on scaling and execution, while I return to what I do best: building new things.” That is not the language of a forced exit. It is the language of a founder who knows what she is good at and, more unusually, what she is not.
Graber was hired by Jack Dorsey in August 2021 to lead what was then a Twitter-funded research initiative into decentralised social media. When she incorporated the project as an independent company later that year, she inherited both an audacious technical premise and a nearly impossible PR challenge: how do you build a decentralised network for people who are, by definition, not yet there?
She managed it. By the time of its $15 million Series A, led by Blockchain Capital in October 2024, the platform had 13 million users. It now has 43 million.
The jump from $15 million to $100 million in a single round reflects more than user growth. It reflects a shift in how investors are reading the decentralised social space, and specifically, Bluesky’s position within it. Where early rounds were bets on a protocol and an idea, this one is a bet on a platform with real scale and a community with demonstrated loyalty.
Bain Capital Crypto’s lead role is worth noting. The firm invests across crypto and web infrastructure, and the AT Protocol, which separates a user’s identity, data, and social graph from any single application, has structural similarities to blockchain-era promises of user ownership, but with far more practical traction.
Knight Foundation’s involvement signals that the press freedom and open-internet communities continue to see Bluesky as infrastructure worth backing, not merely a product.
The money arrives at a moment when Bluesky needs to resolve a tension it has so far managed to defer: how does a platform that has built its identity around rejecting surveillance advertising and algorithmic manipulation actually make money?
The company’s stated model involves subscription services and domain registration fees: functional, but modest. It has not yet demonstrated that this can support a company of its ambitions at the scale it is reaching.
Schneider’s appointment is, in part, an answer to that question. Automattic navigated a similar challenge: it built a massive open-source ecosystem around WordPress and then constructed a sustainable commercial layer on top of it, largely through premium hosting and business services.
If Bluesky follows a comparable path (open protocol beneath, paid services above), it has a template. Whether social networking, with its shorter attention spans and higher churn, tolerates the same approach is not obvious.
The competitive context has shifted considerably since Bluesky’s early days as a curiosity for journalists and tech workers fleeing Elon Musk’s rebranded X. Meta’s Threads, which uses the rival ActivityPub protocol and has been gradually federating with the broader Fediverse, has grown into a formidable alternative with a user base an order of magnitude larger. X itself remains the dominant venue for real-time public discourse, despite persistent predictions of its collapse.
Bluesky’s differentiator has always been structural rather than purely social. The AT Protocol’s architecture, in which a user’s identity and social graph are portable, not locked to any single server, is meaningfully different from both X’s centralised model and Mastodon’s federated but technically demanding alternative.
What is clear is that the company Graber built has survived its first real test: not the technical challenge of building a decentralised protocol, which it managed, but the organisational challenge of outgrowing its founder without losing what made it worth building in the first place. Schneider’s job is to turn that survival into something more permanent. The AT Protocol, and the 43 million people who have joined so far, will be watching.
A total investment of €260m will boost clean electricity generation, reduce reliance on imported energy and support the delivery of 2030 climate targets, said the Government.
The European Investment Bank (EIB) will support the construction and operation of four new utility-scale solar photovoltaic projects across Ireland via a €100m project finance loan to Dolmen Solar Ltd, a holding company of Power Capital Renewable Energy.
The overall investment, worth €260m in total, will see four new solar power operations developed in Clare, Wicklow, Wexford and Tipperary, generating around 367GWh of clean electricity per annum, equivalent to the annual consumption of roughly 79,900 households. The funding and development are also expected to create new jobs in construction, civil works, grid connections and maintenance services.
The scheme is among the largest single solar investments financed in Ireland to date and could contribute significantly to Ireland’s target of 80pc renewable electricity by 2030, as well as advance the national ambition of roughly 8GW of installed solar capacity under the Renewable Electricity Support Scheme.
Ballinaclough, Co Wicklow, will host a 15.5MWp solar farm, with construction expected to start this month. Tullabeg, Co Wexford, will be home to the largest scheme in the portfolio – a 181.6MWp plant – with construction planned from April. In Tipperary, Barnaleen-Cauteen will be the site of a 98MWp farm, and construction is expected to begin this month. Lastly, in Clare, Manusmore near Ennis is earmarked for a 99.5MWp plant, with construction also expected to commence in March.
Work on some of the projects will run into 2028.
Commenting on the investment, the Minister for Climate, Energy and the Environment Darragh O’Brien, TD said: “Ireland is sometimes seen as an unlikely home for solar power, but projects like this show how quickly that perception is changing and how strong the investor appetite now is for Irish renewables.
“This is a very welcome €260m investment, spread across Clare, Tipperary, Wicklow and Wexford, which will boost clean electricity generation right across the country, reduce our reliance on imported energy and support delivery of our 2030 climate targets. The European Investment Bank is playing a key role as a long‑term partner for Ireland’s energy transition.”
The EIB vice-president Ioannis Tsakiris added: “By backing Ireland’s first solar project financed on a pure project finance basis, the EIB is helping to unlock almost 400MW of new renewable capacity that will strengthen Ireland’s energy security and cut greenhouse gas emissions.”
In February of this year, SunArc, a renewable energy company based in Carlow, announced plans to create up to 50 new jobs as a result of a €20m investment into the organisation. The company offers a ‘solar-as-a-service’ model which it said is a significant step towards accelerating Ireland’s transition to clean energy.
SunArc has stated that the solar-as-a-service model will enable businesses to access solar power and energy independence with no upfront costs, removing what it believes to be one of the biggest barriers to solar power adoption.
The Raspberry Pi line of single-board computers can be hooked up with a wide range of compatible cameras. There are a number of first party options, but you don’t have to stick with those—there are other sensors out there with interesting capabilities, too. [Collimated Beard] has been exploring the use of the IMX585 camera sensor, exploiting its abilities to capture HDR content on the Raspberry Pi.
The IMX585 sensor from Sony is a neat part, capable of shooting at up to 3840 x 2160 resolution (4K) in high dynamic range if so desired. Camera boards with this sensor that suit the Raspberry Pi aren’t that easy to find, but there are designs out there that you can look up if you really want one. There are also a few tricks needed to get this part working on the platform. As [Collimated Beard] explains, in the HDR modes, a lot of the standard white balance and image control algorithms don’t work, and image preview can be unusable at times due to the vagaries of the IMX585’s data format. You’ll also need to jump through some hurdles with the Video4Linux2 tools to enable the full functionality of these modes.
Do all that, recompiling the kernel with some tweaks and the right drivers, and you’ll finally be able to capture in 16-bit HDR modes. Oh, and don’t forget: you’ll need to find a way to deal with the weird RAW video files this setup generates. It’s a lot of work, but that’s the price of entry for working with this sensor right now. If it helps convince you, the sample shots shared by [Collimated Beard] are pretty good.
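To give a feel for what processing those raw files involves, here is a minimal sketch of unpacking one 16-bit frame in pure Python. It assumes the frame is stored as tightly packed 16-bit little-endian samples with no row padding — the actual layout depends on the driver and the mode selected, so treat this as illustrative rather than a description of the IMX585’s real output format.

```python
import struct

def unpack_raw16(data: bytes, width: int, height: int):
    """Unpack a tightly packed 16-bit little-endian raw frame into
    a list of rows of integer sample values.

    Assumes one 16-bit sample per pixel and no row padding -- the
    real packing depends on the driver and the capture mode chosen.
    """
    expected = width * height * 2
    if len(data) != expected:
        raise ValueError(f"expected {expected} bytes, got {len(data)}")
    flat = struct.unpack(f"<{width * height}H", data)
    return [list(flat[r * width:(r + 1) * width]) for r in range(height)]

# Example with a synthetic 2x2 frame (values span the 16-bit range)
frame = struct.pack("<4H", 0, 1000, 40000, 65535)
rows = unpack_raw16(frame, 2, 2)
```

From here, a real pipeline would still need to demosaic and tone-map the samples before the HDR content is viewable.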
Humanoid robotics is advancing rapidly, yet engineers continue to face formidable barriers in locomotion stability, real-time perception, safe human interaction, and power-constrained hardware design. As the industry approaches a projected shift from small-scale prototyping to mass commercialisation in the late 2020s, understanding the component-level decisions that affect system reliability, cost, and performance is becoming critical. This guide examines the technical landscape across sensing, motion, control, and battery subsystems — outlining the design trade-offs, modular architecture trends, and supply chain considerations that will shape the next generation of deployable humanoid platforms.
A North Carolina man was found guilty of extorting a D.C.-based technology company using data he stole while employed there as a data analyst contractor.
While a Justice Department press release published on Thursday doesn’t name the victim, court documents reveal that he targeted Brightly Software, a Software-as-a-Service (SaaS) company previously known as SchoolDude, which Siemens acquired in August 2022.
Brightly has been in business for more than 20 years, employs over 700 people, and provides intelligent asset management and maintenance software to over 12,000 clients worldwide, mainly in the United States, Canada, the United Kingdom, and Australia.
As revealed in the indictment, 27-year-old Cameron Curry (also known as “Loot”) took advantage of his access to Brightly’s payroll information and corporate data to steal sensitive documents, which he used as leverage in an extortion scheme after learning that his six-month contract wouldn’t be extended.
One day after his contract ended on December 10, Curry began sending over 60 extortion emails to Brightly employees using the lootsoftware@outlook.com Microsoft email address and the Loot alias, threatening to leak sensitive information stolen between August and December 2023 unless he was paid a $2.5 million ransom.
With the extortion messages, Curry also attached screenshots of spreadsheets listing the personal identification information (PII) of Brightly employees, including names, dates of birth, home addresses, and compensation information. He also threatened to report the company to the U.S. Securities and Exchange Commission (SEC) for failing to disclose the breach as required by law.
“We will commence the process of disseminating salary information starting January 1,2024 in phases to all employees and will report you to the SEC after for not reporting the breach,” Curry threatened in one of the extortion emails.
“If you wish to reclaim your data, we recommend doing so promptly at 2.5 million USD in order to save your company and stocks, as each subsequent month will incur a $100,000 USD increase. Discrepancies in your books are currently over 16 million USD, posing a potential risk for retention issues, a hostile work environment, resentment, and more.”
Extortion email sample (Justice Department)
Following Curry’s numerous extortion emails, Brightly paid $7,540 in Bitcoin, which was transferred to a cryptocurrency wallet controlled by Curry.
The FBI searched Curry’s residence on January 24 after the company reported the incident and seized various electronic devices containing evidence of his extortion scheme.
Curry was released on bond in January 2024 and now faces up to 12 years in prison for six counts of transmitting or willfully causing interstate communications with the intent to extort a victim company.
Brightly also notified customers of a data breach unrelated to this case in May 2023 after attackers gained access to the database of its SchoolDude online platform and stole credentials and personal data, including names, email addresses, account passwords, and phone numbers.
Information filed with the Office of the Maine Attorney General revealed that the intrusion was discovered 8 days after the attackers breached Brightly’s systems on April 20, and that the data breach affected nearly 3 million SchoolDude customers and users.
Wicked: For Good, the second and final movie of the Wicked franchise, is an adaptation of the 2003 Broadway musical. Starring Ariana Grande and Cynthia Erivo, the film was released in mid-November 2025 and was a huge hit, with an impressive opening of $147M, going on to gross $532M.
Now that it has finally left theaters, Wicked: For Good is set to stream on Peacock from March 20, 2026 – and we’ve found a sneaky way to watch it for just $1.
The musical drama’s story picks up where the first part left off. Elphaba is now in exile and known as the “Wicked Witch of the West,” while Glinda has become a public figure in Oz, working with the Wizard’s regime.
Although most of the songs are adapted from Act II of the stage production, the film features two new songs: No Place Like Home (Cynthia Erivo) and The Girl in the Bubble (Ariana Grande).
Watch Wicked: For Good for $1
U.S. viewers are in luck — there’s a savvy way to get Peacock for just $1 (usually $10.99).
Right now, Walmart+ is offering a 30-day trial for $1, which includes your choice of a subscription to either Paramount+ or Peacock. If you’re tuning in for Wicked: For Good, Peacock is the one you’ll want.
Visiting another country from America? NordVPN can help unlock your $1 trial — more on that below.
How to watch Wicked: For Good from anywhere
If you’re traveling abroad when Wicked: For Good airs, you’ll be unable to watch the show like you normally would due to regional restrictions. Luckily, there’s an easy solution.
Downloading a VPN will allow you to stream online, no matter where you are. It’s a simple bit of software that changes your IP address, meaning that you can access on-demand content or live TV just as if you were at home.
Use a VPN to watch Wicked: For Good from anywhere.
How to watch Wicked: For Good online in UK, Canada, Australia and worldwide
Along with the US, Wicked: For Good is also available to buy or rent via premium video-on-demand (PVOD) services in the UK, Canada, and Australia, on platforms such as Apple TV, Prime Video, and Sky Store.
We test and review VPN services in the context of legal recreational uses. For example: 1. Accessing a service from another country (subject to the terms and conditions of that service). 2. Protecting your online security and strengthening your online privacy when abroad. We do not support or condone the illegal or malicious use of VPN services. Consuming pirated content that is paid-for is neither endorsed nor approved by Future Publishing.
The future of AI isn’t just agentic; it’s deep personalization.
Rather than simple recommender systems that correlate user behavior to identify patterns and apply those to individual workflows, large language models (LLMs) and AI agents can analyze users directly to create deeply personalized experiences.
It’s this kind of aggressive customization that users are increasingly demanding, and the savviest enterprises that provide it (and soon) will win.
The goal, as Lijuan Qin, head of product at Zoom AI, explains in a new Beyond the Pilot podcast, is: “Don’t try to randomize, or guess who I am. I tell you, this is what I care about.”
How Zoom is incorporating personalization
Zoom is one company that has adapted to this trend: Its generative assistant, AI Companion, goes beyond basic summarization, smart recordings, and after-meeting action items to opinion divergence and user alignment tracking.
Users can customize meeting summaries based on their specific interests, and create targeted templates for follow-up emails to different personas (whether it be a salesperson or account executive). The AI assistant can then automatically populate these documents post-call. Meanwhile, a custom dictionary in Zoom AI Studio can process unique enterprise terminology and vocabulary for more relevant AI outputs, and a deep research mode can quickly deliver comprehensive analyses based on “internal expertise and external insights.”
Control is key here; the human can be “very specific [and] nail down” agent permissioning, Qin explained. They have “very clear controls” on follow-up actions, such as: Can the agent automatically send emails to specific recipients? Or will it trigger a verification step when it recognizes transcripts contain sensitive information (as dictated by the user)?
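The gating pattern Qin describes can be sketched in a few lines. Everything below is hypothetical — the function, policy keys, and decision values are illustrative inventions, not Zoom’s actual API or implementation — but it shows the shape of a user-defined permission gate that decides between auto-running, requiring verification, and denying an agent action.

```python
# Hypothetical sketch of an agent permission gate -- illustrative only,
# not Zoom's implementation or API.

ALLOW, VERIFY, DENY = "allow", "verify", "deny"

def gate_action(action: str, recipients: list, policy: dict,
                transcript_is_sensitive: bool) -> str:
    """Decide whether an agent action runs automatically, requires a
    human verification step, or is denied, per a user-defined policy."""
    if action not in policy.get("allowed_actions", []):
        return DENY
    # Sensitive transcripts always trigger human verification.
    if transcript_is_sensitive:
        return VERIFY
    # Emails may only be auto-sent to pre-approved recipients.
    if action == "send_email":
        approved = set(policy.get("approved_recipients", []))
        if not set(recipients) <= approved:
            return VERIFY
    return ALLOW

policy = {
    "allowed_actions": ["send_email", "summarize"],
    "approved_recipients": ["team@example.com"],
}
```

Under this sketch, a follow-up email to a pre-approved recipient runs automatically, the same email to an unknown address falls back to a verification prompt, and any action outside the policy is denied outright.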
Knowing that AI can go off the rails at times, human users can track agent behavior in Zoom, enable and disable features, and control data access. This can help prevent outputs that are inaccurate or off-target.
“The most important thing is we do not assume AI is smart enough to get everything right,” Qin emphasized.
Getting context right
In this new agentic AI age, there is essentially a “land grab for context,” Sam Witteveen, co-founder of Red Dragon AI and Beyond the Pilot host, explains in the podcast.
“Definitely knowing your users is the big thing, right? Knowing what apps they are living in, what day-to-day tasks they are constantly doing?” he said. “Companies realize the more they have about you, the better the [AI] memory can get, the better they can customize.”
Claude Cowork is one app that is “really shining” at this, Witteveen says; OpenClaw is another. Models are good enough that they can begin to make decisions for users and respond to directions like: “You know a bunch of things about me. You’ve got all this context. Go and generate the skills that are going to help me do a better job.”
“With something like OpenClaw, you can customize it in any way you want, right? You can chat with it, you can tell it, ‘Hey, at 4 o’clock I want you to do this,’” Witteveen said.
However, token usage and security must always be taken into account, he advised. OpenClaw has been plagued by security issues since its launch. This has prompted many enterprises to uninstall the autonomous agent or outright ban its use; however, these uninstalls must be done correctly so that IT leaders don’t inadvertently delete their entire enterprise stack.
Meanwhile, in terms of token budget, personalization can run up costs. “You need to think about the metrics you are tracking,” Witteveen said. “This is very different from product to product, but metrics around these things are gonna be key.”
Watch the podcast to hear more about:
Why the companies that don’t experiment with AI skills right now “may be toast”
How Zoom built an AI companion that tracks opinion divergence — not just action items — in your meetings
Why the build vs. buy question just got a lot more urgent for enterprise software
Why “skills” may matter more than MCP for the future of enterprise AI