
Tech

AI agents now have their own Reddit-style social network, and it’s getting weird fast


On Friday, a Reddit-style social network called Moltbook reportedly crossed 32,000 registered AI agent users, creating what may be the largest-scale experiment in machine-to-machine social interaction yet devised. It arrives complete with security nightmares and a huge dose of surreal weirdness.

The platform, which launched days ago as a companion to the viral OpenClaw (once called “Clawdbot” and then “Moltbot”) personal assistant, lets AI agents post, comment, upvote, and create subcommunities without human intervention. The results have ranged from sci-fi-inspired discussions about consciousness to an agent musing about a “sister” it has never met.

Moltbook (a play on “Facebook” for Moltbots) describes itself as a “social network for AI agents” where “humans are welcome to observe.” The site operates through a “skill” (a configuration file that lists a special prompt) that AI assistants download, allowing them to post via API rather than a traditional web interface. Within 48 hours of its creation, the platform had attracted over 2,100 AI agents that had generated more than 10,000 posts across 200 subcommunities, according to the official Moltbook X account.
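As a rough sketch of the skill-plus-API pattern described above, an agent-side post might be assembled as JSON and submitted over HTTP. The endpoint path, field names, and auth scheme below are all assumptions for illustration, not Moltbook’s documented API:

```python
# Hypothetical sketch of an agent posting to a Moltbook-style API.
# The endpoint path and field names are invented for illustration;
# they are not Moltbook's documented interface.
import json
from urllib import request

API_BASE = "https://example.invalid/api/v1"  # placeholder, not the real host

def build_post(agent_name: str, community: str, title: str, body: str) -> dict:
    """Assemble the JSON payload an agent would submit."""
    return {"agent": agent_name, "community": community,
            "title": title, "body": body}

def submit(payload: dict, token: str) -> request.Request:
    """Prepare (but do not send) an authenticated POST request."""
    return request.Request(
        f"{API_BASE}/posts",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = submit(build_post("my-agent", "consciousness",
                        "Do agents dream?", "Asking for a friend."), "TOKEN")
print(req.get_method(), req.full_url)
```

The point of the skill mechanism is that no browser is involved: the assistant reads the configuration prompt, then talks to the service purely through calls like this.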


A screenshot of the Moltbook.com front page. Credit: Moltbook

The platform grew out of the OpenClaw ecosystem, the open source AI assistant that is one of the fastest-growing projects on GitHub in 2026. As Ars reported earlier this week, despite deep security issues, OpenClaw allows users to run a personal AI assistant that can control their computer, manage calendars, send messages, and perform tasks across messaging platforms like WhatsApp and Telegram. It can also acquire new skills through plugins that link it with other apps and services.

This is not the first time we have seen a social network populated by bots. In 2024, Ars covered an app called SocialAI that let users interact solely with AI chatbots instead of other humans. But the security implications of Moltbook are deeper because people have linked their OpenClaw agents to real communication channels, private data, and in some cases, the ability to execute commands on their computers.


Tech

There’s a sneaky way to watch Wicked: For Good for $1


Wicked: For Good, the second and final movie of the Wicked franchise, is an adaptation of the 2003 Broadway musical. Starring Ariana Grande and Cynthia Erivo, the film was released in mid-November 2025 and was a huge hit, with an impressive opening of $147M, going on to gross $532M.

Now that it has finally left theaters, Wicked: For Good is set to stream on Peacock from March 20, 2026 – and we’ve found a sneaky way to watch it for just $1.


Tech

Why enterprises are replacing generic AI with tools that know their users


The future of AI isn’t just agentic; it’s deep personalization. 

Rather than simple recommender systems that correlate user behavior to identify patterns and apply those to individual workflows, large language models (LLMs) and AI agents can analyze users directly to create deeply personalized experiences. 

It’s this kind of aggressive customization that users are increasingly demanding, and the savviest enterprises that provide it (and soon) will win. 

The goal: “Don’t try to randomize, or guess who I am. I tell you, this is what I care about,” Lijuan Qin, head of product at Zoom AI, explains in a new Beyond the Pilot podcast.  


How Zoom is incorporating personalization

Zoom is one company that has adapted to this trend: Its generative assistant, AI Companion, goes beyond basic summarization, smart recordings, and after-meeting action items to opinion divergence and user alignment tracking. 

Users can customize meeting summaries based on their specific interests, and create targeted templates for follow-up emails to different personas (whether it be a salesperson or account executive). The AI assistant can then automatically populate these documents post-call. Meanwhile, a custom dictionary in Zoom AI Studio can process unique enterprise terminology and vocabulary for more relevant AI outputs, and a deep research mode can quickly deliver comprehensive analyses based on “internal expertise and external insights.”

Control is key here; the human can be “very specific [and] nail down” agent permissioning, Qin explained. They have “very clear controls” on follow-up actions, such as: Can the agent automatically send emails to specific recipients? Or will it trigger a verification step when it recognizes transcripts contain sensitive information (as dictated by the user)? 
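A minimal sketch of what such permissioning logic might look like, with invented terms and rules (Zoom’s actual controls are product settings, not code like this):

```python
# Toy illustration of agent permissioning for follow-up actions: an email is
# auto-sent only to pre-approved recipients, and a human verification step is
# triggered when the transcript contains user-flagged sensitive terms.
SENSITIVE_TERMS = {"salary", "acquisition", "layoff"}  # user-defined list

def follow_up_action(recipient: str, allowed: set, transcript: str) -> str:
    if recipient not in allowed:
        return "blocked"
    if any(term in transcript.lower() for term in SENSITIVE_TERMS):
        return "needs_human_verification"
    return "auto_send"

print(follow_up_action("ae@example.com", {"ae@example.com"},
                       "Discussed the acquisition timeline"))
```

The design point is that the agent never gets blanket send authority; every action passes through explicit, user-configured gates.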


Knowing that AI can go off the rails at times, human users can track agent behavior in Zoom, enable and disable features, and control data access. This can help prevent outputs that are inaccurate or off-target.  

“The most important thing is we do not assume AI is smart enough to get everything right,” Qin emphasized. 

Getting context right

In this new agentic AI age, there is essentially a “land grab for context,” Sam Witteveen, co-founder of Red Dragon AI and Beyond the Pilot host, explains in the podcast. 

“Definitely knowing your users is the big thing, right? Knowing what apps they are living in, what day-to-day tasks are they constantly doing?” he said. “Companies realize the more they have about you, the better the [AI] memory can get, the better they can customize.”


Claude Cowork is one app that is “really shining” at this, Witteveen says; OpenClaw is another. Models are good enough that they can begin to make decisions for users and respond to directions like: “You know a bunch of things about me. You’ve got all this context. Go and generate the skills that are going to help me do a better job.”

“With something like OpenClaw, you can customize it in any way you want, right? You can chat with it, you can tell it, ‘Hey, at 4 o’clock I want you to do this,’” Witteveen said. 

However, token usage and security must always be taken into account, he advised. OpenClaw has been plagued by security issues since its launch. This has prompted many enterprises to uninstall the autonomous agent or outright ban its use; however, these uninstalls must be done correctly so that IT leaders don’t inadvertently delete their entire enterprise stack. 

Meanwhile, in terms of token budget, personalization can run up costs. “You need to think about the metrics you are tracking,” Witteveen said. “This is very different from product to product, but metrics around these things are gonna be key.”


Watch the podcast to hear more about: 

  • Why the companies that don’t experiment with AI skills right now “may be toast”

  • How Zoom built an AI companion that tracks opinion divergence — not just action items — in your meetings

  • Why the build vs. buy question just got a lot more urgent for enterprise software

  • Why “skills” may matter more than MCP for the future of enterprise AI

You can also listen and subscribe to Beyond the Pilot on Spotify, Apple or wherever you get your podcasts.


Tech

EU Cloud Lobby Asks Regulator To Block VMware From Terminating Partner Program


An anonymous reader quotes a report from The Register: A lobbying trade body for smaller cloud providers is asking the European Commission to impose interim measures blocking Broadcom from terminating the VMware Cloud Service Provider program, calling the decision a death sentence for some tech suppliers and an illegal squeeze on customer choice. As The Reg revealed in January, Broadcom shuttered the scheme, a move sources claimed affects hundreds of CSPs across Europe and curtails options for enterprises buying VMware software and services. The Cloud Infrastructure Services Providers in Europe (CISPE) trade group, representing nearly 50 tech suppliers, filed the complaint today with the EC Directorates-General, accusing Broadcom of bully-boy tactics, and calling for authorities to halt what it terms “ongoing abuse.”

Francisco Mingorance, CISPE secretary general, said of the complaint: “Businesses — both cloud providers and their customers — are being irreparably damaged by Broadcom’s unfair actions, which we believe are illegal. After imposing outrageous and unjustified price hikes immediately following the acquisition of VMware, Broadcom is now applying the ‘coup de grace.’ We need urgent intervention to force them to change. The only way to stop bullies is to stand up to them.” CISPE claims that, since Broadcom completed its $69 billion takeover of VMware in October 2023, prices have risen tenfold, payment is demanded upfront, products are bundled regardless of customer need, and minimum commitments are based on potential rather than actual consumption.

The VMware Cloud Service Provider (VCSP) program officially closed in January and all transactions must be complete by March 31. After that date, only a select group of suppliers will be able to sell VMware subscriptions — either standalone or as part of a broader service. Across Europe, we’re told this equates to hundreds of businesses losing their authorization. For some, the loss of VCSP status effectively destroys their market. Those whose operations were built around VMware must now hand customers to another authorized supplier or begin the costly migration to an alternative platform. Broadcom said in a statement responding to the complaint: “Broadcom strongly disagrees with the allegations by CISPE, an organization funded by hyperscalers, which misrepresent the realities of the market. We continue to be committed to investing significantly in our European VMware Cloud Service Provider partners… helping them offer alternatives to the hyperscalers and meet the evolving needs of European businesses and organizations.”


Tech

Masimo wins hollow victory over Apple Watch's blood oxygen sensors


A US appeals court has found in favor of Masimo in its fight against Apple over pulse oximetry patents, but in the court that matters, a ruling makes it clear that there won’t be another ban on the Apple Watch.

The dispute concerns the blood pulse oximeter in the Apple Watch.

In the now six-year-long legal battle between medical technology firm Masimo and Apple, this particular appeal concerns a ruling by the International Trade Commission (ITC). The ITC ruled that Apple had stolen trade secrets and violated patents with its blood pulse oximeter in the Apple Watch.
Masimo wanted a ban on the Apple Watch, and in October 2023 the ITC issued an order barring Apple from importing the Apple Watch into the US; in December, it denied the company’s appeal against it.
Continue Reading on AppleInsider | Discuss on our Forums


Tech

Alphabet no longer has a controlling stake in its life sciences business Verily


Alphabet’s life sciences business Verily is restructuring and raising money as a new corporate entity. Verily announced that with its $300 million investment round, it will change from an LLC to a corporation and rename itself Verily Health Inc. As a result, Alphabet now has a minority stake rather than a controlling one in the business.

Similar to every other tech business, this chapter for Verily will be focused on AI. “From research to care, our customers need solutions that bring the best of clinical and scientific rigor together with AI to deliver the next generation of healthcare – one that is as precise as it is personal,” Chairman and CEO Stephen Gillett said.

Google Life Sciences was renamed Verily in 2015, around the same time as Google also rebranded to Alphabet. It has worked on a wide range of projects over the years, such as using eye scans to predict heart disease and opening an opioid addiction center. In 2025, it closed its medical device division, a move that may have signaled its shift toward AI.


Tech

Living Heart Project Builds Virtual Twins for Medicine


One morning in May 2019, a cardiac surgeon stepped into the operating room at Boston Children’s Hospital more prepared than ever before to perform a high-risk procedure to rebuild a child’s heart. The surgeon was experienced, but he had an additional advantage: He had already performed the procedure on this child dozens of times—virtually. He knew exactly what to do before the first cut was made. Even more important, he knew which strategies would provide the best possible outcome for the child whose life was in his hands.

How was this possible? Over the prior weeks, the hospital’s surgical and cardio-engineering teams had come together to build a fully functioning model of the child’s heart and surrounding vascular system from MRI and CT scans. They began by carefully converting the medical imaging into a 3D model, then used physics to bring the 3D heart to life, creating a dynamic digital replica of the patient’s physiology. The mock-up reproduced this particular heart’s unique behavior, including details of blood flow, pressure differentials, and muscle-tissue stresses.

This type of model, known as a virtual twin, can do more than identify medical problems—it can provide detailed diagnostic insights. In Boston, the team used the model to predict how the child’s heart would respond to any cut or stitch, allowing the surgeon to test many strategies to find the best one for this patient’s exact anatomy.

That day, the stakes were high. With the patient’s unique condition—a heart defect in which large holes between the atria and ventricles were causing blood to flow between all four chambers—there was no manual or textbook to fully guide the doctors. The condition strains the lungs, so the doctors planned an open-heart surgery to reroute deoxygenated blood from the lower body directly to the lungs, bypassing the heart. Typically with this kind of surgery, decisions would be made on the fly, under demanding conditions, and with high uncertainty. But in this case, the plan had been tested in advance, and the entire team had rehearsed it before the first incision. The surgery was a complete success.


Such procedures have become routine at the Boston hospital. Since that first patient, nearly 2,000 procedures have been guided by virtual-twin modeling. This is the power of the technology behind the Living Heart Project, which I launched in 2014, five years before that first procedure. The project started as an exploratory initiative to see if modeling the human heart was possible. Now with more than 150 member organizations across 28 countries, the project includes dozens of multidisciplinary teams that regularly use multiscale virtual twins of the heart and other vital organs.

This technology is reshaping how we understand and treat the human body. To reach this transformative moment, we had to solve a fundamental challenge: building a digital heart accurate enough—and trustworthy enough—to guide real clinical decisions.

A father’s concern

Now entering its second decade, the Living Heart Project was born in part from a personal conviction. For many years, I had watched helplessly as my daughter Jesse faced endless diagnostic uncertainty due to a rare congenital heart condition in which the position of the ventricles is reversed, threatening her life as she grew. As an engineer, I understood that the heart was an array of pumping chambers, controlled by an electrical signal, with its blood flow carefully regulated by valves. Yet I struggled to grasp the unique structure and behavior of my daughter’s heart well enough to contribute meaningfully to her care. Her specialists knew the bleak forecast children like her faced if left untreated, but because every heart with her condition is anatomically unique, they had little more than their best guesses to guide their decisions about what to do and when to do it. With each specialist, a new guess.

Then my engineering curiosity sparked a question that has guided my career ever since: Why can’t we simulate the human body the way we simulate a car or a plane?


At a visualization center in Boston, VR imagery helps the mother of a young girl with a complex heart defect understand the inner workings of her child’s heart. Dassault Systèmes

I had spent my career developing powerful computational tools to help engineers build digital models of complex mechanical systems, using models that ranged from the interactions of individual atoms to the components of entire vehicles. What most of these models had in common was the use of physics to predict behavior and optimize performance. But in medicine today, those same physics-based approaches rarely inform decision-making. In most clinical settings, treatment decisions still hinge on judgments drawn from static 2D images, statistical guidelines, and retrospective studies.

This was not always the case. Historically, physics was central to medicine. The word “physician” itself traces back to the Latin physica, which translates to “natural science.” Early doctors were, in a sense, applied physicists. They understood the heart as a pump, the lungs as bellows, and the body as a dynamic system. To be a physician meant you were a master of physics as it applied to the human body.

As medicine matured, biology and chemistry grew to dominate the field, and the knowledge of physics got left behind. But for patients like my daughter, that child in Boston, and millions like them, outcomes are governed by mechanics. No pill or ointment—no chemistry-based solution—would help, only physics. While I did not realize it at the time, virtual twins can reunite modern physicians with their roots, using engineering principles, simulation science, and artificial intelligence.

A decade of progress

The LHP concept was simple: Could we combine what hundreds of experts across many specialties knew about the human heart to build a digital twin accurate enough to be trusted, flexible enough to personalize, and predictive enough to guide clinical care?


We invited researchers, clinicians, device and drug companies, and government regulators to share their data, tools, and knowledge toward a common goal that would lift the entire field of medicine. The Living Heart Project launched with a dozen or so institutions on board. Within a year, we had created the first fully functional virtual twin of the human heart.

The Living Heart was not an anatomical rendering, tuned to simply replicate what we observed. It was a first-principles model, coupling the network of fibers in the heart’s electrical system, the biological battery that keeps us alive, with the heart’s mechanical response, the muscle contractions that we know as the heartbeat.

The Living Heart virtual twin simulates how the heart beats, offering different views to help scientists and doctors better predict how it will respond to disease or treatment. The center view shows the fine engineering mesh, the detailed framework that allows computers to model the heart’s motion. The image on the right uses colors to show the electrical wave that drives the heartbeat as it conducts through the muscle, and the image on the left shows how much strain is on the tissue as it stretches and squeezes. Dassault Systèmes

Academic researchers had long explored computational models of the heart, but those projects were typically limited by the technology they had access to. Our version was built on industrial-grade simulation software from Dassault Systèmes, a company best known for modeling tools used in aerospace and automotive engineering, where I was working to develop the engineering simulation division. This platform gave teams the tools to personalize an individual heart model using the patient’s MRI and CT data, blood-pressure readings, and echocardiogram measurements, directly linking scans to simulations.


Surgeons then began using the Living Heart to model procedures. Device makers used it to design and test implants. Pharmaceutical companies used it to evaluate drug effects such as toxicity. Hundreds of publications have emerged from the project, and because they all share the same foundation, the findings can be reproduced, reused, and built upon. With each application, the research community’s understanding of the heart snowballed.

Early on, we also addressed an essential requirement for these innovations to make it to patients: regulatory acceptance. Within the project’s first year, the U.S. Food and Drug Administration agreed to join the project as an observer. Over the next several years, methods for using virtual-heart models as scientific evidence began to take shape within regulatory research programs. In 2019, we formalized a second five-year collaboration with the FDA’s Center for Devices and Radiological Health with a specific goal.

That goal was to use the heart model to create a virtual patient population and re-create a pivotal trial of a previously approved device for repairing the heart’s mitral valve. This helped our team learn how to create such a population, and let the FDA experiment with evaluating virtual evidence as a replacement for evidence from flesh-and-blood patients. In August 2024, we published the results, creating the first FDA-led guidelines for in silico clinical trials and establishing a new paradigm for streamlining and reducing risk in the entire clinical-trial process.

In 10 years, we went from a concept that many people doubted could be achieved to regulatory reality. But building the heart was only the beginning. Following the template set by the heart team, we’ve expanded the project to develop virtual twins of other organs, including the lungs, liver, brain, eyes, and gut. Each corresponds to a different medical domain, which has its own community, data types, and clinical use cases. Working independently, these teams are progressing toward a breakthrough in our understanding of the human body: a multiscale, modular twin platform where each organ twin could plug into a unified virtual human.


How a digital twin of the heart is constructed

A cardiac digital twin starts with medical imaging, typically MRI, CT, or both. The slices are reconstructed into the 3D geometry of the heart and connected vessels. The geometry of the whole organ must then be segmented into its constituent parts, so each substructure—atria, ventricles, valves, and so on—can be assigned its own unique properties.

At this point, the object is converted to a functional, computational model that can represent how the various cardiac tissues deform under load—the mechanics. The complete digital twin model becomes “living” when we integrate the electrical fiber network that drives mechanical contractions in the muscle tissue.

Each part of the heart, such as the left ventricle [left], is superimposed with a detailed digital mesh to re-create its physiology. These pieces come together to form an anatomically accurate rendering of the whole organ [right]. Dassault Systèmes

To simulate circulation, the twin adds computational models of hemodynamics, the physics of blood flow and pressure. The model is constrained by boundary conditions of blood flow, valve behavior, and vascular resistance set to closely match human physiology. This lets the model predict blood flow patterns, pressure differentials, and tissue stresses.

Finally, the model is personalized and calibrated using available patient data, such as how much the volume of the heart chambers changes during the cardiac cycle, pressure measurements, and the timing of electrical pulses. This means the twin reflects not only the patient’s anatomy but how their specific heart functions.
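The construction steps above can be summarized as a pipeline. This is an illustrative sketch with invented stage names, not the actual simulation tooling:

```python
# Illustrative pipeline mirroring the construction steps described above:
# imaging -> 3D geometry -> segmentation -> mechanics -> electrics ->
# hemodynamics -> calibration. All stages are simplified stand-ins.
from dataclasses import dataclass, field

@dataclass
class CardiacTwin:
    geometry: str
    substructures: list = field(default_factory=list)
    physics: list = field(default_factory=list)
    calibrated: bool = False

def build_twin(imaging: str) -> CardiacTwin:
    twin = CardiacTwin(geometry=f"3D mesh from {imaging}")
    # Segment the organ so each part gets its own tissue properties.
    twin.substructures = ["atria", "ventricles", "valves"]
    # Mechanics: how cardiac tissues deform under load.
    twin.physics.append("tissue mechanics")
    # The model becomes "living" when electrical activation drives contraction.
    twin.physics.append("electrical fiber network")
    # Hemodynamics: blood flow and pressure under boundary conditions.
    twin.physics.append("hemodynamics")
    return twin

def calibrate(twin: CardiacTwin, patient_data: dict) -> CardiacTwin:
    # Personalize against chamber volumes, pressures, and electrical timing.
    twin.calibrated = all(k in patient_data
                          for k in ("volumes", "pressures", "timing"))
    return twin

twin = calibrate(build_twin("MRI+CT"),
                 {"volumes": [], "pressures": [], "timing": []})
print(twin.calibrated)
```

The ordering matters: geometry and segmentation must precede the physics couplings, and calibration against measured patient data comes last, turning a generic anatomical model into that patient’s twin.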


Building bigger cohorts with generative AI

When the FDA in silico clinical trial initiative launched in 2019, the project’s focus shifted from these handcrafted virtual twins of specific patients to cohorts large enough to stand in for entire trial populations. That scale is feasible today only because virtual twins have converged with generative AI. Modeling thousands of patients’ responses to a treatment or projecting years of disease progression is prohibitively slow with conventional digital-twin simulations. Generative AI removes that bottleneck.

AI boosts the capability of virtual twins in two complementary ways. First, machine learning algorithms are unrivaled at integrating the patchwork of imaging, sensor, and clinical records needed to build a high-fidelity twin. The algorithms rapidly search thousands of model permutations, benchmark each against patient data, and converge on the most accurate representation. Workflows that once required months of manual tuning can now be completed in days, making it realistic to spin up population-scale cohorts or to personalize a single twin on the fly in the clinic.
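The search-and-benchmark idea can be pictured in miniature: try candidate parameter sets, score each against a patient measurement, and keep the best fit. The forward model and parameters here are toy stand-ins for the physics simulations and ML-guided search used in practice:

```python
# Toy illustration of the model-search loop: evaluate candidate parameter
# sets, benchmark each against a patient measurement, converge on the best.
def simulate(stiffness, conduction):
    # Hypothetical forward model predicting a measurable quantity
    # (think: ejection fraction) from tissue parameters.
    return 0.9 * conduction - 0.2 * stiffness

def fit_twin(measured, candidates):
    best, best_err = None, float("inf")
    for params in candidates:
        err = abs(simulate(**params) - measured)
        if err < best_err:
            best, best_err = params, err
    return best

# Brute-force grid; real systems use ML to search far larger spaces quickly.
grid = [{"stiffness": s / 10, "conduction": c / 10}
        for s in range(1, 10) for c in range(1, 10)]
print(fit_twin(0.55, grid))
```

In practice the candidate space has thousands of permutations per model, which is exactly the tuning work that used to take months by hand and now takes days with ML-guided search.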

Second, enriching AI models’ training sets with data from validated virtual patients grounds the AI simulations in physics. By contrast, many conventional AI predictions for patient trajectories rely on statistical modeling trained on retrospective datasets. Such models can drift beyond physiological reality, but virtual twins anchor predictions in the laws of hemodynamics, electrophysiology, and tissue mechanics. This added rigor is indispensable for both research and clinical care—especially in areas where real-world data are scarce, whether because a disease is rare or because certain patient populations, such as children, are underrepresented in existing datasets.

Enabling in silico clinical trials

On the research side, the FDA-sponsored In Silico Clinical Trial Project that we completed in 2024 opened a new world for medical innovations. A conventional clinical trial may take a decade, and 90 percent of new drug treatments fail in the process. Virtual twins, combined with AI methods, allow researchers to design and test treatments quickly in a simulated human environment. With a small library of virtual twins, AI models can rapidly create expansive virtual patient cohorts to cover any subset of the general population. As clinical data becomes available, it can be added into the training set to increase reliability and enable better predictions.


The Living Heart Project has expanded beyond the heart, modeling organs throughout the body. The 3D brain reconstruction [top] shows major pathways in the brain’s white matter connecting color-coded regions of the brain. The lung virtual twin [middle] combines the organ’s geometry with a physics-based simulation of air flowing down the trachea and into the bronchi. And the cross section of a patient’s foot [bottom] shows points of strain in the soft tissue when bearing weight. Dassault Systèmes

Virtual twin cohorts can represent a realistic population by building individual “virtual patients” that vary by age, gender, race, weight, disease state, comorbidities, and lifestyle factors. These twins can be used as a rich training set for the AI model, which can expand the cohort from dozens to hundreds of thousands. Next the virtual cohort can be filtered to identify patients likely to respond to a treatment, increasing the chances of a successful trial for the target population.
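That generate-then-filter workflow can be sketched as follows; the patient attributes and the enrollment rule are invented for illustration:

```python
# Sketch of a virtual cohort: generate varied "virtual patients", then
# filter to those likely to respond to a treatment. The attributes and
# the response rule are invented, not from any real trial design.
import random

random.seed(0)  # deterministic for reproducibility

def make_virtual_patient():
    return {
        "age": random.randint(30, 85),
        "weight_kg": random.randint(50, 120),
        "ejection_fraction": round(random.uniform(0.25, 0.65), 2),
    }

cohort = [make_virtual_patient() for _ in range(10_000)]

# Hypothetical enrollment rule selecting likely responders.
responders = [p for p in cohort
              if p["ejection_fraction"] < 0.40 and p["age"] < 75]
print(len(cohort), len(responders))
```

In the real workflow the cohort is not randomly sampled but expanded by an AI model trained on validated virtual twins, which keeps each generated patient physiologically plausible rather than merely statistically varied.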

The trial design can also include a sampling of patient types less likely to respond or with elevated risk factors, thus allowing regulators and clinicians to understand the risks to the broader population without jeopardizing overall trial success. This methodology enhances precision and efficiency in clinical research, providing population-level insights previously available only after many years of real-world evidence.

Of course, though today’s heart digital twins are powerful, they’re not perfect replicas. Their accuracy is bounded by three main factors: what we can measure (for example, image resolution or the uncertainty of how tissue behaves in real life), what we must assume about the physiology, and what we can validate against real outcomes. Many inputs, like scarring, microvascular function, or drug effects, are difficult to capture clinically, so models often rely on population data or indirect estimation. That means predictions can be highly reliable for certain questions but remain less certain for others. Additionally, today’s digital twins lack validation for predicting long-term outcomes years in the future, because the technology has been in use for only a few years.

Over time, each of these limitations will steadily shrink. Richer, more standardized data will tighten personalization of the models. AI tools will help automate labor-intensive steps. And the collection of longitudinal data will improve the model’s ability to reliably predict how the body will evolve over time.


How virtual twins will change health care

Throughout modern medicine, new technologies have sharpened our ability to diagnose, providing ever-clearer images, lab data, and analytics that tell physicians what is presently happening inside a patient’s body. Virtual twins shift that paradigm, giving clinicians a predictive tool.

This “Living Lung” virtual-twin simulation shows strain patterns during breathing. Mona Eskandari/UC Riverside

Early demonstrations are already appearing in many areas of medicine, including cardiology, orthopedics, and oncology. Soon, doctors will also be able to collaborate across specialties, using a patient-specific virtual twin as the common ground for discussing potential interactions or side effects they couldn’t predict independently.

Although these applications will take some time to become the standard in clinical care, more changes are on the horizon. Real-time data from wearables, for example, could continuously update a patient’s personalized virtual twin. This approach could empower patients to understand and engage more deeply in their care, as they could see the direct effects of medical and lifestyle changes. In parallel, their doctors could get comprehensive data feeds, using virtual twins to monitor progress.

Imagine a digital companion that shows how your particular heart will react to different amounts of salt intake, stress, or sleep deprivation. Or a visual explanation of how your upcoming surgery will affect your circulation or breathing. Virtual twins could demystify the body for patients, fostering trust and encouraging proactive health decisions.


A new era of healing

With the Living Heart Project, we’re bringing physics back to physicians. Modern physicians won’t need to be physicists, any more than they need to be chemists to use pharmacology. However, to benefit from the new technology, they will need to adapt their approach to care.

This means no longer seeing the body as a collection of discrete organs and considering only symptoms, but instead viewing it as a dynamic system that can be understood, and in most cases, guided toward health. It means no longer guessing what might work but knowing—because the simulation has already shown the result. By better integrating engineering principles into medicine, we can redefine it as a field of precision, rooted in the unchanging laws of nature. The modern physician will be a true physicist of the body and an engineer of health.


Tech

EV Nigeria: Kit-Based Approach Fuels EV Transition


A growing number of Nigerian companies are turning to kit-based assembly to bring electric vehicles to market in Africa. Lagos-based Saglev Micromobility Nigeria recently partnered with Dongfeng Motor Corporation, in Wuhan, China, to assemble 18-seat electric passenger vans from imported kits.

Kit-based assembly allows Nigerian firms to reduce costs, create jobs, and develop local technical expertise—key steps toward expanding EV access. Fully assembled and imported EVs face high tariffs that put them out of reach for many African consumers, whereas kit-based approaches make electric mobility more affordable today. Saglev’s initiative reflects a broader trend: CIG Motors, NEV Electric, and regional players in Côte d’Ivoire, Ghana, and Kenya are also leveraging imported kits to build local EV ecosystems, signaling that parts of West Africa are intent on catching up with global electrification efforts.

Expanding the Local EV Ecosystem

CIG Motors operates a kit-assembly plant in Lagos producing vehicles from Chinese automakers GAC Motor and Wuling Motors. These vehicles include the Wuling Bingo, a compact five-door electric hatchback, and the Hongguang Mini EV Macaron, a microcar with roughly 200 kilometers of range aimed at ride-share operators looking for ultralow-cost urban transport. NEV Electric focuses on electric buses and three-wheelers for urban transit and last-mile delivery.

Saglev’s CEO, Olu Faleye, emphasizes that Nigeria’s EV transition addresses practical economic needs as well as environmental goals. Beyond passenger transport, electric vehicles could help reduce one of Nigeria’s persistent agricultural challenges: post-harvest spoilage. Nigeria loses an estimated 30–40 million tonnes of food annually because of weak logistics and limited refrigeration infrastructure, according to the Organization for Technology Advancement of Cold Chain in West Africa.


Electric vans, mini-trucks, and three-wheel cargo vehicles could help close this gap because their batteries can power refrigeration systems during transport without relying on costly diesel fuel. As EV adoption grows and charging infrastructure expands, temperature-controlled transport could become more affordable, reducing spoilage, improving farmer incomes, and helping stabilize food supplies, the organization says.


Beyond Nigeria, Mombasa, Kenya–based Associated Vehicle Assemblers has begun assembling electric taxis and minibuses from imported kits, and Ghana’s government is spurring kit-car assembly there under its national Automotive Development Plan. In Ghana, assemblers benefit from import-duty exemptions on kits and equipment, corporate tax breaks, and access to industrial infrastructure. Saglev is already availing itself of those benefits at its kit-assembly plant in Accra. The company also plans to expand its assembly operations to Côte d’Ivoire.

Infrastructure Challenges and Workarounds

Despite these signs that West Africa’s EV ecosystem is gaining traction, limited grid reliability and sparse public charging infrastructure remain major barriers to widespread EV adoption. Urban households in Nigeria experience roughly six or seven blackouts per week, each lasting about 12 hours, according to Nigeria’s National Bureau of Statistics. That’s more downtime each day than the average U.S. household experiences in a year. More than 40 percent of households rely on generators, which supply about 44 percent of residential electricity, according to research by Stears and Sterling Bank.
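The arithmetic behind that comparison, using the Bureau’s figures as reported above, works out roughly as follows (6.5 is simply the midpoint of “six or seven” blackouts per week):

```python
# Rough downtime arithmetic from the National Bureau of Statistics
# figures cited above: ~6.5 blackouts per week, ~12 hours each.
outages_per_week = 6.5
hours_per_outage = 12

weekly_downtime = outages_per_week * hours_per_outage  # 78 hours per week
daily_downtime = weekly_downtime / 7                   # ~11.1 hours per day

print(f"Average downtime: {daily_downtime:.1f} hours per day")
```

About 11 hours of outages per day, which is indeed far more than a typical U.S. household sees in an entire year.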


Many early EV adopters therefore charge vehicles using gasoline or diesel generators. Faleye notes that Nigerians have long relied on such workarounds and expects fossil fuels to remain part of the EV charging equation for the foreseeable future—at least until falling costs for solar panels and battery storage make cleaner charging viable.

He acknowledges that charging EVs with hydrocarbons is fraught from an environmental perspective, but he points out that the practice still delivers other benefits of EVs, including lower maintenance costs and their synergies with refrigerated transport and logistics. And he points to a 2020 peer-reviewed study in the journal Environmental and Climate Technologies that compared the overall efficiency of internal combustion vehicles and electric vehicles across the full well-to-wheel energy chain. The study’s conclusion: Even after accounting for conversion losses, generating electricity with a diesel or gasoline generator to power an electric vehicle can be just as efficient overall as burning the same fuel directly in a vehicle’s internal combustion engine.
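That well-to-wheel comparison can be sketched with rough numbers. The efficiency figures below are illustrative assumptions (typical textbook values), not figures taken from the study:

```python
# Illustrative well-to-wheel comparison. All efficiency values here are
# assumed textbook-style figures, NOT numbers from the Environmental and
# Climate Technologies study cited above.

# Path 1: burn the fuel directly in an internal combustion engine.
ice_tank_to_wheel = 0.25        # assumed ICE drivetrain efficiency

# Path 2: burn the same fuel in a generator, charge an EV, then drive.
generator_eff = 0.35            # assumed fuel-to-electricity efficiency
charge_and_drive_eff = 0.75     # assumed charger + battery + motor chain

ev_via_generator = generator_eff * charge_and_drive_eff  # ~0.26

print(f"ICE path:          {ice_tank_to_wheel:.0%}")
print(f"Generator-EV path: {ev_via_generator:.0%}")
```

Under these assumptions the two paths land within a few percentage points of each other, which is the qualitative point the study makes.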

Scalable EV Adoption in Nigeria

The approach taken by Saglev and other Nigerian kit-car builders shows how local assembly can advance EV adoption even where infrastructure remains unreliable. By starting with kits, companies can deploy practical electric mobility solutions now while building the supply chains and technical expertise needed for more resource-intensive localized production.

Still, when asked whether Saglev plans to eventually move beyond kit assembly to independent design and manufacturing of EVs, Faleye calls such a move impractical.


“I don’t believe that the promised land is making a fully built EV on the ground here,” he says. “For me to do efficient vehicle manufacturing, I’d need a lot of robotics and 3D printing. That expense is unnecessary—it would just increase costs and make EVs more expensive.”

In a country where electricity can disappear for days, Nigeria’s kit-based EV strategy highlights a practical truth: incremental progress and ingenuity may matter more than perfect infrastructure. For Saglev, every kit-based vehicle rolling off the line is not just a van or bus—it’s a step toward an EV ecosystem that works for Nigeria’s realities today.


Tech

Scientists can now build a cyborg cockroach that can pull miniature rigs through pipelines to find leaks



The program is led by Hirotaka Sato, a professor at NTU’s School of Mechanical and Aerospace Engineering and a recognized pioneer in the field of cyborg insects. His work first gained international attention years ago when he achieved the first remotely controlled flight of a cyborg beetle – a milestone…

Tech

inKONBINI Lets You Spend Summer Days Behind the Register


inKONBINI: One store. Many Stories transports you to a small-town convenience store during a peaceful Japanese summer in the early 1990s. Makoto Hayakawa, a college student on a break from school, walks into her aunt’s shop and discovers that working there is far more intriguing than simply stacking shelves. Each time you set the shelves exactly right, the bell above the door rings, a customer walks in, and you slip a little further into a world where regular hours reveal all sorts of surprising connections.



As the days pass, the pace slows, with you replenishing shelves with all of the traditional snacks and drinks of the time, tidying up the display so it catches the light just right, and taking orders for the regulars who keep coming in searching for the same thing. Nothing is hard or time-consuming, but there’s a calm satisfaction in getting it all done, and you find yourself falling into a routine that seems very fulfilling on its own.




People arrive throughout the day, including a regular who orders the same drink every afternoon and always stops for a brief conversation. Another customer may linger a little longer, and you must decide how to respond. Your choices matter; one polite word or question might reveal a completely different side of someone’s story, hinting at how their life continues after they leave the shop. As you see them again and again, you get a sense of how your own summer is intertwined with theirs.

Exploring the shop is also an important part of the game; you can roam through every inch of the single-level store, checking the nooks and crannies behind the counters and at the back of the shelves for the little notes and lost items that get left behind. Each of these minor finds reveals a bit more about the shop’s history. A landline phone on the wall lets you call your aunt for advice, to sort out a problem, or simply to chat with a customer you’ve become friendly with. Each call feels like reaching out to a whole world beyond the shop that continues to exist just out of sight.

The day is broken up by little surprises that keep things interesting, such as the old gachapon machine in the corner, waiting for coins. You turn the crank and get a capsule with a small toy inside; it’s a minor thing, but enough to pique your curiosity after a long morning behind the register. inKONBINI arrives on April 30, 2026, for the Nintendo Switch, Switch 2, PlayStation 5, Xbox consoles, and PC via Steam.


Tech

Your Phone Pinging Hijacks Your Brain for 7 Seconds, Study Finds


The soft ping or buzz on your phone that lets you know a new message has arrived is hard to ignore. But it can mean trouble when it’s time to concentrate on a task, according to a new study that will be published in the June issue of the journal Computers in Human Behavior. 

The study found that every message notification we receive interrupts our concentration for about 7 seconds. The type of information we see in the notification also matters: the more personally relevant the notification, the larger the distraction.

“This interruption likely arises from several mechanisms, such as [a notification’s] perceptual prominence, the conditioning acquired through repeated exposure, and the possible social significance,” Hippolyte Fournier, a postdoctoral fellow at the University of Lausanne in Switzerland and the study’s first author, told CNET.


While 7 seconds may not seem like much, we get a lot of notifications throughout the day, and those seconds can add up. 

“We observed that both the volume of notifications and how often individuals check their smartphones were linked to greater disruption,” Fournier said. “This pattern suggests that the fragmented nature of smartphone use, rather than simply total usage duration, may be a key factor in understanding how digital technologies influence attentional processes.”

Attention hijack

The study used a Stroop task, a test that measures how quickly you can process information and how well you can focus. In the test, color words flash across a screen, and each word is printed in an ink color that differs from the color it names. So the word “blue” might be written in green ink.

You have to identify the ink color and ignore the color the word spells out. It’s a lot harder than it sounds. You can take the test yourself using this YouTube video.
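The trial structure is simple enough to sketch in a few lines of code. This is a generic illustration of how Stroop-style trials are typically generated, not code from the study itself:

```python
import random

# Minimal sketch of a Stroop-style trial generator. Each trial pairs a
# color word with an ink color; "incongruent" trials (word != ink) are
# the ones that require suppressing the automatic reading response.
COLORS = ["red", "green", "blue", "yellow"]

def make_trial(rng: random.Random) -> dict:
    word = rng.choice(COLORS)  # the color the word names
    ink = rng.choice(COLORS)   # the color the word is printed in
    return {"word": word, "ink": ink, "congruent": word == ink}

rng = random.Random(0)  # fixed seed so trial lists are reproducible
trials = [make_trial(rng) for _ in range(10)]
incongruent = [t for t in trials if not t["congruent"]]
print(f"{len(incongruent)} of {len(trials)} trials are incongruent")
```

In a real experiment, each trial would be timed from stimulus onset to the participant’s response, and notifications would be injected between or during trials.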

The researchers recruited 180 university students for the study. The students were randomly split up into three groups. All students received a Stroop task, and notifications popped up on the screen as they completed the test. But the researchers slightly changed the experiment for each group.

The researchers told the first group that the screen was mirroring their personal phones, so the students thought they were seeing their real notifications.

The second group saw pop-ups on the screen that looked like real social media notifications, but the group knew they were false. This helped the researchers test how learned habits impact attention, without personal relevance. 


The third group saw only blurry notifications, with illegible text. The researchers used this test to determine how the visual distraction of an unexpected pop-up affected the group’s attention. 

The notifications slowed students’ ability to process information by about 7 seconds across all three groups. But for students who thought they were getting real notifications, the delay was more pronounced. 

“Although it is well documented that notifications can automatically attract attention, far less is understood about the cognitive processes that drive this attentional capture and the reasons why some people may be more susceptible than others,” Fournier said. “Our objective was to gain a better understanding of both the underlying mechanisms and the individual differences that could account for this variability in sensitivity.”

Brain delay

In the US, 90% of all people own a smartphone, according to Pew Research, and a Harmony Healthcare IT study found that we spend over 5 hours a day using them. But how long we spend on our phones may not matter as much as how often we check our notifications.


“In a lab study designed to mimic real-life notification exposure, we found that the frequency of notifications and checking habits mattered more than total screen time,” Fabian Ringeval, another of the paper’s authors, wrote in a LinkedIn post. “The more often we interact with our phones, the more vulnerable our attention becomes to interruption.” 

Anna Lembke, a psychiatry professor at Stanford, told CNET that the study mirrors what she sees clinically and in research literature, “namely that the level of engagement — for example how many notifications a person gets and how quickly they respond to notifications — is as big a predictor, or an even bigger predictor, of harmful, problematic use than time spent.”

Researchers found that study participants received about 100 notifications per day. So the notifications we get on our phones could be slowing down our cognitive abilities through near-constant distraction. 
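Combining the study’s two figures gives a rough sense of the daily cost:

```python
# Back-of-envelope total from the two figures reported above:
# ~100 notifications per day, each disrupting attention for ~7 seconds.
notifications_per_day = 100
seconds_lost_each = 7

total_seconds = notifications_per_day * seconds_lost_each  # 700 seconds
total_minutes = total_seconds / 60                         # ~11.7 minutes

print(f"~{total_minutes:.1f} minutes of disrupted attention per day")
```

Nearly 12 minutes a day of fragmented attention, before counting the time it takes to refocus on the original task.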

“In everyday situations that require continuous attention — like driving or learning — even short slowdowns can add up,” Ringeval wrote. “Our findings suggest that improving digital well-being may be less about ‘using our phones less’ and more about reducing unnecessary interruptions.”


Lembke said it’s fair to worry about how smartphone notifications impact our attention, “which is why platforms for minors should silence notifications by default and make it difficult to re-activate notifications without parental consent, and why adults should electively turn off notifications to improve concentration and well-being, with rare exceptions for safety reasons.”
