S’pore saw the biggest drop in job postings in 5 yrs

Just as hiring appeared to be picking up, Singapore’s job market has reversed course. Job postings have fallen to the lowest level since Mar 2021, according to the latest report from job listing portal Indeed.

In Feb 2026, postings dropped 4.5% to sit 12% below a year ago, a decrease that has more than offset three consecutive months of gains.

Still, the headline figures only tell part of the story. Hiring trends vary significantly across sectors, with gains in some occupations offset by steep declines in others.

Here’s a breakdown of how job postings have shifted across different roles:


The winners & losers

Indeed provides a rolling three-month breakdown of the professions with the largest swings in demand, so we can see which jobs are gaining ground and which are falling out of favour.

Job postings rose in around half of occupational categories over the past three months, led by gains in IT infrastructure, operations & support (+19%), arts & entertainment (+16%) and software development (+15%).

Interestingly, some of the strongest gains were recorded in occupations that have a high exposure to AI transformation.

But those gains were offset by steep declines elsewhere.

Postings fell sharply in childcare (-29%), dental (-23%) and civil physicians & surgeons (-18%), with education and healthcare among the sectors seeing the most pronounced pullback in recent months.


More remote work opportunities

The report also found that remote work is gradually gaining ground.

In February, 8.6% of all job postings explicitly mentioned terms like “work from home” or “work remotely.” That’s a slight increase from the 8.4% recorded a year ago, and has climbed from a post-pandemic low of 6.9% in late 2022. 

Remote work opportunities are most common in IT systems & solutions at 15.6% of postings in the February quarter, ahead of sales (15.5%) and media & communications (14.0%). 

Large gains were also seen in occupations that have traditionally offered few remote or hybrid opportunities. Social sciences (+4.5%) and real estate (+3.5%) led the way.

But not all sectors are moving in the same direction.


Remote postings fell sharply in insurance (-7.5%), human resources (-6.0%), architecture (-5.3%), and electrical engineering (-3.6%), underscoring how remote work trends vary widely across occupations.

The labour market remains strong

On the whole, though, Singapore’s labour market remains stable and strong despite the fall in the total number of openings.

At the end of Feb, job postings were still 32% above pre-pandemic levels.

The post-pandemic job boom in Singapore was so large that even though job postings are down 45% from their peak in Jul 2022, it’s still sufficiently high to keep the unemployment rate low—just 2% at the end of last year.
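
As a rough sanity check of those figures: if pre-pandemic postings are indexed at 100, being 32% above that level while also sitting 45% below the Jul 2022 peak implies the peak was roughly 2.4 times the pre-pandemic level. The index values below are purely illustrative, not Indeed's actual series.

```python
# Rough arithmetic check of the reported figures (illustrative index, not Indeed's data).
pre_pandemic = 100.0                    # index pre-pandemic postings at 100
current = pre_pandemic * 1.32           # "32% above pre-pandemic levels"
peak_jul_2022 = current / (1 - 0.45)    # current level is "down 45%" from the Jul 2022 peak
print(f"Implied Jul 2022 peak: {peak_jul_2022:.0f} (about 2.4x pre-pandemic postings)")
```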


However, the Singapore economy will face some stiff economic headwinds this year as the conflict in the Middle East triggers higher inflation and increased cautiousness from households and businesses alike.

With a tight labour market and solid economic growth last year, Singapore is relatively well positioned to weather these challenges. Nevertheless, the economic outlook has softened in recent weeks, underscoring the risks ahead.

Indeed expects that job opportunities will continue to moderate over the course of 2026. 


Featured Image Credit: Shadow_of_light/ depositphotos


How AI-Powered Identity Verification is Redefining Business Security

Passwords were once the standard of online security. Then came two-factor authentication, followed by security questions, CAPTCHAs, and device fingerprinting. Each new layer was introduced in response to a new threat, and each was ultimately defeated by more advanced scams.

The trend is obvious: any security system that relies on what someone knows or possesses can be stolen, copied, or socially engineered. The one verification layer that is truly hard to counterfeit is who someone is, and that is exactly where artificial intelligence has changed the picture.

AI-powered identity verification is no longer a niche technology deployed only by banks and government agencies. In 2026 it is becoming the baseline security requirement for any business that onboards clients digitally, handles high-value transactions, or operates in a regulated sector. Understanding how it works, why it matters, and how to implement it is now a business competency rather than an IT issue.

The Problem Traditional Security Cannot Solve

Before looking at how AI-driven identity verification works, it helps to understand what it is meant to solve, because the threat landscape has changed drastically.


Credential breaches have made credentials a worthless security signal. IBM's Cost of a Data Breach Report found that in 2024 the average breach exposed more than 25,000 records. Across the thousands of breaches worldwide over the last decade, billions of usernames, passwords, social security numbers, dates of birth, and security-question answers are now for sale on the dark web. With access to such databases, a fraudster can sail through traditional credential checks: the credentials are authentic, they just belong to someone else.

Synthetic identity fraud has created a new breed of criminal. Rather than stealing an existing identity, sophisticated fraudsters build one, pairing a real Social Security number (often that of a child or an elderly person with no credit history) with invented names, addresses, and biographical details. These synthetic identities pass basic verification checks because some of the information is genuine, and they largely slip past traditional rules-based fraud detection systems.

AI-generated deepfakes have defeated selfie-based authentication. The rapid advance of generative AI has produced tools that can create photorealistic fake images, videos, and even real-time video feeds of people who do not exist, within minutes. Systems that rely on a single selfie photo to verify identity are no longer adequate, because fraudsters can submit a deepfake image that is visually indistinguishable from a real photo.

Credential theft, synthetic identity fraud, and AI-generated deepfakes are the three converging threats that next-generation AI-powered identity verification is designed to deal with.


What AI-Powered Identity Verification Actually Does

AI-powered identity verification is not a single technology. It is a multi-layered system of AI models working together to establish, with high confidence, that an individual is who they claim to be.

Document Authentication

The first layer is document checking. A user submits a government-issued identity document (a passport, driver's license, or national ID card), and an AI model compares it against thousands of known document templates from around the world.

The analysis goes far beyond checking whether the document looks real. Machine learning models trained on millions of genuine and forged documents examine microprint quality, UV patterns, holographic elements, font authenticity, the integrity of the MRZ (Machine Readable Zone) data, and pixel-level anomalies that signal editing or manipulation. Even subtly doctored documents are detected within seconds.
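
To make the idea concrete, here is a hedged sketch of how such forensic signals might be aggregated into a single authenticity score. The signal names, weights, and scoring rule are illustrative assumptions, not any vendor's actual model; real systems learn these checks from training data.

```python
# Hypothetical sketch of aggregating document-forensics signals into one confidence score.
from dataclasses import dataclass

@dataclass
class DocumentSignals:
    microprint_quality: float   # 0.0-1.0, from a print-quality model
    uv_pattern_present: bool    # UV security features detected
    hologram_present: bool      # holographic elements detected
    font_authentic: float       # 0.0-1.0, font consistency score
    mrz_checksum_valid: bool    # Machine Readable Zone checksums pass
    pixel_anomaly_score: float  # 0.0-1.0, higher means likely manipulation

def document_confidence(s: DocumentSignals) -> float:
    """Combine per-signal scores into a single authenticity confidence (illustrative weights)."""
    score = 0.25 * s.microprint_quality + 0.25 * s.font_authentic
    score += 0.15 * s.uv_pattern_present + 0.15 * s.hologram_present
    score += 0.20 * s.mrz_checksum_valid
    return score * (1.0 - s.pixel_anomaly_score)  # manipulation evidence discounts everything

signals = DocumentSignals(0.9, True, True, 0.95, True, 0.05)
print(f"Document authenticity confidence: {document_confidence(signals):.2f}")
```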


Modern document verification systems can check more than 14,000 document types from over 190 countries, a breadth of coverage that would not be feasible to achieve manually.

Biometric Face Matching

Once the document has been verified, the system compares the face on the document to a live selfie or video submitted by the person claiming to be the document holder. AI facial recognition models measure the geometric relationships between facial features, such as the distance between the eyes, nose shape, and jaw angle, and calculate a confidence score for the match.

The process is fast, precise, and far more dependable than visual inspection by a human. Research by the National Institute of Standards and Technology (NIST) has consistently found that the best facial recognition algorithms outperform human examiners at face matching, especially across changes in lighting, angle, and age.
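
A minimal sketch of the matching step, assuming faces have already been reduced to embedding vectors by a trained model. The embedding values and threshold below are made up for illustration and would be tuned per deployment.

```python
# Illustrative biometric face matching: compare two face embeddings and turn the
# similarity into a match decision. Embedding values are invented for this example.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

document_face = [0.12, 0.80, 0.35, 0.41]   # embedding extracted from the ID photo
selfie_face   = [0.10, 0.78, 0.33, 0.45]   # embedding extracted from the live selfie

confidence = cosine_similarity(document_face, selfie_face)
MATCH_THRESHOLD = 0.90  # tuned to balance false accepts against false rejects
print(f"match confidence={confidence:.3f}, match={confidence >= MATCH_THRESHOLD}")
```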


Liveness Detection

This is the layer that specifically targets deepfake fraud, and it is where AI has made the most critical progress.

Liveness detection determines whether the face presented belongs to a real, physically present human being or is a photograph, printed mask, video recording, or AI-generated deepfake. Passive liveness detection examines a single image for subtle signs of non-liveness: texture anomalies, unnatural light reflection, an absence of micro-movements, or compression artifacts that suggest a screen capture. Active liveness detection asks the user to perform randomized actions (blink, turn their head, smile) that a still image cannot reproduce and that are computationally infeasible for a live deepfake to spoof.

Combining passive and active liveness detection has raised the bar for deepfake attacks to the point where the cost of a successful attack usually exceeds the value of the fraud, making AI-generated identity fraud uneconomical for most criminal operations.
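
As a rough illustration of how the two modes combine into one decision, here is a sketch with the individual checks stubbed out as booleans. In practice each check is a trained model returning a score, and the challenge set is randomized per session.

```python
# Hedged sketch of combining passive and active liveness checks into one decision.
def passive_liveness_ok(texture_ok: bool, reflections_natural: bool,
                        micro_movement: bool, no_screen_artifacts: bool) -> bool:
    # All passive signals must look consistent with a live, physically present face.
    return all([texture_ok, reflections_natural, micro_movement, no_screen_artifacts])

def active_liveness_ok(completed_challenges: set[str], required: set[str]) -> bool:
    # The user must perform every randomized challenge (blink, turn head, smile).
    return required.issubset(completed_challenges)

required_challenges = {"blink", "turn_head_left", "smile"}  # randomized per session
is_live = passive_liveness_ok(True, True, True, True) and \
          active_liveness_ok({"blink", "turn_head_left", "smile"}, required_challenges)
print("live subject" if is_live else "possible spoof or deepfake")
```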


Cross-Referencing of Data and AML Screening

Beyond the biometric layers, AI-powered identity verification systems cross-check the verified identity against external databases in real time. This includes global sanctions lists, Politically Exposed Persons (PEP) databases, adverse media sources, and watchlists maintained by bodies such as OFAC, the UN, and the EU.

This AML screening layer makes identity verification a compliance tool as well as a security tool: businesses can satisfy their Know Your Customer (KYC) and Anti-Money Laundering (AML) obligations as part of the verification check rather than as a separate downstream process.
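
A minimal sketch of that cross-referencing step, with placeholder watchlists standing in for the continuously updated databases and the fuzzy name matching that real screening services use.

```python
# Minimal sketch of AML screening: cross-reference a verified name against watchlists.
# The lists and exact-match lookup are placeholders for real sanctions/PEP databases.
SANCTIONS_LIST = {"jane example", "acme shell co"}   # placeholder entries
PEP_LIST = {"john placeholder"}                      # placeholder entries

def aml_screen(full_name: str) -> dict:
    name = full_name.strip().lower()
    return {
        "sanctions_hit": name in SANCTIONS_LIST,
        "pep_hit": name in PEP_LIST,
        "requires_enhanced_due_diligence": name in SANCTIONS_LIST or name in PEP_LIST,
    }

print(aml_screen("Jane Example"))   # flags a sanctions hit in this toy example
```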

Why It Matters for Business Security, Not Just Compliance

The compliance case for AI-based identity verification is well established. In practically every jurisdiction, financial services firms, fintechs, and other regulated businesses are required to run KYC and AML processes, and failing to do so carries substantial financial penalties and reputational risk.


However, the business security case is far bigger than regulatory compliance, and for most businesses the non-compliance risks pale in comparison with the direct fraud losses that weak identity verification invites.

Account takeover fraud hits businesses directly. Once a fraudster successfully impersonates a genuine customer during onboarding or account recovery, they gain access to the account, its payment methods, and any stored credit. The resulting chargebacks, fraud losses, and dispute settlements fall largely on the business rather than the card network. For e-commerce companies and fintech apps, account takeover fraud is a significant and growing direct operating cost.

New account fraud creates debts that will never be repaid. Synthetic identity fraud typically culminates in so-called bust-out schemes, in which a fraudster builds up credit exposure across multiple products and then defaults on all of them at once. For lenders, credit providers, and buy-now-pay-later platforms, the losses from a single synthetic identity nurtured over months can run into tens of thousands of dollars.

Reputational damage compounds the financial loss. When a business's customers are defrauded through a security failure on its platform, the damage goes well beyond the direct financial hit. Customer churn, media attention, and regulatory scrutiny after a fraud incident can cost more than the fraud itself, especially for a business whose product is trust.


AI-based identity checks at the onboarding stage shut down the vast majority of these attack vectors before a fraudulent account is ever created. Avoiding compliance penalties is only part of the payback; the larger return is avoiding downstream fraud losses that grow as the business scales.

Real-World Implementation: What Companies Should Know

When business leaders evaluate AI-powered identity verification, the practical considerations extend well beyond the technology itself.

Deployment Should Be API-First

Modern identity verification systems are deployed through API integration, linking your onboarding flow to the verification service without the customer ever leaving your site. This preserves the customer experience and enables near-instant verification decisions. Look for providers that offer SDKs for mobile apps and webhook-based delivery of decisions to keep onboarding latency low.
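
In practice, webhook delivery usually means the provider POSTs the final decision to an endpoint you host. The sketch below assumes a hypothetical payload with applicant_id and decision fields; the actual schema, and details such as signature verification, come from your provider's documentation.

```python
# Hedged sketch of receiving a webhook-delivered verification decision (Flask).
# Field names are hypothetical; verify request signatures per your provider's docs.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhooks/identity-verification", methods=["POST"])
def verification_decision():
    payload = request.get_json(force=True)
    applicant_id = payload.get("applicant_id")   # hypothetical field
    decision = payload.get("decision")           # e.g. "approved" / "rejected" / "review"
    if decision == "approved":
        activate_account(applicant_id)
    elif decision == "review":
        queue_for_manual_review(applicant_id)
    else:
        reject_onboarding(applicant_id)
    return jsonify({"received": True}), 200

# Stubs standing in for your own onboarding logic.
def activate_account(applicant_id): ...
def queue_for_manual_review(applicant_id): ...
def reject_onboarding(applicant_id): ...
```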


Verification Flows Should Be Configurable by Risk Level

Not all customers pose the same fraud risk, and not all transactions need the same level of verification. An effective AI-based solution lets companies configure verification flows according to risk signals: lightweight document verification for low-risk transactions, and full biometric verification with liveness detection for high-value or high-risk onboarding. This risk-based model preserves conversion rates for legitimate customers and concentrates verification resources where the fraud risk is greatest.
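
A sketch of what that routing logic might look like. The risk signals, scores, and thresholds are illustrative assumptions rather than any provider's actual rules.

```python
# Illustrative risk-based routing: low risk gets lightweight checks, high risk gets
# the full biometric flow. Signals and thresholds are examples only.
def choose_verification_flow(transaction_value: float, country_risk: str,
                             prior_fraud_signals: int) -> list[str]:
    risk = 0
    risk += 2 if transaction_value > 10_000 else 0
    risk += 2 if country_risk == "high" else 0
    risk += 3 if prior_fraud_signals > 0 else 0

    if risk >= 4:
        return ["document_check", "face_match", "liveness_detection", "aml_screening"]
    if risk >= 2:
        return ["document_check", "face_match", "aml_screening"]
    return ["document_check"]

print(choose_verification_flow(500.0, "low", 0))       # -> ['document_check']
print(choose_verification_flow(25_000.0, "high", 1))   # -> full biometric flow
```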

Audit Trails Are Not Negotiable

Every verification decision, whether approved, rejected, or flagged for manual review, should be recorded with a timestamp, the specific verification methods used, the confidence scores those methods returned, and the supporting documentation. That audit trail is essential for regulatory audits, chargeback disputes, and internal fraud investigations. Firms subject to FINTRAC, GDPR, or other regulatory regimes must be able to produce these records on request, usually within 30 days or less.
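
A minimal sketch of such a record, written as an append-only JSON line. The field names are illustrative, and a production system would use a tamper-evident store rather than a local file.

```python
# Sketch of an audit-trail record covering timestamp, methods, confidence, outcome,
# and document references. Field names are assumptions for illustration.
import json
from datetime import datetime, timezone

def log_verification_decision(applicant_id: str, outcome: str,
                              methods: dict[str, float], documents: list[str]) -> str:
    record = {
        "applicant_id": applicant_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "outcome": outcome,                 # "approved" | "rejected" | "manual_review"
        "methods_and_confidence": methods,  # e.g. {"document_check": 0.97, "face_match": 0.93}
        "documents": documents,             # references to stored evidence, not raw images
    }
    line = json.dumps(record)
    with open("verification_audit.log", "a") as f:   # in practice: an append-only store
        f.write(line + "\n")
    return line

log_verification_decision("app-123", "approved",
                          {"document_check": 0.97, "face_match": 0.93}, ["doc-ref-456"])
```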


Human Review Escalation Should Be Built In

AI verification systems are highly accurate, but no automated system can be 100 percent confident in every case. Good implementations route cases where AI confidence falls below a set threshold, often around 5-10% of all verifications, into a review queue. Human reviewers handle these flagged edge cases, and their verdicts feed back into further improvement of the model.
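
In code, the gate can be as simple as a threshold check. The threshold value below is an assumption, chosen so that only a small share of cases escalates.

```python
# Minimal sketch of a confidence-based escalation gate.
REVIEW_THRESHOLD = 0.85   # illustrative; tuned so roughly 5-10% of cases are escalated

def route_decision(confidence: float, auto_decision: str) -> str:
    if confidence < REVIEW_THRESHOLD:
        return "manual_review"        # a human reviewer sees the flagged case
    return auto_decision              # "approved" or "rejected" stands as-is

print(route_decision(0.97, "approved"))   # -> approved
print(route_decision(0.72, "rejected"))   # -> manual_review
```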

Select a Partner with Worldwide Document Coverage

If you have customers in multiple countries, your identity verification provider must support the document types issued in those countries. A system optimized for North American documents will falsely reject an unacceptably high share of customers holding Southeast Asian, Middle Eastern, or African identity documents. Solutions such as Shufti Pro's document verification can handle documents issued in 190+ countries, an essential capability for businesses with an international clientele.


The Competitive Advantage of Getting This Right

The divide between companies that have invested in solid identity verification infrastructure and those that have not is widening, and the difference has repercussions beyond losses in fraud.

Payment processor relationships hinge on fraud metrics. Card networks and payment processors closely monitor chargeback and fraud rates. Companies that keep fraud low through proper identity checking earn better processing rates, higher transaction limits, and preferential merchant terms. Companies with higher fraud rates face higher fees, processing delays, and, in the worst case, termination of their merchant accounts.

Security posture also determines whether you can win enterprise clients. Large enterprises, especially in financial services, healthcare, and government contracting, run vendor security assessments before signing contracts. A documented, auditable identity verification and fraud prevention program is becoming a condition of winning enterprise business rather than a differentiator.

Investors scrutinize fraud infrastructure during due diligence. For growth-stage businesses raising capital, fraud prevention is part of the diligence checklist. Fintech, e-commerce, and SaaS investors want to see security fundamentals that can scale, because fraud losses that are manageable early on become existential at the growth stage if the infrastructure is missing.


The Future of AI Identity Verification

The technology does not stand still. Several trends already underway are reshaping AI-driven identity verification in 2026 and beyond.

Continuous authentication extends beyond onboarding. Rather than verifying identity only at account creation, AI systems are starting to track behavioral signals such as typing patterns, mouse movements, and transaction activity in real time throughout a user session, flagging anomalies that may indicate account takeover.

Regulators in both the EU and Canada are moving toward decentralized identity frameworks, in which verified credentials are held by the user rather than by individual businesses. These frameworks reduce the data liability businesses currently take on by storing identity documents and biometric data.

Regulatory requirements keep rising worldwide. Canada's FINTRAC, the EU's AML package, and frameworks across Asia-Pacific are raising identity verification standards: what is best practice today will be the legal minimum tomorrow.


Conclusion

The paradigm shift that AI-enabled identity verification represents is a move from reactive to proactive security. Conventional methods identified fraud only after it happened, through chargebacks, account reviews, and forensic audits. AI-based verification detects fraud as it is attempted: before a fraudulent account is created, before a stolen identity is impersonated, before a deepfake passes an onboarding check.

For businesses that are scaling, that change is not merely a security upgrade. It is a foundation. Fraud losses that are survivable at a small scale become devastating at the growth stage. Businesses that build strong identity verification infrastructure early grow without the compounding drag of fraud costs, compliance failures, and reputational incidents.

With the cost of impersonation in a digital economy falling to almost zero, the cost of not authenticating identity is increasing year after year. 


Decentralized AI Training Turns Homes Into Data Hubs

Artificial intelligence harbors an enormous energy appetite. Such constant cravings are evident in the hefty carbon footprint of the data centers behind the AI boom and the steady increase over time of carbon emissions from training frontier AI models.

No wonder big tech companies are warming up to nuclear energy, envisioning a future fueled by reliable, carbon-free sources. But while nuclear-powered data centers might still be years away, some in the research and industry spheres are taking action right now to curb AI’s growing energy demands. They’re tackling training as one of the most energy-intensive phases in a model’s life cycle, focusing their efforts on decentralization.

Decentralization allocates model training across a network of independent nodes rather than relying on one platform or provider. It allows compute to go where the energy is—be it a dormant server sitting in a research lab or a computer in a solar-powered home. Instead of constructing more data centers that require electric grids to scale up their infrastructure and capacity, decentralization harnesses energy from existing sources, avoiding adding more power into the mix.

Hardware in harmony

Training AI models is a huge data center sport, synchronized across clusters of closely connected GPUs. But as hardware improvements struggle to keep up with the swift rise in size of large language models, even massive single data centers are no longer cutting it.


Tech firms are turning to the pooled power of multiple data centers—no matter their location. Nvidia, for instance, launched the Spectrum-XGS Ethernet for scale-across networking, which “can deliver the performance needed for large-scale single job AI training and inference across geographically separated data centers.” Similarly, Cisco introduced its 8223 router designed to “connect geographically dispersed AI clusters.”

Other companies are harvesting idle compute in servers, sparking the emergence of a GPU-as-a-Service business model. Take Akash Network, a peer-to-peer cloud computing marketplace that bills itself as the “Airbnb for data centers.” Those with unused or underused GPUs in offices and smaller data centers register as providers, while those in need of computing power join as tenants who choose among providers and rent their GPUs.

“If you look at [AI] training today, it’s very dependent on the latest and greatest GPUs,” says Akash cofounder and CEO Greg Osuri. “The world is transitioning, fortunately, from only relying on large, high-density GPUs to now considering smaller GPUs.”

Software in sync

In addition to orchestrating the hardware, decentralized AI training also requires algorithmic changes on the software side. This is where federated learning, a form of distributed machine learning, comes in.


It starts with an initial version of a global AI model housed in a trusted entity such as a central server. The server distributes the model to participating organizations, which train it locally on their data and share only the model weights with the trusted entity, explains Lalana Kagal, a principal research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) who leads the Decentralized Information Group. The trusted entity then aggregates the weights, often by averaging them, integrates them into the global model, and sends the updated model back to the participants. This collaborative training cycle repeats until the model is considered fully trained.
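
A toy sketch of one such round, with the model reduced to a short list of weights and local training stubbed out, purely to make the sequence of steps concrete (this is not any particular framework's implementation).

```python
# Minimal sketch of one federated-learning round as described above: the trusted
# entity sends the global model out, participants train locally and return only
# their weights, and the server averages them. Real systems run actual training.
def local_training(global_weights: list[float], local_data_bias: float) -> list[float]:
    # Stand-in for local training: nudge each weight toward the participant's data.
    return [w + 0.1 * (local_data_bias - w) for w in global_weights]

def federated_round(global_weights: list[float], participants: list[float]) -> list[float]:
    local_updates = [local_training(global_weights, bias) for bias in participants]
    # Aggregate by simple averaging (federated averaging) and update the global model.
    return [sum(ws) / len(ws) for ws in zip(*local_updates)]

weights = [0.0, 0.0, 0.0]
for _ in range(5):                         # repeat until the model is considered trained
    weights = federated_round(weights, participants=[1.0, 2.0, 3.0])
print([round(w, 3) for w in weights])
```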

But there are drawbacks to distributing both data and computation. The constant back and forth exchanges of model weights, for instance, result in high communication costs. Fault tolerance is another issue.

“A big thing about AI is that every training step is not fault-tolerant,” Osuri says. “That means if one node goes down, you have to restore the whole batch again.”

To overcome these hurdles, researchers at Google DeepMind developed DiLoCo, a distributed low-communication optimization algorithm. DiLoCo forms what Google DeepMind research scientist Arthur Douillard calls “islands of compute,” where each island consists of a group of chips. Different islands can use different chip types, but chips within an island must all be of the same type. Islands are decoupled from each other, and synchronizing knowledge between them happens once in a while. This decoupling means islands can perform training steps independently without communicating as often, and chips can fail without having to interrupt the remaining healthy chips. However, the team’s experiments found diminishing performance after eight islands.
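
The following toy sketch, written purely for illustration, captures that structure: each island runs many local steps on its own, and synchronization happens only at the end of each outer round. Real DiLoCo uses proper inner and outer optimizers rather than the plain averaging shown here.

```python
# Conceptual illustration of the DiLoCo idea: long independent inner loops per island,
# infrequent outer synchronization. Not Google DeepMind's implementation.
def local_steps(weights: list[float], n_steps: int, island_gradient: float) -> list[float]:
    for _ in range(n_steps):                       # inner loop: no cross-island traffic
        weights = [w - 0.01 * island_gradient for w in weights]
    return weights

def synchronize(island_weights: list[list[float]]) -> list[float]:
    # Outer step: average the islands' weights (a stand-in for the outer optimizer).
    return [sum(ws) / len(ws) for ws in zip(*island_weights)]

global_weights = [1.0, 1.0]
for outer_round in range(3):                       # synchronization happens only occasionally
    islands = [local_steps(list(global_weights), n_steps=100, island_gradient=g)
               for g in (0.5, 1.0, 1.5)]           # three islands with different data
    global_weights = synchronize(islands)
print([round(w, 3) for w in global_weights])
```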


An improved version dubbed Streaming DiLoCo further reduces the bandwidth requirement by synchronizing knowledge “in a streaming fashion across several steps and without stopping for communicating,” says Douillard. The mechanism is akin to watching a video even if it hasn’t been fully downloaded yet. “In Streaming DiLoCo, as you do computational work, the knowledge is being synchronized gradually in the background,” he adds.

AI development platform Prime Intellect implemented a variant of the DiLoCo algorithm as a vital component of its 10-billion-parameter INTELLECT-1 model trained across five countries spanning three continents. Upping the ante, 0G Labs, makers of a decentralized AI operating system, adapted DiLoCo to train a 107-billion-parameter foundation model under a network of segregated clusters with limited bandwidth. Meanwhile, popular open-source deep learning framework PyTorch included DiLoCo in its repository of fault tolerance techniques.

“A lot of engineering has been done by the community to take our DiLoCo paper and integrate it in a system learning over consumer-grade internet,” Douillard says. “I’m very excited to see my research being useful.”

A more energy-efficient way to train AI

With hardware and software enhancements in place, decentralized AI training is primed to help solve AI’s energy problem. This approach offers the option of training models “in a cheaper, more resource-efficient, more energy-efficient way,” says MIT CSAIL’s Kagal.


And while Douillard admits that “training methods like DiLoCo are arguably more complex, they provide an interesting tradeoff of system efficiency.” For instance, you can now use data centers across far apart locations without needing to build ultrafast bandwidth in between. Douillard adds that fault tolerance is baked in because “the blast radius of a chip failing is limited to its island of compute.”

Even better, companies can take advantage of existing underutilized processing capacity rather than continuously building new energy-hungry data centers. Betting big on such an opportunity, Akash created its Starcluster program. One of the program’s aims involves tapping into solar-powered homes and employing the desktops and laptops within them to train AI models. “We want to convert your home into a fully functional data center,” Osuri says.

Osuri acknowledges that participating in Starcluster will not be trivial. Beyond solar panels and devices equipped with consumer-grade GPUs, participants would also need to invest in batteries for backup power and redundant internet to prevent downtime. The Starcluster program is figuring out ways to package all these aspects together and make it easier for homeowners, including collaborating with industry partners to subsidize battery costs.

Backend work is already underway to enable homes to participate as providers in the Akash Network, and the team hopes to reach its target by 2027. The Starcluster program also envisions expanding into other solar-powered locations, such as schools and local community sites.


Decentralized AI training holds much promise to steer AI toward a more environmentally sustainable future. For Osuri, such potential lies in moving AI “to where the energy is instead of moving the energy to where AI is.”


Avalanche Energy lands share of $5.2M DOD award to develop long-lasting ‘nuclear batteries’

An early prototype of Avalanche Energy’s radiovoltaic converter for the DARPA Rads to Watts program is exposed to high-energy ion-beam irradiation. (Avalanche Photo)

Seattle fusion startup Avalanche Energy was awarded a share of a $5.2 million contract announced Wednesday from the U.S. Department of Defense to develop compact nuclear batteries.

The award comes from the DARPA Rads to Watts program, which is focused on building long-lasting batteries for defense and space applications where chemical batteries, solar power and refueling are not possible.

Avalanche is focused on engineering micro-fabricated energy cells that turn alpha particles emitted by radioactive material into electricity. The process, the team said, is analogous to solar cells converting photons into electricity.

“The goals are to produce a device that has a long lifetime, and that can produce orders of magnitude more power than current technologies,” said Daniel Velázquez, Avalanche’s physicist and materials science lead. The target is a battery that could continuously power a laptop computer, for example, for many months while weighing roughly 10 pounds.

And the timeline is tight. By the end of the 30-month program, the objective is to validate the physics involved and develop a power-producing prototype.


“It’s very ambitious,” Velázquez said.

Avalanche is leading the team tackling DARPA’s nuclear battery challenge, which includes the University of Utah, Caltech, Los Alamos National Laboratory and McQuaide Microsystems.

Others are also working on nuclear batteries, including Seattle’s Zeno Power. The startup plans to demonstrate its first full-scale radioisotope power system this year and commercially produce nuclear batteries by 2027.

While Avalanche is ultimately working to develop a compact device that creates energy from fusion — the reactions that power the sun — the DARPA project feeds directly into that longer-term goal, Velázquez said. There are direct parallels to capturing energy from a nuclear battery and from fusion reactions.


That should help the company compete in the global race to commercialize fusion power, which could provide nearly limitless clean energy. To support domestic enterprises, the Department of Energy is slated to commit a record-setting $135 million over 18 months to accelerate fusion research, Axios reported today.

Demand for new power is spiking with the expansion of data centers and the shift from fossil fuels to electrification.

Since launching in 2018, Avalanche has pursued multiple lines of revenue. Last month, the company announced it’s part of a team receiving $1.25 million from AFWERX, the innovation arm of the Department of the Air Force, to develop advanced materials for extreme environments.

Other efforts include using its fusion machine to produce neutrons for commercial customers; a Pentagon contract to develop technology for space propulsion; and a state grant to launch FusionWERX, a commercial-scale testing facility for fusion technologies in Eastern Washington.


In February, Avalanche announced $29 million in new funding from investors, bringing its total to more than $105 million across venture capital and government grants — a war chest the company is deploying across fusion, propulsion and now compact nuclear batteries.


Fi Mini for Cats Review: Track Your Pets and Monitor Their Activity

Within the app, you can add safe zones, more pets with Fi trackers, and other users who can also track and monitor the pet. There’s a Health tab where you can add and store things like vet records, receipts, and insurance information, and add vets to easily share your pet’s documents and get appointment reminders. You can also set up the Fi app on your Apple Watch to have even quicker access to monitor your pet’s location, activity, and safety (including Lost Mode) without needing a phone.

When you open the app, you’ll see a map with live tracking showing where your pet is currently, as well as a notification of the last time they were outside and where they were. With the latter, you can pull up stats like location timeline, showing where they were and when. If you dive into any day when the tracker left the home, it will recreate the route, following the path and calculating the distance the pet traveled.

There’s also health-monitoring data from activity and sleep tracking, which is most useful for an indoor-only pet like mine. Like other health-tracking collars, stats for sleep and activity aren’t 100 percent accurate, as the app uses GPS to track movement, categorizing “activity” when the animal is moving and “sleep” when the pet is still for a prolonged period. This means that if Basil was awake but stationary, the app may inaccurately categorize this as sleep.

Fi Mini app (Image: Molly Higgins)

In the Rest tab, you can see sleep metrics, including a daily summary of deep sleep, naps, and interruptions during nightly sleep. You can compare this over time, and the app notes how much more or less Basil slept than the night before. It also compares stats historically, by week, month, and year, so you can track trends and better understand your pet’s normal sleep schedule.


The Activity tab is similar, tracking activity by day, week, and month, and noting in the day's timeline when the pet was most active and for how long. It also compares activity to the day before. I liked looking at the weekly report, comparing days across the week to see when he was most active and whether any activity patterns popped up.

For example, I noticed that his sleep-versus-activity schedule was similar to mine, except he was active between 4:45 and 6:30 am (while I was still asleep), because that’s when his automatic feeder goes off for breakfast and my roommate is getting ready to leave for work. He was most active in the evenings, when I feed him dinner, have dedicated playtime, and my roommates are home, so there’s more activity to keep him awake. Historical comparison is also a super helpful way to track whether your pet is sleeping more or becoming more lethargic—an early warning sign of a bigger health problem.

Not Without Its Quirks

Since my cat is indoor-only, I ran some experiments to track location, using GPS on both the Fi Mini tracker and my phone. I also had a friend take the tracker out without my phone nearby to see whether I’d get pinged that “Basil” had left the safe zone.

Although it is better than not being alerted at all, the Fi’s GPS has limitations (as did the Tractive tracker I tested). It needs a strong signal to communicate with cell towers for accurate location. If your phone is close to the smart collar (via Bluetooth), it uses that instead of the Fi’s GPS, making it more accurate and alerting quicker. If the pet gets loose and is out of range of your phone, it uses the collar’s cellular antenna (in this case, Verizon cell towers). But because the Fi’s antenna isn’t as strong as a phone’s, location accuracy is lower, and the connection can be very spotty, especially if your pet is in the country or on acreage where cell towers are farther away.


Eurail says December data breach impacts 300,000 individuals


Eurail B.V., a European travel operator that provides digital passes covering 33 national railways, says attackers stole the personal information of over 300,000 individuals in a December 2025 data breach.

Eurail is a Netherlands-based company that sells Interrail and Eurail passes for multi-country train travel across Europe, passes that are also available to young Europeans through the EU’s DiscoverEU program.

When it disclosed the incident in February, the company said the attackers gained access to travelers’ sensitive information, including full names, passport details, ID numbers, bank account IBANs, health information, and contact details (email addresses, phone numbers), after breaching its customer database.


Eurail also warned at the time that the threat actors had published a sample of the stolen data on Telegram and were attempting to sell it on the dark web.

“The evidence showed that an unauthorized actor transferred files from our network on December 26, 2025,” the European train travel company said in breach notification letters sent to affected individuals on March 27.


“We reviewed the files involved and, on February 25, 2026, determined that they contained some of your information. The information included your name and passport number.”

The same day, Eurail revealed in a filing with the Office of Oregon’s Attorney General that the resulting data breach impacted 308,777 individuals.

Eurail data breach filing with Oregon’s OAG (BleepingComputer)

While Eurail said that it didn’t store financial information or passport photocopies on the compromised systems, the European Commission warned in a separate alert that this type of data (as well as health information) may have been exposed for young travelers who received a Pass through the DiscoverEU program.

Eurail told customers whose information was exposed in the breach to remain vigilant against potential phishing attacks and scams, and advised them to update their Rail Planner app account passwords and reset them on any other platform where they’re also used.

The company added that customers should monitor their bank account activity and report any suspicious transactions to their bank as soon as possible.


Last month, the European Commission also confirmed a data breach after the Europa.eu web platform was hacked in a cyberattack claimed by the ShinyHunters extortion gang.



After Wi-Fi 7's Speed Push, Wi-Fi 8 Is Turning to Reliability

Wi-Fi 8 is already taking shape, and while it won’t raise peak speeds beyond Wi-Fi 7, it promises something just as important: more reliable, lower-latency wireless performance where it actually matters.



AMD finally puts a price tag on the Ryzen 9 9950X3D2 and it’s hefty

AMD has locked in the price for its Ryzen 9 9950X3D2, and it lands high at $899. This is a flagship desktop chip aimed at people who rely on fast systems every day and don’t want to rebuild everything just to get there.

The processor introduces a dual 3D V-Cache setup, which AMD is using to push both gaming and heavy workloads forward at the same time. It also fits into the current AM5 ecosystem, so users with compatible boards and memory can upgrade without replacing the core of their system.

It goes on sale April 22, though there’s no detail yet on how widely it will be available at launch.

Who this $899 chip is for

At $899, this chip sits well outside the mainstream. AMD is going after creators, developers, and power users who notice slowdowns immediately and are willing to pay to avoid them.


“The world’s first dual 3D V-Cache™ technology desktop processor. AMD Ryzen™ 9 9950X3D2 Dual Edition processor. Available April 22 | $899. Workstation-class performance meets the AM5 platform, no new motherboard or memory required. Built for developers and content creators… pic.twitter.com/rN4ysy45X6”

— David McAfee (@McAfeeDavid_AMD) April 8, 2026

The pricing also signals where this part sits in the stack. It’s a top-tier option in the Ryzen 9000 lineup, built to prioritize sustained performance in demanding scenarios rather than broad affordability.

There’s a clear tradeoff here, where you’re paying more upfront to reduce waiting time during real work.

How dual V-Cache changes things

The dual 3D V-Cache design builds on AMD’s earlier work with stacked cache, but pushes it further. Instead of leaning mostly toward gaming gains, this version is meant to handle a wider mix of tasks without compromise.

Advertisement

That shift is important because earlier X3D chips often felt specialized. This one is positioned as more balanced, giving users who split time between games and production workloads a stronger reason to consider it.

Still, AMD hasn’t shared detailed performance figures in this material, so it’s not yet clear how much improvement shows up across different types of work.

Should you upgrade now

Compatibility is one of the more practical advantages here. The Ryzen 9 9950X3D2 works with existing AM5 motherboards and memory, which makes it a simpler upgrade for current users.

That helps take some pressure off the high asking price, especially if you’re already invested in the platform. Instead of planning a full rebuild, you can focus on swapping the processor and moving on.

With the April 22 release approaching, the decision comes down to whether you need the extra headroom now or can wait for more data.



The AI heat trap: why data centers must rethink thermodynamics

For decades, the data center industry has operated on a relatively predictable model of thermodynamics. Operators built a hall, filled it with servers, and circulated cold air through the floor or aisles.

The heat load remained predominantly stable, electrical loads increased gradually, and cooling systems could be sized with conservative, static margins.


The Fellowship That Taught Me Good Teaching Doesn’t Require Perfection

Becoming a Voices of Change fellow empowered me to believe I could be a teacher with all my flaws — that “perfection” is not necessary. In fact, it is antithetical to good teaching. I remember sitting in our first workshop where we learned how to write a pitch and discussed what successful pitching looks like.

My takeaway from that workshop was that this fellowship was going to push me in ways I’d always been afraid of, that I’d have to practice a kind of vulnerability that went deeper than what I modeled for my students. I’d have to face myself.

The fellowship taught me that what makes me unique is what makes me the best teacher I can be. My individual voice and reflections were what I had to offer, and not just the restatement of well-researched best practices. During my fellowship, I learned that the more vulnerable and specific I was in telling my story as a classroom teacher, the more my voice as a writer would shine through. This sense of authenticity translated into my teaching, as I felt empowered to be myself and to see my differences as gifts.

My essay describing the time when two birds flew into my classroom taught me that play is education, and to this day, I can breathe when things go awry because, through writing that essay, I reaffirmed to myself that it’s okay for curriculum to slow down, for community building to be at the center.


My essay exploring the power of neurodivergence led me to connect with other neurodivergent teachers and reminded me that my experiences are what make me the best teacher I can be. I used to be sad that my brain was built differently, but both the process and the outcome of that essay taught me that being different is a gift to share with others. I was most afraid to write that essay, but now I am most proud of it. I was once again reminded of the power in speaking my truth, especially when I’m most afraid to.

Overall, my essays taught me to pay attention to every moment of teaching, that sometimes the most mundane days of instruction offer kernels of truth and exploration. Topics such as boredom, artificial intelligence and allyship have been explored ad nauseam, but my editor empowered me to see that despite this, I still have a voice worth sharing, even when I didn’t think so.

As a result, I developed a confidence in myself that I carry with me to this day. I became more embodied as a human being, more present, because I realized that what made me me was actually what would allow me to connect more meaningfully with my students and the world. In extending that expansiveness and empathy towards myself, I had more empathy to give my students on their off days and more encouragement to give them on their better days. Ultimately, realizing that the most important stories I had to tell were topics I was too afraid to address publicly made me see that the core of education will always be about courage. Courage to be all of myself, to try new activities outside of and inside the classroom. I had to be ready to share myself to have the biggest impact as a writer. Similarly, I would have to do the same to be the best teacher I could be.

Since completing this fellowship, my identity as a human being has expanded. I now see myself not just as a teacher, but as a writer, a thinker, and an observer who has something to say. I feel more comfortable being me, and even empowered to do so. With each essay, I chipped away at my fears and accepted that the joy was in the process itself. Now, I tell my students something I have had to tell myself repeatedly during this fellowship: trust your voice.


New framework lets AI agents rewrite their own skills without retraining the underlying model

One major challenge in deploying autonomous agents is building systems that can adapt to changes in their environments without the need to retrain the underlying large language models (LLMs).

Memento-Skills, a new framework developed by researchers at multiple universities, addresses this bottleneck by giving agents the ability to develop their skills by themselves. “It adds its continual learning capability to the existing offering in the current market, such as OpenClaw and Claude Code,” Jun Wang, co-author of the paper, told VentureBeat.

Memento-Skills acts as an evolving external memory, allowing the system to progressively improve its capabilities without modifying the underlying model. The framework provides a set of skills that can be updated and expanded as the agent receives feedback from its environment.

For enterprise teams running agents in production, that matters. The alternative — fine-tuning model weights or manually building skills — carries significant operational overhead and data requirements. Memento-Skills sidesteps both.

Advertisement

The challenges of building self-evolving agents

Self-evolving agents are crucial because they overcome the limitations of frozen language models. Once a model is deployed, its parameters remain fixed, restricting it to the knowledge encoded during training and whatever fits in its immediate context window.

Giving the model an external memory scaffolding enables it to improve without the costly and slow process of retraining. However, current approaches to agent adaptation largely rely on manually-designed skills to handle new tasks. While some automatic skill-learning methods exist, they mostly produce text-only guides that amount to prompt optimization. Other approaches simply log single-task trajectories that don’t transfer across different tasks.

Furthermore, when these agents try to retrieve relevant knowledge for a new task, they typically rely on semantic similarity routers, such as standard dense embeddings; high semantic overlap does not guarantee behavioral utility. An agent relying on standard RAG might retrieve a “password reset” script to solve a “refund processing” query simply because the documents share enterprise terminology.

“Most retrieval-augmented generation (RAG) systems rely on similarity-based retrieval. However, when skills are represented as executable artifacts such as markdown documents or code snippets, similarity alone may not select the most effective skill,” Wang said. 


How Memento-Skills stores and updates skills

To solve the limitations of current agentic systems, the researchers built Memento-Skills. The paper describes the system as “a generalist, continually-learnable LLM agent system that functions as an agent-designing agent.” Instead of keeping a passive log of past conversations, Memento-Skills creates a set of skills that act as a persistent, evolving external memory.

Read-Write Reflective Learning (source: arXiv)

These skills are stored as structured markdown files and serve as the agent’s evolving knowledge base. Each reusable skill artifact is composed of three core elements. It contains declarative specifications that outline what the skill is and how it should be used. It includes specialized instructions and prompts that guide the language model’s reasoning. And it houses the executable code and helper scripts that the agent runs to actually solve the task.
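
To illustrate the shape of such an artifact, here is a hypothetical representation of those three elements as a structured record. The field names and example content are assumptions for illustration, not the framework's actual schema (which stores skills as markdown files).

```python
# Hypothetical representation of a reusable skill artifact with the three elements
# described above; field names and contents are illustrative, not Memento-Skills' schema.
from dataclasses import dataclass

@dataclass
class SkillArtifact:
    name: str
    specification: str       # declarative: what the skill is and when to use it
    instructions: str        # prompts that guide the LLM's reasoning
    executable_code: str     # helper script the agent runs to solve the task

web_search_skill = SkillArtifact(
    name="web_search",
    specification="Use when the task requires up-to-date information from the web.",
    instructions="Formulate 2-3 focused queries, then summarize the retrieved pages.",
    executable_code="python scripts/web_search.py --query '{query}'",
)
```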

Memento-Skills achieves continual learning through its “Read-Write Reflective Learning” mechanism, which frames memory updates as active policy iteration rather than passive data logging. When faced with a new task, the agent queries a specialized skill router to retrieve the most behaviorally relevant skill — not just the most semantically similar one — and executes it.


After the agent executes the skill and receives feedback, the system reflects on the outcome to close the learning loop. Rather than just appending a log of what happened, the system actively mutates its memory. If the execution fails, an orchestrator evaluates the trace and rewrites the skill artifacts. This means it directly updates the code or prompts to patch the specific failure mode. In case of need, it creates an entirely new skill.
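
A self-contained toy sketch of that retrieve-execute-reflect loop, with the router, execution, and rewriting all stubbed out; the real framework performs these steps with LLM calls and executable skill code.

```python
# Conceptual sketch of the read-write reflective loop: retrieve a skill, execute it,
# and on failure rewrite the artifact or add a new one. Everything here is a stub.
class Skill:
    def __init__(self, name, code):
        self.name, self.code = name, code

    def execute(self, task):
        # Placeholder execution: "succeed" only if the skill name appears in the task text.
        return {"success": self.name in task, "trace": f"ran {self.name} on {task!r}"}

def reflective_step(task, library, select_skill, rewrite_skill):
    skill = select_skill(task, library)        # router picks a behaviorally relevant skill
    result = skill.execute(task)               # run the skill against the task
    if not result["success"]:
        # Reflection: rewrite the failing artifact (or add a new one) back into memory.
        library.append(rewrite_skill(skill, result["trace"], task))
    return result

library = [Skill("web_search", "scripts/web_search.py")]
select = lambda task, lib: lib[0]               # stand-in for the learned skill router
rewrite = lambda skill, trace, task: Skill(task.split()[0], f"patched for: {task}")
reflective_step("terminal_ops list files in /tmp", library, select, rewrite)
print([s.name for s in library])                # the mutated library now holds a new skill
```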

Memento-Skills also updates the skill router through a one-step offline reinforcement learning process that learns from execution feedback rather than just text overlap. “The true value of a skill lies in how it contributes to the overall agentic workflow and downstream execution,”  Wang said. “Therefore, reinforcement learning provides a more suitable framework, as it enables the agent to evaluate and select skills based on long-term utility.”

Memento-Skills framework (source: arXiv)

To prevent regression in a production environment, the automated skill mutations are guarded by an automatic unit-test gate. The system generates a synthetic test case, executes it through the updated skill, and checks the results before saving the changes to the global library.
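
In outline, the gate is a simple check between mutation and persistence. The sketch below stubs out test generation and execution, which the actual system performs against the updated skill's code.

```python
# Minimal sketch of the unit-test gate described above: a mutated skill is saved to
# the global library only if a generated test case passes. Names are illustrative.
def passes_gate(updated_skill, generate_test, run_test) -> bool:
    test_case = generate_test(updated_skill)     # synthetic test for the updated skill
    return run_test(updated_skill, test_case)    # execute it and check the result

def commit_if_safe(updated_skill, library, generate_test, run_test) -> bool:
    if passes_gate(updated_skill, generate_test, run_test):
        library[updated_skill["name"]] = updated_skill   # persist to the global library
        return True
    return False   # keep the previous version; the mutation is discarded

library = {}
skill = {"name": "csv_parser", "code": "parse(path)"}
ok = commit_if_safe(skill, library,
                    generate_test=lambda s: {"input": "a,b\n1,2"},
                    run_test=lambda s, t: True)   # stubbed: pretend the test passed
print(ok, list(library))
```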


By continuously rewriting and refining its own executable tools, Memento-Skills enables a frozen language model to build robust muscle memory and progressively expand its capabilities end-to-end.

Putting the self-evolving agent to the test

The researchers evaluated Memento-Skills on two rigorous benchmarks. The first is General AI Assistants (GAIA), which requires complex multi-step reasoning, multi-modality handling, web browsing, and tool use. The second is Humanity’s Last Exam, or HLE, an expert-level benchmark spanning eight diverse academic subjects like mathematics and biology. The entire system was powered by Gemini-3.1-Flash acting as the underlying frozen language model.

The system was compared against a Read-Write baseline that retrieves skills and collects feedback but doesn’t have self-evolving features. The researchers also tested their custom skill router against standard semantic retrieval baselines, including BM25 and Qwen3 embeddings.

Performance on the GAIA benchmark: Memento-Skills vs Read-Write baseline (source: arXiv)


The results proved that actively self-evolving memory vastly outperforms a static skill library. On the highly diverse GAIA benchmark, Memento-Skills improved test set accuracy by 13.7 percentage points over the static baseline, achieving 66.0% compared to 52.3%. On the HLE benchmark, where the domain structure allowed for massive cross-task skill reuse, the system more than doubled the baseline’s performance, jumping from 17.9% to 38.7%.

Moreover, the specialized skill router of Memento-Skills avoids the classic retrieval trap where an irrelevant skill is selected simply because of semantic similarity. Experiments show that Memento-Skills boosts end-to-end task success rates to 80%, compared to just 50% for standard BM25 retrieval.

The researchers observed that Memento-Skills manages this performance through highly organic, structured skill growth. Both benchmark experiments started with just five atomic seed skills, such as basic web search and terminal operations. On the GAIA benchmark, the agent autonomously expanded this seed group into a compact library of 41 skills to handle the diverse tasks. On the expert-level HLE benchmark, the system dynamically scaled its library to 235 distinct skills. 

Memento-Skills starts with a seed of skills (stars) and develops more skills (circles) as it solves tasks (source: arXiv)


Finding the enterprise sweet spot

The researchers have released the code for Memento-Skills on GitHub, and it is readily available for use.

For enterprise architects, the effectiveness of this system depends on domain alignment. Instead of simply looking at benchmark scores, the core business tradeoff lies in whether your agents are handling isolated tasks or structured workflows.

“Skill transfer depends on the degree of similarity between tasks,” Wang said. “First, when tasks are isolated or weakly related, the agent cannot rely on prior experience and must learn through interaction.” In such scattershot environments, cross-task transfer is limited. “Second, when tasks share substantial structure, previously acquired skills can be directly reused. Here, learning becomes more efficient because knowledge transfers across tasks, allowing the agent to perform well on new problems with little or no additional interaction.”

Given that the system requires recurring task patterns to consolidate knowledge, enterprise leaders need to know exactly where to deploy this today and where to hold off.


“Workflows are likely the most appropriate setting for this approach, as they provide a structured environment in which skills can be composed, evaluated, and improved,” Wang said.

However, he cautioned against over-deployment in areas not yet suited for the framework. “Physical agents remain largely unexplored in this context and require further investigation. In addition, tasks with longer horizons may demand more advanced approaches, such as multi-agent LLM systems, to enable coordination, planning, and sustained execution over extended sequences of decisions.”

As the industry moves toward agents that autonomously rewrite their own production code, governance and security remain paramount. While Memento-Skills employs foundational safety rails like automatic unit-test gates, a broader framework will likely be needed for enterprise adoption.

“To enable reliable self-improvement, we need a well-designed evaluation or judge system that can assess performance and provide consistent guidance,” Wang said. “Rather than allowing unconstrained self-modification, the process should be structured as a guided form of self-development, where feedback steers the agent toward better designs.”
