
Tech

Military Drone Insights for Safer Self-Driving Cars


Self-driving cars often struggle with situations that are commonplace for human drivers. When confronted with construction zones, school buses, power outages, or misbehaving pedestrians, these vehicles often behave unpredictably, leading to crashes or freezing events that disrupt local traffic and can block first responders from doing their jobs. Because self-driving cars cannot reliably handle such routine problems, self-driving companies use human babysitters to remotely supervise them and intervene when necessary.

This idea—humans supervising autonomous vehicles from a distance—is not new. The U.S. military has been doing it since the 1980s with unmanned aerial vehicles (UAVs). In those early years, the military experienced numerous accidents due to poorly designed control stations, lack of training, and communication delays.

As a Navy fighter pilot in the 1990s, I was one of the first researchers to examine how to improve UAV remote-supervision interfaces. The thousands of hours I and others have spent working on and observing these systems generated a deep body of knowledge about how to safely manage remote operations. With recent revelations that U.S. commercial self-driving car remote operations are handled by operators in the Philippines, it is clear that self-driving companies have not learned the hard-earned military lessons that would make self-driving cars safer today.

While stationed in the Western Pacific during the Gulf War, I spent a significant amount of time in air operations centers, learning how military strikes were planned, implemented, and then replanned when the original plan inevitably fell apart. After obtaining my PhD, I leveraged this experience to begin research on the remote control of UAVs for all three branches of the U.S. military. Sitting shoulder-to-shoulder in tiny trailers with operators flying UAVs in local exercises or from 4,000 miles away, my job was to learn about the pain points for the remote operators and identify possible improvements as they exercised supervisory control over UAVs that might be flying halfway around the world.

Supervisory control refers to situations where humans monitor and support autonomous systems, stepping in when needed. For self-driving cars, this oversight can take several forms. The first is teleoperation, where a human remotely controls the car’s speed and steering from afar. Operators sit at a console with a steering wheel and pedals, similar to a racing simulator. Because this method relies on real-time control, it is extremely sensitive to communication delays.

The second form of supervisory control is remote assistance. Instead of driving the car in real time, a human gives higher-level guidance. For example, an operator might click a path on a map (called laying “breadcrumbs”) to show the car where to go, or interpret information the AI cannot understand, such as hand signals from a construction worker. This method tolerates more delay than teleoperation but is still time-sensitive.

Five Lessons From Military Drone Operations

Over 35 years of UAV operations, the military consistently encountered five major challenges that provide valuable lessons for self-driving cars.

Latency

Latency—delays in sending and receiving information due to distance or poor network quality—is the single most important challenge for remote vehicle control. Humans also have their own built-in delay: neuromuscular lag. Even under perfect conditions, people cannot reliably respond to new information in less than 200–500 milliseconds. In remote operations, where communication lag already exists, this makes real-time control even more difficult.
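To make these numbers concrete, here is a minimal sketch of how far a vehicle travels before a remote operator's correction can take effect. It uses the 200–500 millisecond human-response range above; the network-lag figure is an illustrative assumption, since real delays vary widely.

```python
MPS_PER_MPH = 0.44704  # meters per second, per mile per hour

def blind_distance_m(speed_mph: float, human_lag_s: float, network_lag_s: float) -> float:
    """Distance (meters) a vehicle covers during the combined human
    neuromuscular delay and round-trip network latency."""
    total_lag_s = human_lag_s + network_lag_s
    return speed_mph * MPS_PER_MPH * total_lag_s

# At 30 mph, a 0.5 s human response plus an assumed 0.2 s of network
# lag means the car travels roughly 9.4 meters before a command lands.
print(round(blind_distance_m(30, 0.5, 0.2), 1))
```

With the two-second satellite delays early UAV pilots faced, the same car would travel more than 30 meters, which helps explain why teleoperation under lag proved so accident-prone.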

In early drone operations, U.S. Air Force pilots in Las Vegas (the primary U.S. UAV operations center) attempted to take off and land drones in the Middle East using teleoperation. With at least a two-second delay between command and response, the accident rate was 16 times that of fighter jets conducting the same missions. The military switched to local line-of-sight operators and eventually to fully automated takeoffs and landings. When I interviewed the pilots of these UAVs, they all stressed how difficult it was to control the aircraft with significant time lag.

Self-driving car companies typically rely on cellphone networks to deliver commands. These networks are unreliable in cities and prone to delays. This is one reason many companies prefer remote assistance instead of full teleoperation. But even remote assistance can go wrong. In one incident, a Waymo operator instructed a car to turn left when a traffic light appeared yellow in the remote video feed—but the network latency meant that the light had already turned red in the real world. After moving its remote operations center from the U.S. to the Philippines, Waymo’s latency increased even further. It is imperative that control not be so remote, both to resolve the latency issue and to increase oversight of security vulnerabilities.

Workstation Design

Poor interface design has caused many drone accidents. The military learned the hard way that confusing controls, difficult-to-read displays, and unclear autonomy modes can have disastrous consequences. Depending on the specific UAV platform, the FAA attributed between 20% and 100% of Army and Air Force UAV crashes caused by human error through 2004 to poor interface design.

UAV crashes (1986-2004) caused by human factors problems, including poor interface and procedure design. These two categories do not sum to 100% because both factors could be present in an accident.

Platform                 Human Factors   Interface Design   Procedure Design
Army Hunter              47%             20%                20%
Army Shadow              21%             80%                40%
Air Force Predator       67%             38%                75%
Air Force Global Hawk    33%             100%               0%

Many UAV crashes have been caused by poor human control systems. In one case, buttons were placed on the controllers such that it was relatively easy to accidentally shut off the engine instead of firing a missile, a design flaw that led to accidents in which remote operators inadvertently shut the engine down rather than launching a missile.

The self-driving industry reveals hints of comparable issues. Some autonomous shuttles use off-the-shelf gaming controllers, which—while inexpensive—were never designed for vehicle control. The off-label use of such controllers can lead to mode confusion, which was a factor in a recent shuttle crash. Significant human-in-the-loop testing is needed to avoid such problems, not only prior to system deployment, but also after major software upgrades.

Operator Workload

Drone missions typically include long periods of surveillance and information gathering, occasionally ending with a missile strike. These missions can sometimes last for days, for example while the military waits for a person of interest to emerge from a building. As a result, the remote operators experience extreme swings in workload: sometimes overwhelming intensity, sometimes crushing boredom. Both conditions can lead to errors.

When operators teleoperate drones, workload is high and fatigue can quickly set in. But when onboard autonomy handles most of the work, operators can become bored, complacent, and less alert. This pattern is well documented in UAV research.

Self-driving car operators are likely experiencing similar issues for tasks ranging from interpreting confusing signs to helping cars escape dead ends. In simple scenarios, operators may be bored; in emergencies—like driving into a flood zone or responding during a citywide power outage—they can quickly become overwhelmed.

The military has tried for years to have one person supervise many drones at once, because it is far more cost-effective. However, cognitive switching costs (regaining awareness of a situation after switching control between drones) result in workload spikes and high stress. That, coupled with increasingly complex interfaces and communication delays, has made this extremely difficult.

Self-driving car companies likely face the same roadblocks. They will need to model operator workloads and be able to reliably predict what staffing should be and how many vehicles a single person can effectively supervise, especially during emergency operations. If every self-driving car turns out to need a dedicated human to pay close attention, such operations would no longer be cost-effective.
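As a rough sketch of what such workload modeling might look like (with entirely assumed numbers, since companies do not publish intervention rates), a simple utilization bound estimates how many vehicles one operator can supervise:

```python
def vehicles_per_operator(interventions_per_hour: float,
                          minutes_per_intervention: float,
                          target_utilization: float = 0.7) -> float:
    """Upper bound on fleet size per operator: keep the operator busy
    no more than `target_utilization` of the time, leaving spare
    capacity for workload spikes."""
    # Fraction of each hour that one vehicle keeps an operator busy.
    busy_fraction = interventions_per_hour * minutes_per_intervention / 60.0
    return target_utilization / busy_fraction

# Hypothetical routine load: 2 interventions/hour at 1.5 minutes each
# supports roughly 14 vehicles per operator.
print(round(vehicles_per_operator(2, 1.5)))
```

The same math shows why emergencies are the binding constraint: if the intervention rate jumps to 10 per hour, the bound drops to about 3 vehicles per operator, so staffing sized for routine conditions will be overwhelmed exactly when it matters most.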

Training

Early drone programs lacked formal training requirements, with training programs designed by pilots, for pilots. Unfortunately, supervising a drone is more akin to air traffic control than actually flying an aircraft, so the military often placed drone operators in critical roles with inadequate preparation. This caused many accidents. Only years later did the military conduct a proper analysis of the knowledge, skills, and abilities needed for safe remote operations and change its training programs accordingly.

Self-driving companies do not publicly share their training standards, and no regulations currently govern the qualifications of remote operators. On-road safety depends heavily on these operators, yet very little is known about how they are selected or taught. Commercial aviation dispatchers, whose roles are very similar to those of self-driving remote operators, are required to have formal training overseen by the FAA; we should hold commercial self-driving companies to similar standards.

Contingency Planning

Aviation has strong protocols for emergencies including predefined procedures for lost communication, backup ground control stations, and highly reliable onboard behaviors when autonomy fails. In the military, drones may fly themselves to safe areas or land autonomously if contact is lost. Systems are designed with cybersecurity threats—like GPS spoofing—in mind.

Self-driving cars appear far less prepared. The 2025 San Francisco power outage left Waymo vehicles frozen in traffic lanes, blocking first responders and creating hazards. These vehicles are supposed to perform “minimum-risk maneuvers” such as pulling to the side—but many of them didn’t. This suggests gaps in contingency planning and basic fail-safe design.

The history of military drone operations offers crucial lessons for the self-driving car industry. Decades of experience show that remote supervision demands extremely low latency, carefully designed control stations, manageable operator workload, rigorous, well-designed training programs, and strong contingency planning.

Self-driving companies appear to be repeating many of the early mistakes made in drone programs. Remote operations are treated as a support feature rather than a mission-critical safety system. But as long as AI struggles with uncertainty, which will be the case for the foreseeable future, remote human supervision will remain essential. The military learned these lessons through painful trial and error, yet the self-driving community appears to be ignoring them. The self-driving industry has the chance—and the responsibility—to learn from our mistakes in combat settings before it harms road users everywhere.


Tech

OpenAI is shutting down Sora, its powerful AI video model, app and API


OpenAI is shuttering Sora, its stand-alone AI video generation app and social network, and ending developers’ ability to access the Sora 2 video model family through its application programming interface (API) for use in their own products and video generation pipelines.

The announcement came abruptly this afternoon, with OpenAI posting a message on X without giving an exact shutdown date for the services, instead promising “timelines for the app and API and details on preserving your work.”

Sora wowed the world with its highly realistic scene-crafting when OpenAI first previewed it in February 2024, more than two years ago now, only for it to be released to mixed reception as an updated Sora Turbo model 10 months later. By that point, many other competing video AI model providers, such as Runway, Luma, and Chinese AI companies Kling and Minimax, had already shipped impressive rivals.

But OpenAI seemed intent on continuing to build out the video model and enable creators until just now, releasing a Sora 2 model over API and apps for iOS, then Android, in the latter half of 2025 — and the iOS app briefly hit number one in downloads on the Apple App Store. The application itself was designed as a social network where users could insert AI-generated lifelike versions of themselves and their friends into videos.

The company released updates to Sora on a regular cadence all the way through this week, making the news of the shutdown even more abrupt.

And Sora was even so enticing for a while that entertainment giant Disney pledged $1 billion in an equity investment deal with OpenAI, announced in December 2025, just four months ago, to bring popular Disney characters to Sora, allowing users to generate new videos with them that Disney planned to share through Disney+, its streaming TV service. As that announcement read:

“Under the license, fans will be able to watch curated selections of Sora-generated videos on Disney+, and OpenAI and Disney will collaborate to utilize OpenAI’s models to power new experiences for Disney + subscribers, furthering innovative and creative ways to connect with Disney’s stories and characters. Sora and ChatGPT Images are expected to start generating fan-inspired videos with Disney’s multi-brand licensed characters in early 2026”

The Hollywood Reporter states the Disney deal has now been canceled. We’ve reached out to OpenAI for more information and will update if and when we receive it.

The move comes as OpenAI has openly stated its intent to focus on building a “super app” that would fold in some or all of the capabilities of its various products including chatbot ChatGPT, AI coding model and application Codex, and others into one interface. Reports from The Wall Street Journal and other outlets suggest the strategy shift is an effort at refocusing the entire company to take on the rise of competitor Anthropic and its Claude family, especially with regards to enterprises and software developers — as Claude has seen rapid adoption and enterprise usage in the last few months, according to data from SimilarWeb and Ramp, driven by its prowess at coding and autonomously completing digital tasks.

The news of Sora’s demise came amid a restructuring of OpenAI’s leadership and non-profit Foundation arm and a promise for the latter to invest $1 billion “across life sciences and curing diseases, jobs and economic impact, AI resilience, and community programs,” suggesting a shift in focus away from AI-generated content and media.


Tech

How to Use Apple’s Live Translation on Your AirPods


At the same time Apple announced the AirPods Pro 3 last year, the company also introduced a new feature called Live Translation. It makes the idea of the Babel fish, so evocatively described in The Hitchhiker’s Guide to the Galaxy, a reality: Something sitting in your ear that can instantly translate between languages, on demand.

Rather than an exotic fish, though, here we have wireless headphones from Apple. This isn’t actually exclusive to the latest earbuds, despite the launch timing—it’ll work with the AirPods 4 with active noise cancellation and the AirPods Pro 2, as well as the AirPods Pro 3. It also works with the recently updated AirPods Max, but just the 2026 version, not the original version of the Max.

There are some requirements on the iPhone side too: you need an iPhone 15 Pro or iPhone 15 Pro Max, or any iPhone 16 or iPhone 17 model. You also need to have downloaded and installed the latest iOS 26 software update on your phone.

With those prerequisites out of the way, you can get your very own instant language interpreter in your ears. From FaceTime video calls to foreign trips, it’s a feature that can be hugely useful in understanding others and making yourself understood.

Set Up Live Translation

Download the languages you need in advance. (Image: Courtesy of Apple)

This works via Apple Intelligence, so as well as having checked all the requirements that are mentioned above, you also need to turn on your iPhone’s AI capabilities if they’re not enabled already. From Settings in iOS, tap Apple Intelligence & Siri, and make sure the Apple Intelligence toggle is switched on.

Next, you need to specify the languages you want to work with. You need both the languages you’re translating from and the languages you’re translating to, so make sure you’re fully covered before you try to strike up a conversation.

We’re assuming you’ve already been through the AirPods setup process and they’re connected to your iPhone. Pop them in (or on) your ears and find the AirPods in the iOS Settings menu. Tap on the name you’ve given the headphones, then Languages to download whichever ones you need.

Apple says that all the processing required for Live Translation is handled privately on your phone. Your conversations aren’t being piped back to Apple’s servers so they can be translated between languages. (It’s one of the reasons you need a newer iPhone and newer headphones: so the necessary work can be done on-device.)

Use Live Translation

You can launch the feature from the Live tab of the Translate app. (Image: Courtesy of Apple)

With all the setup out of the way, you can get started with a Live Translation in a few ways. You can head to the Apple Translate app on your iPhone, tap the Live button at the bottom, choose the relevant languages, and then select Start Conversation.


Tech

Microsoft exec Charles Lamanna on how AI is creating an expensive new request from job candidates


Work perks are taking on new meaning in the AI boom.

Speaking at GeekWire’s Agents of Transformation event in Seattle on Tuesday, Microsoft EVP Charles Lamanna talked about a job candidate who said they would come aboard as long as their team was given a certain dollar amount of AI tokens — the fuel that powers interactions with AI systems.

Lamanna didn’t reveal the exact dollar amount request, but said “you should think of $100 to hundreds of dollars of token cost per day, at the limit.”

The anecdote reflects how access to AI models is becoming as fundamental as salary — and how quickly AI is moving from experimentation to a core part of day-to-day work.

If a “fully loaded” (total cost of an employee to a company) engineer costs $500,000 a year and the employee asks for $100,000 worth of tokens — which makes them three times as efficient — Lamanna said it’s a great deal for everyone involved.
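Lamanna’s arithmetic can be sketched directly, using the figures from his example (the 3x productivity multiplier is his stated assumption, not a measured value):

```python
def effective_cost(fully_loaded_salary: float,
                   token_budget: float,
                   productivity_multiplier: float) -> float:
    """Cost per baseline engineer's worth of output:
    total spend divided by the productivity multiple."""
    return (fully_loaded_salary + token_budget) / productivity_multiplier

baseline = effective_cost(500_000, 0, 1.0)           # engineer alone
with_tokens = effective_cost(500_000, 100_000, 3.0)  # Lamanna's scenario
print(baseline, with_tokens)
```

On these numbers, spending $100,000 on tokens cuts the effective cost of a unit of engineering output from $500,000 to $200,000, which is why Lamanna calls it a great deal for everyone involved.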

He compared denying engineers sufficient AI resources to stripping away basic workplace tools. Imagine showing up to work with no mouse, no email, no Microsoft Teams — that’s how an engineer accustomed to AI-powered coding agents would feel working under a tight token budget, he said.

“So how you think about what it means to hire, and fully loaded cost, and where we invest is going to change completely as a result of this,” said Lamanna, Microsoft’s executive vice president of Business Applications & Agents. He sees this happening beyond software engineering — to multiple other forms of office and information work, such as financial planning.

“They’ll be like, I’m not going to work there unless I actually get a certain amount of token budget,” he said.

Lamanna isn’t alone in seeing this shift. Nvidia CEO Jensen Huang last week said AI tokens would become “one of the recruiting tools in Silicon Valley,” CNBC reported. In a blog post last month, venture capitalist Tomasz Tunguz described inference costs as a potential fourth pillar of engineer compensation alongside salary, bonuses, and equity. “Will you be paid in tokens?” Tunguz wrote. “In 2026, you likely will start to be.”

The New York Times last week reported on how employees at tech companies are competing on internal leaderboards that track token consumption, creating a new status game called “tokenmaxxing.”


Tech

Amazon Dropped DeWalt 20V Battery Prices Up To 70% Ahead Of The Big Spring Sale

We may receive a commission on purchases made from links.

Big news for anyone who swears by DeWalt tools — Amazon is now offering the DeWalt 20V Max 2.0 Ah battery for just $36, a 73% discount. Originally $131, this rechargeable battery works with DeWalt’s range of 20V Max tools, which includes garage essentials like cordless drill/drivers, wrenches, grinders, nailers, and flashlights. 

The DeWalt 20V Max battery is great to grab, even if you already have a few handy. At under one pound, it’s a lightweight, small battery that can easily fit into your tool bag as a backup. DeWalt claims that it charges fully in 35 minutes and has a 33% longer battery life than other batteries of its size. Amazon reviews are generally quite positive, with many satisfied customers. If you’ve been thinking of getting one (or more) for your DeWalt tools, this seems to be the perfect time to do so. This massive discount comes just ahead of Amazon’s Spring Sale 2026 event, which officially kicks off on March 25.

When is Amazon’s 2026 Big Spring Sale?

Amazon’s first big savings event of the year starts very soon. The online retailer’s Big Spring Sale goes live on March 25 and runs until March 31, giving you seven days to take advantage of some big discounts. Unlike Prime Day and Black Friday, the Big Spring Sale is focused more on cleaning items, household essentials, and clothing than on tech, which coincides with the “spring cleaning” theme of the season.

You don’t need a Prime membership to access the discounts, but some items will have a “Prime Spring Deal” badge that indicates exclusive savings. Despite the sale not having officially started yet, Amazon is already offering some deals on DeWalt products. This includes not just this DeWalt 20V Max 2.0 Ah battery, but also a 20V Max combo kit that’s nearly 50% off and a DeWalt Atomic 20V Max cordless impact driver that’s 27% off — all of which are compatible with this battery, by the way.

DeWalt has a history of offering discounts on tools during Amazon’s Big Spring Sale, and this year should be the same. So, if you’re looking for DeWalt tools to add to your kit (or need batteries for those tools), it looks like a great time to buy.


Tech

Hong Kong Police Can Demand Passwords Under New National Security Rules


An anonymous reader quotes a report from the BBC: Hong Kong police can now demand phone or computer passwords from those who are suspected of breaching the wide-ranging National Security Law (NSL). Those who refuse could face up to a year in jail and a fine of up to $12,700, and individuals who provide “false or misleading information” could face up to three years in jail. It comes as part of new amendments to a bylaw under the NSL that the government gazetted on Monday.

The NSL was introduced in Hong Kong in 2020, in the wake of massive pro-democracy protests the year before. Authorities say the laws, which target acts like terrorism and secession, are necessary for stability — but critics say they are tools to quash dissent. The new amendments also give customs officials the power to seize items that they deem to “have seditious intention.”

Monday’s amendments ensure that “activities endangering national security can be effectively prevented, suppressed and punished, and at the same time the lawful rights and interests of individuals and organizations are adequately protected,” Hong Kong authorities said on Monday. Changes to the bylaw were announced by the city’s leader, John Lee, bypassing the city’s legislative council. The NSL also allows for some trials to be heard behind closed doors.


Tech

Testing Expensive Graphene-Reinforced Nylon Filament


Although nylon (generally PA6) filament is usually pretty cheap, there are some more exotic variants out there, such as the PA12-based Lyten 3D graphene filament that comes in at a cool $150 for a 1 kg spool. Worse for [Dr. Igor Gaspar] was that the company doesn’t ship to the EU, and didn’t respond to emails about obtaining a sample for testing. Fortunately he got a spool via a different route, so that he could test whether this is the strongest nylon filament or not.

The full name for this filament is PA1205, though it’s not certain what the ’05’ part stands for. PA12 is a less moisture-sensitive version of PA6, however. Among the manufacturer’s claims are that it’s the strongest nylon filament, as well as very lightweight and heat-resistant. Interestingly, the datasheet recommends printing with a 0.6 mm nozzle, which is the only major deviation from typical nylon FDM filaments. Of course, printing with a 0.4 mm nozzle had to be tried.

With a standard PA-CF preset in Bambu Lab’s slicer, the printing of test parts worked without issues, which was promising. In load testing the filament made a good showing compared to average PA filaments, though as with most fiber-reinforced filaments it’s more brittle than the pure material. It was much less brittle than PA-CF, however. Overall it’s not a bad filament, but it’s a tough sell at the asking price.


Tech

National Survey of Parents Identifies Barriers to Family Well-Being


A new survey shows households with children under age 18 are experiencing economic strain, with parents suffering from depression, burnout and hopelessness.

Capita launched the new national survey, Quarterly Insights from American Families, in partnership with YouGov. The survey will be conducted quarterly.

“This is the baseline,” said Elliot Haspel, a senior fellow with Capita. “We really want to be able to ask questions that serve as an early warning system for family well-being.”

Haspel said what stood out to him from the survey is “how much parents are facing precarity right now… I think that it tells us that families are really struggling and they really need support.”

The questions

YouGov, on behalf of Capita, surveyed 1,000 parents with children under age 18 between Feb. 2 and Feb. 16, 2026. North Carolina is one of four states that were oversampled in the survey, meaning the results are especially representative of the conditions facing parents in those states. (The others are Colorado, Michigan and New Jersey.)

The survey consists of 69 questions (available here) designed to track families across three dimensions: stability, predictability, and quality of life. Capita defines the question underlying each dimension:

  • Stability: Can families meet basic needs without falling into crisis?
  • Predictability: Can they plan their lives without constant disruption?
  • Quality of life: Do they have the time, health, and connection to flourish, not just survive?

Haspel explained that this survey is meant to fill the gap between surveys such as RAPID, which focuses on parents and caregivers of young children, and surveys of all Americans more broadly.

He said two-thirds of the survey questions will remain the same each time, and another third will shift based on Capita’s specific areas of interest at a given moment.

Haspel pointed out that for all Americans, life can be stressful, and parenting in particular will always come with its own stressors.

“The issue is, what are the artificial, unnecessary stressors that we put on families as a result of policy choices?” Haspel said.

The answers

One of the main findings from the survey revolves around the economic pressure that families are facing. As the Capita report puts it: “Multiple indicators point to significant and widespread financial stress.”

Here are some of those indicators:

  • More than a third were worried at some point in the last year that food would run out before they had money to buy more — and almost as many actually had that happen.
  • One in 5 reported skipping out on needed medical care due to costs in the last year, and 15 percent skipped filling a prescription for the same reason.
  • In the last three months, 20 percent of households reported a member losing a job or having their hours cut.
  • In the last month, 25 percent of respondents said they had a shift canceled, shortened, or extended with less than 24 hours’ notice. The same percentage were required to be “on call” — available without guaranteed hours — during that period.

Financial stress can be a leading driver of “toxic stress.” This compounding, long-term stress can do permanent damage to the health of parents and the development of children — and can sometimes lead to adverse childhood experiences.

Evidence shows that safe, stable and nurturing relationships with adults can protect children from the negative outcomes of adverse childhood experiences and toxic stress. But the survey suggests most parents are struggling to maintain that kind of relationship with their children.

Two-thirds of respondents said that in the last month, stress made it hard to be as patient with their children as they wanted to be. And half of parents reported feeling down, depressed or hopeless in the last two weeks.

There are several questions in the survey that pertain specifically to work and child care. Here are some related findings:

  • More than 70 percent of respondents describe their job as family friendly.
  • Almost two-thirds said family life is a top priority, and they want their job to fit around it.
  • In the last year, 27 percent of respondents missed work or lost pay because of child care problems.
  • One in 5 parents regularly supervise their children while working.

Despite the challenges presented by scheduling, about 70 percent of parents report being satisfied with their existing child care situation, whether they have children who are school age or below. And 81 percent said their communities are welcoming to families with minor children.

But 43 percent said their work schedules made it hard to keep consistent routines for their children, and that matters.

“That lack of control over one’s schedule contributes to lack of control over one’s life more broadly, and it can affect parenting relationships,” Haspel said.

As the Capita report explains:

Volatile schedules make it hard for people to be the kind of parents they want to be. They may have to forego baseball games or dance recitals they planned to attend, skip sitting down to dinner as a family, or miss tucking their kids into bed. Instability also has a significant impact on child development. Consistent routines are the foundation for children’s growth, learning, and feelings of security. Chronically disrupting those routines not only stresses parents but also interferes with their children’s long-term trajectory. Inconsistent or nonstandard parent work schedules are associated with cognitive delays and behavioral outcomes, especially if they begin during a child’s first year of life.

“Job quality or schedule quality is often thought of as labor policy, it’s not thought of as a family policy,” Haspel said. “If you care about having strong, healthy families, this is a contributing factor.”

The meaning

While this first set of survey results represents the baseline of what Capita plans to measure over time, there are still significant takeaways from this early warning system.

“A lot of what we’ve been hearing around the issues with affordability, the issues with being able to navigate all the extra challenges of parenting in 2020s America is showing up in family well-being,” Haspel said.

Here’s what Capita has to say about the initial survey results:

This first survey of Quarterly Insights paints a troubling picture of families feeling economic strain and suffering from depression, burnout and hopelessness. These conditions reinforce one another, making it harder for parents to show up for their children, their partners, and themselves, maintain routines and flourish. Ultimately, all of these factors make stability feel perpetually out of reach. While the heaviest burdens often land on those earning the least, working-class and middle-class families also feel the enormous weight of these compounding pressures.

The report goes on to point out that policies supporting the well-being of children and families are most likely to succeed if they address multiple aspects of family hardship and reach all families who are affected.

Arm Is Now Making Its Own Chips

Arm, one of the world’s leading chip design firms, announced Tuesday that it is producing its own semiconductors. The move is a departure from its long-standing model of licensing intellectual property to companies that manufacture and sell chips themselves. Speaking to a live audience in San Francisco, Arm CEO Rene Haas made his pitch for how the new Arm CPU could benefit the tech industry and why this is the right time for the company to step outside of its lane and go head-to-head with other chipmakers.

“Let me be clear: We are now in a new business for ARM, and we are supplying CPUs,” Haas said, holding up one of the company’s new chips. Arm’s primary reason for moving in this direction, Haas said, is demand from customers. But as artificial intelligence proliferates throughout the economy and demand for computing resources skyrockets, Arm is also trying to capture a sliver of the growing AI CPU market.

Arm’s in-house chip efforts had long been rumored, but now the company is finally offering a clearer picture of what it’s doing. The new chip is called the Arm AGI CPU, a nod to artificial general intelligence, an often-invoked but still hypothetical form of AI that could match human performance across domains. It’s designed to be coupled with other chips in high-performance servers inside data centers and to handle agentic AI tasks. The chip is being fabricated by Taiwan Semiconductor Manufacturing Company, the world’s leading semiconductor foundry, using TSMC’s 3nm process.

At the chip reveal event, Arm executives emphasized the company’s history of designing energy-efficient chips and claimed that its new AGI CPU will be the world’s “most efficient agentic CPU on the market.” Compared to competitors like the latest x86 chips made by Intel and AMD, Arm says this chip will deliver better performance per watt (that is, more computing work for each watt of power drawn) and could save customers billions of dollars in electricity spending.

The first major customer of Arm’s new chip is Meta, which the company says has received samples of the CPU. OpenAI, SAP, Cerebras, and Cloudflare, as well as the Korean tech firms SK Telecom and Rebellions, have also agreed to buy the chip. Arm projects its AGI CPU will reach “full production availability” in the second half of this year.

Santosh Janardhan, Meta’s head of infrastructure, appeared on stage and said he thought the Arm chip was going to “expand the [chip] industry on multiple axes.” As Meta pushes toward “personal superintelligence”—AI that will make its apps deeply personalized—Janardhan said the company needs more silicon, and is especially interested in power efficiency.

OpenAI’s vice president of science and former chief product officer, Kevin Weil, also showed up on stage alongside Haas. “One of the most common things I hear inside of OpenAI: ‘I need more compute,’” Weil said. “It’s kind of the coin of the realm.”

Nvidia CEO Jensen Huang, Amazon senior vice president and distinguished engineer James Hamilton, and Google AI infrastructure chief Amin Vahdat appeared in pretaped video testimonials praising Arm’s new hardware. None committed to buying it, but all three tech giants already use Arm’s designs in their own processors.

Arm’s history traces back to the late 1970s, when it was known as Acorn and produced microprocessors. In the 1990s the company changed its name to ARM (Advanced RISC Machines), and its then-CEO began licensing the firm’s chip designs to other companies. Arm, which has since dropped the all-caps “ARM” branding, saw its business boom during the mobile revolution. By the 2010s many of the world’s largest tech companies, including Apple, Nvidia, Microsoft, Amazon, Samsung, and Tesla, were relying on its technology.

Wine 11 Rewrites How Linux Runs Windows Games At the Kernel Level

Linux gamers are seeing massive performance gains with Wine’s new NTSYNC support, “which is a feature that has been years in the making and rewrites how Wine handles one of the most performance-sensitive operations in modern gaming,” reports XDA Developers. Not every game will see a night-and-day difference, but for the games that do benefit from these changes, “the improvements range from noticeable to absurd.” Combined with improvements to Wayland, graphics, and compatibility, as well as a major WoW64 architecture overhaul, the release looks less like an incremental update and more like one of Wine’s most important upgrades in years. From the report: The numbers are wild. In developer benchmarks, Dirt 3 went from 110.6 FPS to 860.7 FPS, an impressive 678% improvement. Resident Evil 2 jumped from 26 FPS to 77 FPS. Call of Juarez went from 99.8 FPS to 224.1 FPS. Tiny Tina’s Wonderlands saw gains from 130 FPS to 360 FPS. And Call of Duty: Black Ops I is now actually playable on Linux. Those benchmarks compare Wine NTSYNC against upstream vanilla Wine, which means there’s no fsync or esync either. Gamers who use fsync are not going to see such a leap in performance in most games.
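The percentage figures above are easy to sanity-check. A quick sketch (my own, not from the report) that computes the relative improvement between two frame rates:

```python
def pct_improvement(old_fps: float, new_fps: float) -> float:
    """Percent change from old_fps to new_fps."""
    return (new_fps - old_fps) / old_fps * 100.0

# Dirt 3 under NTSYNC: 110.6 FPS -> 860.7 FPS
print(round(pct_improvement(110.6, 860.7)))  # -> 678, matching the reported 678%
```

The same formula puts Resident Evil 2’s jump (26 to 77 FPS) at roughly a 196% improvement, i.e. nearly tripling the frame rate.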

The games that benefit most from NTSYNC are the ones that were struggling before, such as titles with heavy multi-threaded workloads where the synchronization overhead was a genuine bottleneck. For those games, the difference is night and day. And unlike fsync, NTSYNC is in the mainline kernel, meaning you don’t need any custom patches or out-of-tree modules for it to work. Any distro shipping kernel 6.14 or later, which at this point includes Fedora 42, Ubuntu 25.04, and more recent releases, will support it. Valve has already added the NTSYNC kernel driver to SteamOS 3.7.20 beta, loading the module by default, and an unofficial Proton fork, Proton GE, already has it enabled. When Valve’s official Proton rebases on Wine 11, every Steam Deck owner gets this for free.
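A minimal sketch (my own, not from the report) of how you might check whether your system meets the kernel 6.14 requirement described above; the `kernel_ge_6_14` helper is hypothetical, and the `/dev/ntsync` device node is what the mainline driver exposes once loaded:

```shell
#!/bin/sh
# Sketch: is this kernel new enough (>= 6.14) to ship the ntsync driver?
kernel_ge_6_14() {
  ver=$1
  major=${ver%%.*}          # "6.14.0-generic" -> "6"
  rest=${ver#*.}            # -> "14.0-generic"
  minor=${rest%%.*}         # -> "14"
  minor=${minor%%[!0-9]*}   # strip any trailing non-digit suffix
  [ "$major" -gt 6 ] || { [ "$major" -eq 6 ] && [ "$minor" -ge 14 ]; }
}

if kernel_ge_6_14 "$(uname -r)"; then
  echo "kernel can support ntsync"
  # The driver exposes a device node once the module is loaded:
  [ -e /dev/ntsync ] && echo "ntsync module is loaded" || true
else
  echo "kernel too old for ntsync; fsync/esync patches still apply"
fi
```

On SteamOS 3.7.20 beta and distros that autoload the module, the device node should already be present; elsewhere a `modprobe ntsync` may be needed first.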

All of this is what makes NTSYNC such a big deal, as it’s not simply a run-of-the-mill performance patch. Instead, it’s something much bigger: this is the first time Wine’s synchronization has been correct at the kernel level, implemented in the mainline Linux kernel, and available to everyone without jumping through hoops.


TP-Link Tapo SolarCam C402 Outdoor Security Camera Makes Watching Your Home Effortless and Affordable

TP-Link Tapo SolarCam C402 Outdoor Security Camera
Security cameras can be a real money pit: there’s wiring to run, batteries that need regular replacement, and subscription fees that add up every month. What if I told you there’s a way to bypass all that hassle for less than $60? The TP-Link Tapo SolarCam C402 Kit accomplishes exactly that, and it’s a good deal even at full price. When it’s on sale (like now), you can get it for about $40.



Setting up the camera is simple, and anyone can do it in under an hour. Launch the Tapo app on your phone, scan the code on the camera, and connect it to your home Wi-Fi. Screw the included bracket to a wall or eave using the template and hardware provided, then snap the camera onto the mount. Clip the solar panel somewhere with adequate sunshine, extend its cable up to 13 feet if needed, and plug it into the camera. There’s no need to consult an electrician, and no outlet is necessary.

In daylight, the footage is remarkably vivid and clear, with a 125-degree field of view. Faces are still recognizable from 20 to 30 feet away, and license plates in your driveway are legible. Motion triggers send you a notification within seconds, and the app even puts a blue box around whatever moved so you can see exactly what caused the alert. The free built-in detection is genuinely useful: it distinguishes people, vehicles, and pets from random movement, so you don’t get notified about every car that drives by.

At night, this camera has a unique trick up its sleeve. When movement begins, the two spotlights instantly activate and illuminate the whole area in color from roughly 30 feet out. The colors are a little weaker than you might expect, but the image is still quite clear, especially compared with the standard black-and-white infrared on most inexpensive cameras. If you’d rather not use the lights, simply turn them off in the app, and the camera falls back to regular infrared when it’s entirely dark outside. And whether you need to speak with the delivery guy or shoo away a stray animal, the two-way audio works perfectly.

The solar panel should keep the 6,400mAh battery topped up with only 45 minutes of direct sunlight per day. Even with no sunlight at all, the battery should last up to six months, depending on usage. Another benefit is that you won’t have to worry about storage fees: insert a microSD card (up to 512GB) to keep recordings stored locally and privately. The app lets you watch, download, and delete events directly from the timeline. If you want cloud backup, you can purchase the Tapo Care plan, which adds 30 days of history and more advanced sorting, but none of the basic alerts or live view require additional payment.

Then there’s voice control: the Tapo app integrates with Alexa and Google Assistant, so you can ask your Echo Show to display the stream from the front porch and it will appear. If you want to get more creative, IFTTT recipes can do a variety of things, such as flashing a smart lamp when motion is detected or logging events to a spreadsheet.
