
Tech

Microsoft lowers Game Pass Ultimate and PC prices, won't include next Call of Duty


The Game Pass front page on Microsoft’s website now shows revised pricing for the service’s two most expensive plans. Although delaying the addition of new Call of Duty titles marks a reversal of the company’s earlier strategy, the expanded library introduced during last year’s major price increase remains intact.

Tech

Vercel breach exposes the OAuth gap most security teams cannot detect, scope or contain

One employee at Vercel adopted an AI tool. One employee at that AI vendor got hit with an infostealer. That combination created a walk-in path to Vercel’s production environments through an OAuth grant that nobody had reviewed.

Vercel, the cloud platform behind Next.js and its millions of weekly npm downloads, confirmed on Sunday that attackers gained unauthorized access to internal systems. Mandiant was brought in, law enforcement was notified, and investigations remain active. An update on Monday confirmed that a coordinated audit with GitHub, Microsoft, npm, and Socket verified that Next.js, Turbopack, the AI SDK, and all Vercel-published npm packages remain uncompromised. Vercel also announced that environment variable creation now defaults to “sensitive.”

Context.ai was the entry point. OX Security’s analysis found that a Vercel employee installed the Context.ai browser extension and signed into it using a corporate Google Workspace account, granting broad OAuth permissions. When Context.ai was breached, the attacker inherited that employee’s Workspace access, pivoted into Vercel environments, and escalated privileges by sifting through environment variables not marked as “sensitive.” Vercel’s bulletin states that variables marked sensitive are stored in a manner that prevents them from being read. Variables without that designation were accessible in plaintext through the dashboard and API, and the attacker used them as the escalation path.

CEO Guillermo Rauch described the attacker as “highly sophisticated and, I strongly suspect, significantly accelerated by AI.” Jaime Blasco, CTO of Nudge Security, independently surfaced a second OAuth grant tied to Context.ai’s Chrome extension, matching the client ID from Vercel’s published IOC to Context.ai’s Google account before Rauch’s public statement. The Hacker News reported that Google removed Context.ai’s Chrome extension from the Chrome Web Store on March 27. Per The Hacker News and Nudge Security, that extension embedded a second OAuth grant enabling read access to users’ Google Drive files.

Patient zero: a Roblox cheat and a Lumma Stealer infection

Hudson Rock published forensic evidence on Monday, reporting that the breach origin traces to a February 2026 Lumma Stealer infection on a Context.ai employee’s machine. According to Hudson Rock, browser history showed the employee downloading Roblox auto-farm scripts and game exploit executors. Harvested credentials included Google Workspace logins, Supabase keys, Datadog tokens, Authkit credentials, and the support@context.ai account. Hudson Rock identified the infected user as a core member of “context-inc,” Context.ai’s tenant on the Vercel platform, with administrative access to production environment variable dashboards.

Context.ai published its own bulletin on Sunday (updated Monday), disclosing that the breach affects its deprecated AI Office Suite consumer product, not its enterprise Bedrock offering (Context.ai’s agent infrastructure product, unrelated to AWS Bedrock). Context.ai says it detected unauthorized access to its AWS environment in March, hired CrowdStrike to investigate, and shut down the environment. Its updated bulletin then disclosed that the scope was broader than initially understood: the attacker also compromised OAuth tokens for consumer users, and one of those tokens opened the door to Vercel’s Google Workspace.

Dwell time is the detail that should concern security directors. Nearly a month separated Context.ai’s March detection from the Vercel disclosure on Sunday. A separate Trend Micro analysis references an intrusion beginning as early as June 2024 — a finding that, if confirmed, would extend the dwell time to roughly 22 months. VentureBeat could not independently reconcile that timeline with Hudson Rock’s February 2026 dating; Trend Micro did not respond to a request for comment before publication.

Where detection goes blind

Security directors can use this breakdown of the four-hop kill chain the breach exploited to benchmark their own detection stack. Each hop lists what happened, who should have detected it, the typical coverage, and the gap.

Hop 1: Infostealer on employee device
What happened: A Context.ai employee downloaded Roblox cheat scripts; Lumma Stealer harvested Workspace credentials and Supabase, Datadog, and Authkit keys.
Who should detect: EDR on the endpoint; credential exposure monitoring.
Typical coverage: Low. The device was likely under-monitored, and most orgs do no stealer-log monitoring.
Gap: Most enterprises do not subscribe to infostealer intelligence feeds or correlate stealer logs against employee email domains.

Hop 2: AWS compromise at Context.ai
What happened: The attacker used harvested credentials to access Context.ai’s AWS environment. Detected in March.
Who should detect: Context.ai cloud security; AWS CloudTrail.
Typical coverage: Partially detected. Context.ai stopped the AWS access but missed the OAuth token exfiltration.
Gap: The initial investigation did not identify OAuth token exfiltration; the scope was underestimated until the Vercel disclosure.

Hop 3: OAuth token theft into Vercel Workspace
What happened: A compromised OAuth token was used to access a Vercel employee’s Google Workspace. The employee had granted “Allow All” permissions via a Chrome extension.
Who should detect: Google Workspace audit logs; OAuth app monitoring; CASB.
Typical coverage: Very low. Most orgs do not monitor third-party OAuth token usage patterns.
Gap: No approval workflow intercepted the grant, and no anomaly detection flagged OAuth token use from a compromised third party. This is the hop no one saw.

Hop 4: Lateral movement into Vercel production
What happened: The attacker enumerated non-sensitive environment variables (accessible via dashboard and API) and harvested customer credentials.
Who should detect: Vercel platform audit logs; behavioral analytics.
Typical coverage: Moderate. Vercel detected the intrusion after the attacker accessed customer credentials.
Gap: Detection occurred after exfiltration, not before. Environment variable access by a compromised Workspace account did not trigger real-time alerting.

What’s confirmed vs. what’s claimed

Vercel’s bulletin confirms unauthorized access to internal systems, a limited subset of affected customers, and two IOCs tied to Context.ai’s Google Workspace OAuth apps. Rauch confirmed that Next.js, Turbopack, and Vercel’s open-source projects are unaffected.

Separately, a threat actor using the ShinyHunters name posted on BreachForums claiming to hold Vercel’s internal database, employee accounts, and GitHub and NPM tokens, with a $2M asking price. Austin Larsen, principal threat analyst at Google Threat Intelligence, assessed the claimant as “likely an imposter.” Actors previously linked to ShinyHunters have denied involvement. None of these claims has been independently verified.

Six governance failures the Vercel breach exposed

1. AI tool OAuth scopes go unaudited. Context.ai’s own bulletin states that a Vercel employee granted “Allow All” permissions using a corporate account. Most security teams have no inventory of which AI tools their employees have granted OAuth access to.

CrowdStrike CTO Elia Zaitsev put it bluntly at RSAC 2026: “Don’t give an agent access to everything just because you’re lazy. Give it access to only what it needs to get the job done.” Jeff Pollard, VP and principal analyst at Forrester, told Cybersecurity Dive that the attack is a reminder about third-party risk management concerns and AI tool permissions.

2. Environment variable classification is doing real security work. Vercel distinguishes between variables marked “sensitive” (stored in a manner that prevents reading) and those without that designation (accessible in plaintext through the dashboard and API). Attackers used the accessible variables as the escalation path. A developer convenience toggle determined the blast radius. Vercel has since changed its default: new environment variables now default to sensitive.

“Modern controls get deployed, but if legacy tokens or keys aren’t retired, the system quietly favors them,” Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS, told VentureBeat.
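To find out where your own projects stand today, the same classification can be audited programmatically. The sketch below is illustrative, not an official tool: the payload shape (an "envs" list with "key" and "type" fields) is an assumption modeled on Vercel's project environment-variables endpoint (GET /v9/projects/{id}/env), so verify the field names against the current API docs before relying on it.

```python
# Hedged sketch: flag environment variables that would be readable via
# dashboard/API because they are not marked "sensitive". The payload shape
# is an assumption modeled on Vercel's GET /v9/projects/{id}/env response.

def readable_env_vars(env_payload):
    """Return the keys of env vars whose type is not 'sensitive'."""
    return [
        var["key"]
        for var in env_payload.get("envs", [])
        if var.get("type") != "sensitive"
    ]

# Illustrative response body (keys and types are hypothetical)
payload = {
    "envs": [
        {"key": "DATABASE_URL", "type": "encrypted"},    # readable via API
        {"key": "STRIPE_SECRET", "type": "sensitive"},   # write-only
        {"key": "NEXT_PUBLIC_URL", "type": "plain"},     # readable via API
    ]
}

for key in readable_env_vars(payload):
    print(f"AUDIT: {key} is readable; consider re-creating it as sensitive")
```

Anything this check flags is exactly the class of variable the attacker used as an escalation path.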

3. Infostealer-to-SaaS-to-supply-chain escalation chains lack detection coverage. Hudson Rock’s reporting reveals a kill chain that crossed four organizational boundaries. No single detection layer covers that chain. Context.ai’s updated bulletin acknowledged that the scope extended beyond what was initially identified during its CrowdStrike-led investigation.

4. Dwell time between vendor detection and customer notification exceeds attacker timelines. Context.ai detected the AWS compromise in March. Vercel disclosed on Sunday. Every CISO should ask their vendors: what is your contractual notification window after detecting unauthorized access that could affect downstream customers?

5. Third-party AI tools are the new shadow IT. Vercel’s bulletin describes Context.ai as “a small, third-party AI tool.” Grip Security’s March 2026 analysis of 23,000 SaaS environments found a 490% year-over-year increase in AI-related attacks. Vercel is the latest enterprise to learn this the hard way.

6. AI-accelerated attackers compress response timelines. Rauch’s assessment of AI acceleration comes from what his IR team observed. CrowdStrike’s 2026 Global Threat Report puts the baseline at a 29-minute average eCrime breakout time, 65% faster than 2024.

Security director action plan

OAuth governance
What failed: Context.ai held broad “Allow All” Workspace permissions; no approval workflow intercepted the grant.
Recommended action: Inventory every AI tool OAuth grant org-wide. Revoke scopes exceeding least privilege. Check both Vercel IOCs now.
Owner: Identity / IAM

Env var classification
What failed: Variables not marked “sensitive” remained accessible, and that accessibility became the escalation path.
Recommended action: Default to non-readable. Require a security sign-off to downgrade any variable to accessible.
Owner: Platform engineering + security

Infostealer-to-supply-chain
What failed: The kill chain spanned Lumma Stealer, Context.ai’s AWS, OAuth tokens, Vercel’s Workspace, and production environments.
Recommended action: Correlate infostealer intel feeds against employee domains. Automate credential rotation when credentials surface in stealer logs.
Owner: Threat intel + SOC

Vendor notification lag
What failed: Nearly a month passed between Context.ai’s detection and Vercel’s disclosure.
Recommended action: Require 72-hour notification clauses in all contracts involving OAuth or identity integration.
Owner: Third-party risk / legal

Shadow AI adoption
What failed: One employee’s unapproved AI tool became the breach vector for hundreds of orgs.
Recommended action: Extend shadow IT discovery to AI agent platforms. Treat unapproved adoption as a security event.
Owner: Security ops + procurement

Lateral movement speed
What failed: Rauch suspects AI acceleration; the attacker compressed the access-to-escalation window.
Recommended action: Cut detection-to-containment SLAs below the 29-minute eCrime average.
Owner: SOC + IR team
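One of the actions above, correlating infostealer intel feeds against employee email domains, can be prototyped in a few lines. This is an illustrative sketch, not a vendor integration: the feed record shape, field names, and corporate domains are all assumptions.

```python
# Illustrative sketch: correlate infostealer log entries (email/password
# records from an intel feed) against corporate email domains to find
# employees whose credentials surfaced in stealer dumps. Field names and
# domains are hypothetical.

CORPORATE_DOMAINS = {"example.com", "corp.example.com"}  # assumed domains

def exposed_employees(stealer_records):
    """Return the set of corporate emails seen in stealer logs."""
    hits = set()
    for rec in stealer_records:
        email = rec.get("email", "").lower()
        domain = email.rsplit("@", 1)[-1] if "@" in email else ""
        if domain in CORPORATE_DOMAINS:
            hits.add(email)
    return hits

feed = [
    {"email": "Dev@example.com", "source": "lumma", "seen": "2026-02-14"},
    {"email": "user@gmail.com", "source": "lumma", "seen": "2026-02-15"},
]

for email in exposed_employees(feed):
    print(f"EXPOSED: {email}; rotate credentials and force re-auth")
```

In practice, a hit like this should automatically trigger credential rotation and session revocation rather than a ticket in a queue.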

Run both IoC checks today

Search your Google Workspace admin console (Security > API Controls > Manage Third-Party App Access) for two OAuth App IDs.

The first is 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com, tied to Context.ai’s Office Suite.

The second is 110671459871-f3cq3okebd3jcg1lllmroqejdbka8cqq.apps.googleusercontent.com, tied to Context.ai’s Chrome extension and granting Google Drive read access.

If either touched your environment, you are in the blast radius regardless of what Vercel discloses next.
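If you prefer to sweep programmatically, token grants can also be exported (for example via the Admin SDK Directory API's tokens.list endpoint) and filtered for the two IOC client IDs. In the sketch below, the record fields (clientId, userKey, scopes) are assumptions modeled on that endpoint's response; verify them against your export before trusting the result.

```python
# Hypothetical sketch: scan exported Workspace OAuth token grants for the two
# Context.ai IOC client IDs from Vercel's bulletin. Record field names are
# assumptions modeled on the Admin SDK Directory API tokens.list response.

IOC_CLIENT_IDS = {
    # Context.ai Office Suite
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com",
    # Context.ai Chrome extension (Google Drive read access)
    "110671459871-f3cq3okebd3jcg1lllmroqejdbka8cqq.apps.googleusercontent.com",
}

def find_ioc_grants(token_records):
    """Return (user, client_id, scopes) for every grant matching an IOC."""
    return [
        (rec.get("userKey"), rec["clientId"], rec.get("scopes", []))
        for rec in token_records
        if rec.get("clientId") in IOC_CLIENT_IDS
    ]

# Example export (users and scopes are hypothetical)
sample = [
    {"userKey": "dev@example.com",
     "clientId": "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj"
                 ".apps.googleusercontent.com",
     "scopes": ["https://www.googleapis.com/auth/drive.readonly"]},
    {"userKey": "ops@example.com",
     "clientId": "some-other-app.apps.googleusercontent.com"},
]

for user, client_id, scopes in find_ioc_grants(sample):
    print(f"IOC HIT: {user} granted {client_id} scopes={scopes}")
```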

What this means for security directors

Forget the Vercel brand name for a moment. What happened here is the first major proof case that AI agent OAuth integrations create a breach class that most enterprise security programs cannot detect, scope, or contain. A Roblox cheat download in February led to production infrastructure access in April. Four organizational boundaries, two cloud providers, and one identity perimeter. No zero-day required.

For most enterprises, employees have connected AI tools to corporate Google Workspace, Microsoft 365 or Slack instances with broad OAuth scopes — without security teams knowing. The Vercel breach is the case study for what that exposure looks like when an attacker finds it first.

Tech

Starbucks cuts tech jobs as new CTO reshapes organization

Starbucks is cutting an unspecified number of tech jobs. (GeekWire File Photo)

Starbucks is cutting jobs in its technology organization, restructuring the team under a new chief technology officer who joined the coffee giant from Amazon four months ago.

Several affected employees posted about the cuts on LinkedIn on Tuesday afternoon, including people in program and product management and other technology-related roles. Starbucks declined to comment, and the number of people impacted is unclear.

The Seattle Times reported on the cuts earlier today, citing an internal message in which the company told employees it was “making structural changes to move faster, sharpen focus, and ensure we are set up to deliver on our most important priorities.”  

Anand Varadarajan joined Starbucks as chief technology officer in January after 19 years at Amazon, where he most recently ran tech and supply chain for its global grocery business. 

The restructuring comes as Starbucks pushes ahead with a broader turnaround under CEO Brian Niccol, who joined in 2024. It includes a series of technology initiatives, from an AI-powered drink-ordering assistant to an algorithm that manages mobile order timing.

The cuts appear to be unrelated to the company’s Nashville expansion. Following up on a prior announcement, Starbucks said Tuesday that it will invest $100 million in the new corporate office in Tennessee that will eventually employ up to 2,000 people.


Tech

Home Depot Dropped LG Refrigerator Prices Up To 53% During Spring Black Friday Sale

We may receive a commission on purchases made from links.

Home Depot’s Spring Black Friday Sale ends on April 22, but there is still time to make some last-minute splurges on a variety of appliances and tools — including Home Depot’s extensive collection of LG refrigerators. There are currently over 40 fridges on sale, with massive discounts up to 53% on popular models. 

The biggest sale is on the Energy Star-certified LG Counter-Depth Max, with the 53% discount bringing it down from $3,399 to $1,599. That’s $1,800 off. This model has 26 cubic feet of room with various compartments for storing food, a large 12.6-inch-tall ice and water dispenser, and LG’s ThinQ app to control temperature, track energy, and check the filter status. The latter is a handy extra, making LG one of SlashGear’s favorite smart fridge brands.

Customers love the hidden handles that create an extra-sleek look, the fridge’s bright lighting, and its spacious freezer, although some find the lack of door handles and the shelf heights a bit awkward. It currently has a 4.5-star rating, which could make it a good candidate if you’re in the market for a heavily discounted fridge. It’s not the only LG fridge on sale, either: other highlights include a 29-cubic-foot Standard-Depth Max fridge that’s 42% off and a 28-cubic-foot three-door French door fridge at 48% off.

What is Home Depot’s Spring Black Friday Sale?

Home Depot’s Spring Black Friday Sale ran a lot longer than just Friday — the sale started April 9th and will end April 22nd, the same 14-day timeframe the retailer has used for its spring sale for over a decade. It’s not really a “Black Friday” sale considering the length, but anyone with spring projects will most definitely welcome all the deals. 

Spring is a popular time for deals across a wide range of retailers, with the likes of Harbor Freight, Lowe’s, and Amazon also running their own seasonal discounts. This makes a lot of sense, since it’s often when people start DIY renovation projects and refresh their living space. Home Depot’s sale covers a wide range of products from major brands like DeWalt, Ryobi, Samsung, LG, Whirlpool, and Frigidaire, to name a few. Products on sale range from lawn and garden equipment — including useful gardening gadgets for spring — to patio furniture, kitchen appliances, and storage.


Tech

Latest 'Star Wars' movie cut unnecessary costs by using Apple Vision Pro

Published

on

Director Jon Favreau says a specialized app let him better frame IMAX shots using a virtual theater environment in Apple Vision Pro. He cites it as one method to cut back on reshoots and reduce costs.

Apple Vision Pro could become a useful tool in filmmaking

Filmmaking has only become more and more expensive even as commercialized tools make the medium more accessible. It’s easier than ever to grab a smartphone and shoot some footage, but reaching Hollywood calibre isn’t so simple.
In an interview conducted by The Town podcast during Cinemacon, Jon Favreau discussed ways that technology was helping reduce costs in filmmaking. One of the tools he mentioned was Apple Vision Pro.

Tech

My Smartwatch Gives Me Health Anxiety. Experts Explain How to Make It Stop


I’m a wellness writer with health anxiety. Also known as hypochondria or illness anxiety disorder, health anxiety is a condition that makes me worry I am or may become ill even when I’m perfectly healthy. One minute, I have a headache, and the next, I think I’ve got a deadly brain tumor.

What’s ironic is that part of my job involves testing health-monitoring wearables, including fitness trackers and smart rings. While I love exploring this technology and do think it can help you learn more about your body, I have to be careful about how I use it so my anxiety isn’t triggered. I know I’m not alone.

“Healthy adults and individuals with pre-existing medical conditions are increasingly using these devices to manage their health,” says Dr. Lindsey Rosman, assistant professor of medicine in the Division of Cardiology and co-director of the Cardiovascular Device and Data Science Lab at the University of North Carolina School of Medicine. “Whether 24/7 access to health information from a wearable actually helps or potentially harms people is really unclear.”

When you add in the ability to search your symptoms online or ask an AI chatbot in your wearable’s app every health question under the sun, it becomes even harder to distinguish what’s helpful from what’s harmful.

To help myself and others with health anxiety navigate the world of wearables so we can either enjoy using them or know when it’s time to stop, I reached out to experts for their advice.

1. Turn off anxiety-inducing health alerts

Rosman has observed clinically that it can be beneficial to either scale back or turn off the features that make you anxious. This can be especially helpful for people with pre-existing conditions that are already being treated, such as atrial fibrillation (AFib, an irregular heartbeat), as your wearable’s irregular heart rhythm notifications will only make you anxious and can prompt you to see your doctor when it’s not medically necessary.

Plus, certain medications can affect the accuracy of wearable sensors, provoking false alarms. 

“We published a case report on a patient who performed over 900 EKGs [electrocardiograms or ECGs, which measure the heart’s electrical activity] on her smartwatch in a single year,” says Rosman. While most of the EKGs were normal, inconclusive alerts fueled her anxiety, leading to multiple ER visits, spousal conflict and the need for therapy to reclaim her daily life. The patient had no psychiatric history prior to getting a smartwatch.

When you get an unexpected health alert on your device, it can understandably cause panic.

Dr. Karen Cassiday, author of Freedom from Health Anxiety and owner and managing director of the Anxiety Treatment Center of Greater Chicago, says that even patients who don’t have health anxiety can find wearables to be intrusive when they get too many alerts. “They discover they want to be less aware of every moment of their body’s functioning,” she says.

Thankfully, most wearable health features can be turned off completely or customized. 

For instance, Shyamal Patel, SVP of science at Oura, maker of the Oura Ring, shares that the device’s Personalized Activity Goals allow you to choose to see steps instead of calories, adjust your daily activity goal or hide calories completely, which can be necessary for anyone who finds calorie counting triggering or overly rigid. 

2. Avoid compulsively checking your smart device

Referring to a 2024 study she worked on that examined the impact of wearables on the psychological well-being of patients with AFib, Rosman says that about half of the participants were checking their heart rate every day out of habit, not because they felt symptoms. 

Cassiday explains that while people with health anxiety may initially find wearables helpful, compulsively checking to make sure their vitals are normal can accidentally become a form of negative reinforcement that further propels the anxiety.

“Often when I work with anxious people, we try to cut back or eliminate the need to compulsively check for reassurance on their wearables, as well as with ChatGPT or other digital ‘doctors,’” says Cassiday.

When people refrain from compulsively checking, wearables can provide useful feedback that counters the false belief that something terrible will happen to their health.  

If checking your health metrics causes anxiety, try reducing how often you view them on your device or in its app. Setting a reminder to check no more than weekly could help, especially since a longer window gives you a broader picture and makes you less likely to hyperfocus on a single data point that seems off.

You should also avoid checking your wearable’s health information right after you wake up or before you go to bed, as this can set the tone for an anxious day or make it harder to fall asleep. 

If having a screen on your wrist makes it difficult for you to stop checking, a screenless smart ring or fitness tracker such as the Whoop 5.0 may be a better option, since they rely on apps instead of screens.

A screenless smart ring may help you stop compulsively checking your device.

“You choose how much or how little you engage with the app, which gives those who might be anxious about their health the option to limit the amount of time they spend with their data,” says Patel.

3. Focus on trends, not one-off metrics

When I asked both Patel and Dr. Jacqueline Shreibati, head of clinical for platforms and devices at Google, how people who wear their devices can reduce health anxiety, they emphasized the importance of tracking trends — not individual metrics.  

“We focus on long-term trends (rather than isolated metrics) to help users maintain a balanced relationship with their data,” says Shreibati. “What being healthy means differs for everyone, and we encourage users to consult their physician if they have any concerns.”

Patel points to the Tags and Trends features in the Oura app. Tags lets you tag lifestyle factors such as travel, alcohol, meditation or late meals, which you can then view in Trends to see how your behavior affects your recovery and sleep over weeks, rather than looking at a single score that may one day seem abnormal.

Instead of viewing a single sleep or stress score, consider looking at that data weekly or monthly.

4. Remember: Your smartwatch can’t replace your doctor

“Most consumer wearables were originally developed as personal wellness devices, which are not required to demonstrate safety and efficacy like traditional medical devices (e.g., a blood pressure cuff or pacemaker),” Rosman explains. 

Yet we’ve begun using these wearables to monitor our health, using metrics such as heart rate and rhythm, blood oxygen, stress, sleep and physical activity. Now, some of these devices have medical-grade sensors, software and algorithms approved by the US Food and Drug Administration to detect irregular heart rhythms, hypertension and sleep apnea.

Despite FDA approval, wearables are simply not doctors, and they cannot provide medical diagnoses or treatment. That’s why it’s essential to understand what your device actually measures.

The ECG feature on many smartwatches is just one example of this. FDA-cleared as it may be, a single-lead ECG that only uses one electrode to record your heart’s electrical activity from your wrist is not the same as the 12-lead, hospital-grade ECG a cardiologist would use. 

While your wearable’s ECG can surface a potential symptom worth investigating with your doctor, it can’t replace a professional or their medical-grade equipment.

Performing an ECG on your smartwatch is not the same as having that same measurement taken in a doctor’s office.

The gap is even wider for features such as stress and sleep scores, which haven’t been clinically validated because there’s no single gold standard to validate against. These numerical scores are calculated from bodily signals such as heart rate, temperature, movement and heart rate variability, which tend to correlate with your stress and sleep states. But the translation from raw signal to “your stress score is 74” is more of an educated estimate.

“What you’re seeing is a rough indicator of how your nervous system is functioning, not a medical diagnosis,” Rosman emphasizes.

Patel adds that not all physiological stress is inherently negative. “Some forms of short-term physiological stress can be healthy and adaptive,” he says. “That’s why we aim to pair data with in-app context and insights, so members can better understand what they’re seeing rather than receiving that information in a vacuum.” 

Nonetheless, when you don’t know exactly what your wearable is measuring, a “bad” stress or sleep score can seem scary when it isn’t necessarily a cause for alarm, but rather a sign that you may want to have a deeper conversation with your doctor.

5. Get a temperature check

Just like you should talk to your doctor before starting a new medication or diet, you should get their thoughts on whether you could benefit from using a wearable.

“Education is probably the most underused tool we have,” Rosman says. 

When you don’t know what a healthy heart rate or ECG looks like, one seemingly atypical reading can send you into a panic. That’s why it’s essential to speak with your doctor so you understand your own baseline and if a wearable makes sense for your current health condition.

As a guide, Rosman provides the following questions you can ask your doctor:

  • What type of wearable should I use? 
  • How often should I check this data? 
  • What are healthy numbers for me? 
  • What do I do when I get an alert? 
  • When should I call the clinic or seek emergency care versus waiting? 

“A fast heart rate after climbing stairs is not the same as a dangerous arrhythmia, but without that context, a notification can feel terrifying,” Rosman adds. “So much wearable-related anxiety comes not from the data itself, but from not knowing what to do with it.”

6. Know when it’s time to remove that device and get help

When asked when someone should consider parting with their wearable or seeing a professional for health anxiety, Cassiday says that it’s similar to what many notice when they keep checking their smartphone for the next text, TikTok or other digital data.  

“If you find yourself interrupting pleasurable activities or your free time to check, or if you feel anxious about not checking, you have a problem,” Cassiday states. 

For instance, if you only stop worrying that you’ll have a heart attack once you check your wearable and see your resting heart rate, that’s a warning sign. Put simply, if you only feel at peace after someone or something, such as a wearable, reassures you that you’re in good health, it’s time to get professional support.

If health anxiety is making it difficult for you to enjoy your life, then it’s time to talk to a professional.

To find help, Cassiday recommends using the resources provided by the Anxiety and Depression Association of America or the International OCD Foundation, as health anxiety can be related to obsessive-compulsive disorder. 

7. Consider cognitive behavioral therapy 

When you have health anxiety, the gold standard for care is cognitive behavioral therapy. It involves exposure to health-related worries without any form of reassurance and learning to accept the uncertainty that comes with not knowing our future health status, manner of death or time of death.  

“People need to learn that all the vague symptoms that trigger their health anxiety are just normal variations of normal body functioning and aging,” Cassiday explains. “They have to reframe the symptoms they notice as nothing to examine, discuss or manage and instead trust the facts of their other evidence of good health.”

CBT can help you live in the present instead of spiraling into the anxiety-inducing “What if?” of the future.

Who should and shouldn’t use health-tracking wearables

Wearables can be great for people who like tracking their fitness to motivate them toward their goals, or for patients and their care teams when medically necessary. Though they usually cost hundreds of dollars, wearables can be less expensive than medical tests. Some are even HSA- or FSA-eligible.

“In AFib specifically, being able to correlate your symptoms with actual rhythm data can be genuinely empowering,” Rosman says. She’s observed that the patients who thrive with wearables are those who use the data as information — not as something to fear — and those who don’t participate in 24/7 surveillance.

In Rosman’s 2024 study, two-thirds of AFib patients said their wearable made them feel safer and more in control. Even so, there is still the risk of unintended consequences.

While they can be beneficial, wearables can also come with risks, especially since there isn’t enough research on the subject.

Just as doctors would never prescribe a medication without knowing the potential benefits, risks and how to manage them, wearables should be no different. “The technology has moved so much faster than the science, and we need the scientific evidence from clinical trials to catch up,” Rosman explains. 

Since the evidence isn’t there yet, Rosman is hesitant to say anyone should categorically avoid wearables. 

Still, people who are highly anxious about their heart or prone to obsessive symptom monitoring should approach with caution. The same goes for those with conditions involving unpredictable, abrupt symptoms, such as paroxysmal AFib and POTS: the uncertainty of not knowing when the next episode will hit is stressful enough, and constant monitoring can make it worse.

A note on the science (or lack thereof)

Rosman has conducted research on the connection between wearables and anxiety, including a 2025 review describing the psychological effects of wearables on patients with cardiovascular disease and a 2024 study examining their impact on the psychological well-being of patients with AFib. 

The 2025 review found that while wearables can help promote healthy behaviors and provide data for diagnosis and treatment, they also pose risks, such as adverse psychological reactions. 

The 2024 study concluded that wearables were associated with higher rates of patients becoming preoccupied with their symptoms, worrying about their treatments and using both formal and informal health care resources.


On the other hand, a 2021 study that analyzed the 2019 and 2020 US-based Health Information National Trends Survey found that using wearable devices for self-tracking can indirectly reduce psychological distress. Still, misinterpretation of wearable data may cause unnecessary panic and anxiety. 

A 2020 qualitative interview study featuring patients with chronic heart disease also found that while wearables’ data may be a resource for self-care, it can create uncertainty, fear and anxiety.

Ultimately, more studies are needed. 

“Honestly, we don’t have good scientific evidence in this area yet,” says Rosman. “Despite widespread use, there have been no clinical trials I’m aware of that have looked at the benefits and potential health risks of specific wearable health features.”


Rosman’s team plans to be the first to investigate this in patients with pre-existing heart conditions.

Wearables’ impact on our health care system

When wearables cause health anxiety, they can prompt healthy individuals to schedule unnecessary doctor’s appointments. This places a burden on our health care system, which is already experiencing shortages, making it difficult for people who actually require medical attention to access care. 

Rosman’s 2024 study found that those using a wearable sent nearly twice as many patient portal messages to their doctors. Responding to these messages from patients takes time, isn’t reimbursed by insurance and can contribute to burnout.


As a result, Rosman believes we need better systems for managing wearable data in clinical settings before we scale it further: “Wearables are changing how we deliver care in ways we haven’t fully prepared for.”

Wearables can further widen health care inequity due to their cost. 

“These devices are expensive, they were mostly designed and tested in young healthy people and they’re marketed toward higher-income consumers,” Rosman explains. “If we’re not thoughtful about access, wearables could actually widen health disparities rather than close them. That’s the opposite of what we want.”

The bottom line

While wearables have their benefits, there are also risks to consider, especially given the limited research on the subject.

If you purchase a wearable and it triggers health anxiety, you don’t have to use every available feature, wear it constantly or continue to wear it at all. Before you even buy that device, you can arm yourself with anxiety-reducing knowledge by getting your doctor’s expert opinion.  


However, if health anxiety continues to take over your life, it may be time to remove your wearable and seek professional help. 

As for me, writing this piece has been a necessary reminder that, while there’s a lot we can’t control in life, the power is in our hands (or on our wrists or fingers) when it comes to the technology we put on our bodies or invite into our homes. Just like an itchy sweater or a lumpy armchair, we can send the technology that doesn’t serve us packing.  

Source link

Continue Reading

Tech

Does A Right Turn Traffic Light Mean ‘No Turn On Red’ In Florida?

Published

on





Traffic lights can be tricky, depending on where you go. How you respond to a red light at an intersection in one state may not be how you need to respond in another. Turning right on red can even get you a ticket in some U.S. cities. But in Florida, a right turn traffic light may still allow a right turn after stopping. There's also a bit more to it than that.

First off, you must come to a complete stop at the red light. If you keep rolling through the turn instead, you could get a ticket. Next, if there are no posted warning signs at the light, Florida law says you can go ahead and turn right once it’s clear to do so. But if you have a sign warning you that there’s no turn on red, then you’re stuck. Stay where you are until you get the green light.

Similarly, a red right arrow also requires a complete stop. But don't let the arrow fool you: it isn't an automatic prohibition on turning once the way is clear. Unless a posted sign says otherwise (such as a "No turn on red" sign), you may proceed after determining that it is safe to do so. This is the case whether you're at an intersection or a crosswalk.


Crosswalks and malfunctioning traffic lights

If you come to a right turn traffic light at a crosswalk in Florida, keep in mind that you are expected to yield to any pedestrians who are crossing. Even if you’ve come to a complete stop and are otherwise allowed to turn, you must wait. If your light turns green and someone is still in the process of crossing, you should wait then as well. Additionally, if you’re at an intersection with sidewalks but no clearly marked crosswalk present, you still have to yield.

However, there could be times you arrive at a right turn traffic light that’s malfunctioning. Maybe it’s blinking, stuck, or completely dead. If this happens, Florida law states that you must treat it as a four-way stop sign. That means you must come to a complete stop and yield right of way to traffic coming from all directions. Of course, you must also yield to any pedestrians crossing in front of you. Once the way clears and you have an open right turn, you’re free to go. Always be cautious when arriving at a light that’s out of order and make sure the intersection is fully clear before you continue.




Source link

Continue Reading

Tech

Meta will record employees’ keystrokes and use it to train its AI models

Published

on

Meta has found a new source of training data for its AI models: its own employees. The company plans to use data culled from the mouse movements and keystrokes of its own staff in its pursuit to build more capable and efficient artificial intelligence.

The story, which was first reported by Reuters, shows the lengths to which tech companies are going to find new sources of training data — the lifeblood of AI models that helps the programs learn how to more effectively carry out tasks and respond to user queries.

When reached for comment by TechCrunch, a Meta spokesperson provided the following statement: “If we’re building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them — things like mouse movements, clicking buttons, and navigating dropdown menus. To help, we’re launching an internal tool that will capture these kinds of inputs on certain applications to help us train our models. There are safeguards in place to protect sensitive content, and the data is not used for any other purpose.”

This trend reveals a troublesome privacy dimension of the AI industry. Last week it was reported that defunct startups are being scavenged for their corporate communications (like Slack archives and Jira tickets), which are then converted into AI training data.


Source link

Continue Reading

Tech

Cash App now supports accounts for kids 6-12

Published

on

Cash App, the banking and payments app run by Block, has added support for parent-managed kids accounts. The new accounts include key benefits from the service’s normal account, with an eye towards teaching financial literacy to younger users ages 6 to 12. Cash App first allowed teenage users on its platform in 2021.

As part of the “expanded Cash App Families experience,” eligible legal guardians and parents can create managed accounts that offer “a dedicated place on the platform to send allowances, set aside savings, and track spending for their child, kickstarting their path to financial independence,” Cash App says. Adults managing these accounts will be able to set up recurring transfers, see how their child is spending and do things like lock their child’s account to prevent transactions. Kids will get a custom debit card and the ability to receive payments from up to five trusted accounts, though notably they won’t be able to access Cash App itself.

Cash App says managed accounts are designed for kids 6 through 12. Once those kids turn 13, Cash App says parents will be able to choose to convert their account to a “sponsored account” to unlock more features, like the ability to send and receive payments, invest in stocks or trade crypto. Those sponsored accounts are technically still monitored and controlled by a parent or legal guardian, but they do give 13-year-olds more control over how they use their money.

A parent-managed account for kids is not a new idea in the fintech space, though Cash App is trying to reach a younger audience than some of its competitors. Venmo rolled out access to its payment platform to teens ages 13 to 17 in 2023. Separately, both Apple and Google offer their own kids accounts in Google Wallet and Apple Cash Family.


Source link

Continue Reading

Tech

Florida Launches Criminal Investigation Into ChatGPT Over School Shooting

Published

on

Florida’s attorney general has launched a criminal investigation into OpenAI over allegations that the accused gunman in a shooting at Florida State University last year used ChatGPT to help plan the attack. OpenAI says the chatbot is “not responsible for this terrible crime” and only provided factual information available from public sources. NPR reports: The Republican attorney general, James Uthmeier, said at a press conference in Tampa on Tuesday that accused gunman Phoenix Ikner consulted ChatGPT for advice before the shooting, including what type of gun to use, what ammunition went with it, and what time to go to campus to encounter more people, according to an initial review of Ikner’s chat logs. “My prosecutors have looked at this and they’ve told me, if it was a person on the other end of that screen, we would be charging them with murder,” Uthmeier said. “We cannot have AI bots that are advising people on how to kill others.”

Uthmeier’s office is issuing subpoenas to OpenAI seeking information about its policies and internal training materials related to user threats of harm and how it cooperates with and reports crimes to law enforcement, dating back to March 2024. At the press conference, Uthmeier acknowledged the investigation is entering into uncharted territory and is uncertain about whether OpenAI has criminal liability. “We are going to look at who knew what, designed what, or should have done what,” he said. “And if it is clear that individuals knew that this type of dangerous behavior might take place, that these types of unfortunate, tragic events might take place, and nevertheless still turned to profit, still allowed this business to operate, then people need to be held accountable.”

[…] Ikner, 21, is facing multiple charges of murder and attempted murder for the April 2025 shooting near the student union on FSU’s Tallahassee campus, where he was a student at the time. His trial is set to begin on Oct. 19. According to court filings, more than 200 AI messages have been entered into evidence in the case.

Source link

Continue Reading

Tech

Mozilla says it patched 271 Firefox vulnerabilities thanks to Anthropic’s Claude Mythos

Published

on

Anthropic’s buzzy announcement about using AI to improve cybersecurity earlier this month was met with plenty of skepticism. However, Mozilla has shared details that support the use of the company’s special Claude Mythos Preview model as a way to protect critical services. Using Mythos helped Mozilla’s team find and patch 271 vulnerabilities in the latest release of the Firefox browser. “So far we’ve found no category or complexity of vulnerability that humans can find that this model can’t,” the foundation said.

The blog post from Mozilla feels like a positive sign for Anthropic’s Project Glasswing. Obviously the AI company would want to put itself in the best possible light while presenting its own initiative, but there’s something encouraging about hearing the benefits from a third party. Mozilla also noted that in its time with Claude Mythos, the AI wasn’t able to turn up any bugs that a human wouldn’t have been able to find, given enough time and resources, which indicates that AI isn’t presently able to do more to crack cybersecurity protections than a person can.

An organization successfully using AI for good is certainly a refreshing change of pace in tech news. And for those Firefox users who aren’t personally interested in applying any generative AI in their browsing, Mozilla has offered the option to turn it all off for the past several months.

Source link

Continue Reading

Trending

Copyright © 2025