
Barkbox Promo Codes and Discounts: Up to 50% Off


As my fellow pet parents will know, it’s amazing how quickly even the tiniest of dogs can demolish their toy and treat stash. We love and spoil them nonetheless. When you subscribe to BarkBox, a fresh batch of cleverly themed treats and toys arrives at your doorstep. The costs of pet ownership can stack up quickly, especially if you’re buying your pooch a gift box that goes well beyond the essentials. That’s why we have BarkBox promo codes and discount options ready to go for you.

Barkbox Promo: Enjoy a Free Toy for a Year at Barkbox

When your monthly BarkBox arrives, it’s like Christmas morning for your dogs. I watch as my two dogs, Rosi and Randy, shake their little Chihuahua-mix bodies with barely restrained excitement. They’re never gentle on their toys, but the stimulation that comes from textures and chewing is good for their little brains. With BarkBox you get a steady supply of two unique toys and two bags of all-natural treats every month. If you want to see how your dogs react, this BarkBox coupon is good for new subscription customers and adds an extra toy to your box every month for a year.

Save 50% on Your First Barkbox Food Subscription With a Barkbox Coupon Code

Another reason BarkBox is the best dog subscription box is how easy the company makes it to keep your pantry stocked with your dog’s food. Use this BarkBox coupon to save 50% on your first BarkBox food subscription, so you won’t end up running to the grocery store in the middle of the night when your scooper scrapes the bottom of an empty kibble bin.

Fly Travel Stress-Free With Your Dog and Get $300 Off BARK Air Flights

If you live in a BARK Air hub destination, please know I am insanely jealous of you. It’s no secret that flying is stressful and can be very dangerous for pets, especially if they have to ride in a cargo hold. BarkBox makes your dog the VIP with BARK Air, letting them ride in the cabin with you and get doted on, so things are a lot less scary. This is another perk of having a BarkBox subscription, with the opportunity to get $300 off BARK Air flights.

Support Your Dog’s Dental Health and Get $10 Off With a Barkbox Coupon

Dental health is crucial for dogs, as it can prevent disease not just in their mouths but also in their vital organs. Don’t forget to schedule your yearly cleaning with your vet, but in the meantime, use this BarkBox discount code to get $10 off a special BarkBox Dental kit.

Get an Extra Premium Toy in Every BarkBox With the Extra Toy Club

For having such tiny mouths, my dogs can gnaw through toys with surprising speed. If you’re also buried in a pile of shredded fluff and squeakers from disemboweled toys, the Extra Toy Club can help. This subscription includes dog toys for aggressive chewers of all ages, breeds, and sizes, offering extra-durable toys meant to last longer. So far, so good at my house. Upgrading to this subscription box costs an extra $9 per month.

Get Exclusive BarkBox Discounts: Join the Email List

If you assume that the punchy branding and witty lingo extend to BarkBox’s emails and not just the box itself, you’d be correct. As a bonus, you can get exclusive BarkBox discount codes when you sign up to receive these emails. And who doesn’t love seeing a furry face, and a reminder of their pet, in between work subject lines and bill payment reminders?


These Official ChromeOS Flex USB Sticks Can Give Your Old Mac or Windows PC a Second Life


“People want something that lasts them a long time, that is quality, that is useful,” says Google senior director Alexander Kuscher. “Eventually, when it breaks or when you lose it, you get a new one because you feel taken care of. So I think that builds trust, and the trust is important.”

Flex started as an enterprise service for businesses; Google offered companies worried about security vulnerabilities on aging hardware a way to easily update to a more secure operating system. Or, at least, one that still received updates. After a while, other users started to get ahold of the software, downloading and installing it on their own USB sticks for their personal machines. “We didn’t make it particularly easy at the time,” Kuscher says. “But people did it.”

What led to the more consumer-oriented push of ChromeOS Flex—like this partnership with Back Market—was the end of software support for Microsoft’s Windows 10 operating system last fall. While the OS still technically works, it stopped receiving security updates, and Microsoft has encouraged users to update to Windows 11. But Windows 11 has specific hardware requirements, and it may not be a simple upgrade on certain machines. Google saw this as a moment to provide a cheaper alternative to the “Windows 10 cliff,” as Kuscher puts it. Back Market agreed.

“Ultimately, [Microsoft is] saying that people need to throw away their existing laptop to buy another one,” says Back Market CEO Thibaud Hug de Larauze. “And we say politely, no.”

If you’re tech-savvy, you can forgo Back Market’s $3 stick and download ChromeOS Flex onto a USB drive you have lying around right now.

Buying Refurb

Back Market has done very well for itself despite economic turmoil. As devices become more expensive, people turn to cheaper, refurbished options. Hug de Larauze compares the device market to the auto industry.

“Ninety percent of cars are being sold pre-owned,” Hug de Larauze says. “The new normal is to purchase them pre-owned because it’s almost dumb to buy a new one.”

When US president Donald Trump announced sweeping tariffs last year, Back Market’s sales tripled, Hug de Larauze says. Even after the dust settled a little and it became clear that tariffs would not directly affect smartphones or computers, sales stayed around twice what they’d been before. Back Market made $3.8 billion in 2025, turning a profit for the first time. While Hug de Larauze says these kinds of economic fluctuations may be good for sending more people to Back Market, he hopes they will shift buyer mindsets toward refurbished tech writ large.

“We have one planet, and resources are limited,” Hug de Larauze says. “We need to do more with what we already have in every sector. Fashion is the same, transportation is the same, energy is the same, it’s the same for everything.”


Apple’s new Studio Display XDR monitor has limited functionality on older Silicon Macs


If you’re looking to pre-order Apple’s new Studio Display XDR monitor today but have an older Mac, beware of some potential issues. According to the compatibility list spotted by AppleInsider, the new display will only run at 60Hz, not its full 120Hz refresh rate, on some older and less powerful Apple Silicon models. Moreover, support for older Intel Macs isn’t mentioned at all for either the Studio Display XDR or the cheaper Studio Display.

All Apple Silicon Macs will work with both monitors, including those with the oldest M1 chips, according to the support pages. However, the compatibility list for the Studio Display XDR includes this nugget: “Mac models with M1, M1 Pro, M1 Max, M1 Ultra, M2, and M3 support Studio Display XDR at up to 60Hz. All other Studio Display XDR features are supported.” So even if you have a hotrod M1 Ultra-based Mac, the Studio Display XDR’s refresh rate is capped at 60Hz — despite the fact that the chip can drive third-party monitors at 120Hz.

Similarly, only the iPad Pro M5 supports the Studio Display XDR at 120Hz, with all other compatible models (in the iPad Pro and iPad Air family) limited to 60Hz.

Intel Mac support isn’t mentioned at all in the compatibility list for either display, though they may function in some limited manner when connected. Intel Macs just received their last new OS update with macOS Tahoe (and only three more years of security updates), but it’s still surprising that they’re not compatible with Apple’s latest monitors.


Military Drone Insights for Safer Self-Driving Cars


Self-driving cars often struggle with situations that are commonplace for human drivers. When confronted with construction zones, school buses, power outages, or misbehaving pedestrians, these vehicles often behave unpredictably, crashing or freezing in place, disrupting local traffic and sometimes blocking first responders from doing their jobs. Because self-driving cars cannot reliably handle such routine problems, self-driving companies use human babysitters to remotely supervise them and intervene when necessary.

This idea—humans supervising autonomous vehicles from a distance—is not new. The U.S. military has been doing it since the 1980s with unmanned aerial vehicles (UAVs). In those early years, the military experienced numerous accidents due to poorly designed control stations, lack of training, and communication delays.

As a Navy fighter pilot in the 1990s, I was one of the first researchers to examine how to improve the UAV remote supervision interfaces. The thousands of hours I and others have spent working on and observing these systems generated a deep body of knowledge about how to safely manage remote operations. With recent revelations that U.S. commercial self-driving car remote operations are handled by operators in the Philippines, it is clear that self-driving companies have not learned the hard-earned military lessons that would promote safer use of self-driving cars today.

While stationed in the Western Pacific during the Gulf War, I spent a significant amount of time in air operations centers, learning how military strikes were planned, implemented and then replanned when the original plan inevitably fell apart. After obtaining my PhD, I leveraged this experience to begin research on the remote control of UAVs for all three branches of the U.S. military. Sitting shoulder-to-shoulder in tiny trailers with operators flying UAVs in local exercises or from 4000 miles away, my job was to learn about the pain points for the remote operators as well as identify possible improvements as they executed supervisory control over UAVs that might be flying halfway around the world.

Supervisory control refers to situations where humans monitor and support autonomous systems, stepping in when needed. For self-driving cars, this oversight can take several forms. The first is teleoperation, where a human remotely controls the car’s speed and steering from afar. Operators sit at a console with a steering wheel and pedals, similar to a racing simulator. Because this method relies on real-time control, it is extremely sensitive to communication delays.

The second form of supervisory control is remote assistance. Instead of driving the car in real time, a human gives higher-level guidance. For example, an operator might click a path on a map (called laying “breadcrumbs”) to show the car where to go, or interpret information the AI cannot understand, such as hand signals from a construction worker. This method tolerates more delay than teleoperation but is still time-sensitive.

Five Lessons From Military Drone Operations

Over 35 years of UAV operations, the military consistently encountered five major challenges that provide valuable lessons for self-driving cars.

Latency

Latency—delays in sending and receiving information due to distance or poor network quality—is the single most important challenge for remote vehicle control. Humans also have their own built-in delay: neuromuscular lag. Even under perfect conditions, people cannot reliably respond to new information in less than 200–500 milliseconds. In remote operations, where communication lag already exists, this makes real-time control even more difficult.
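To make the arithmetic concrete, here is a minimal back-of-envelope sketch; the speed and latency figures are illustrative assumptions, not numbers from the article:

```python
def distance_before_response(speed_kmh: float,
                             network_latency_s: float,
                             human_lag_s: float = 0.3) -> float:
    """Metres travelled during one remote-control round trip.

    human_lag_s: neuromuscular delay, roughly 200-500 ms as noted above.
    Network latency is counted twice: once for the video feed reaching
    the operator, once for the command returning to the vehicle.
    """
    total_delay_s = 2 * network_latency_s + human_lag_s
    speed_ms = speed_kmh / 3.6  # km/h -> m/s
    return speed_ms * total_delay_s

# At 50 km/h with a 250 ms one-way cellular delay, the car covers
# about 11 m before a correction can even begin to take effect.
print(round(distance_before_response(50, 0.25), 1))  # 11.1
```

Under these assumptions the car travels several car lengths before any remote input registers, which is why teleoperation degrades so quickly as latency grows.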

In early drone operations, U.S. Air Force pilots in Las Vegas (the primary U.S. UAV operations center) attempted to take off and land drones in the Middle East using teleoperation. With at least a two-second delay between command and response, the accident rate was 16 times that of fighter jets conducting the same missions. The military switched to local line-of-sight operators and eventually to fully automated takeoffs and landings. When I interviewed the pilots of these UAVs, they all stressed how difficult it was to control the aircraft with significant time lag.

Self-driving car companies typically rely on cellphone networks to deliver commands. These networks are unreliable in cities and prone to delays. This is one reason many companies prefer remote assistance instead of full teleoperation. But even remote assistance can go wrong. In one incident, a Waymo operator instructed a car to turn left when a traffic light appeared yellow in the remote video feed—but the network latency meant that the light had already turned red in the real world. After moving its remote operations center from the U.S. to the Philippines, Waymo’s latency increased even further. It is imperative that control not be so remote, both to resolve the latency issue and to increase oversight of security vulnerabilities.

Workstation Design

Poor interface design has caused many drone accidents. The military learned the hard way that confusing controls, difficult-to-read displays, and unclear autonomy modes can have disastrous consequences. Depending on the specific UAV platform, the FAA attributed between 20% and 100% of Army and Air Force UAV crashes caused by human error through 2004 to poor interface design.

UAV crashes (1986-2004) caused by human factors problems, including poor interface and procedure design. These two categories do not sum to 100% because both factors could be present in an accident.

Platform                Human Factors   Interface Design   Procedure Design
Army Hunter             47%             20%                20%
Army Shadow             21%             80%                40%
Air Force Predator      67%             38%                75%
Air Force Global Hawk   33%             100%               0%

Many UAV crashes have been caused by poorly designed human control systems. In one case, buttons were placed on the controllers such that it was relatively easy to accidentally shut off the engine instead of firing a missile, and remote operators did exactly that, inadvertently shutting engines down in flight.

The self-driving industry reveals hints of comparable issues. Some autonomous shuttles use off-the-shelf gaming controllers, which—while inexpensive—were never designed for vehicle control. The off-label use of such controllers can lead to mode confusion, which was a factor in a recent shuttle crash. Significant human-in-the-loop testing is needed to avoid such problems, not only prior to system deployment, but also after major software upgrades.

Operator Workload

Drone missions typically include long periods of surveillance and information gathering, occasionally ending with a missile strike. These missions can sometimes last for days, for example while the military waits for a person of interest to emerge from a building. As a result, remote operators experience extreme swings in workload: sometimes overwhelming intensity, sometimes crushing boredom. Both conditions can lead to errors.

When operators teleoperate drones, workload is high and fatigue can quickly set in. But when onboard autonomy handles most of the work, operators can become bored, complacent, and less alert. This pattern is well documented in UAV research.

Self-driving car operators are likely experiencing similar issues for tasks ranging from interpreting confusing signs to helping cars escape dead ends. In simple scenarios, operators may be bored; in emergencies—like driving into a flood zone or responding during a citywide power outage—they can become quickly overwhelmed.

The military has tried for years to have one person supervise many drones at once, because it is far more cost-effective. However, cognitive switching costs (regaining awareness of a situation after switching control between drones) result in workload spikes and high stress. That, coupled with increasingly complex interfaces and communication delays, has made this extremely difficult.

Self-driving car companies likely face the same roadblocks. They will need to model operator workloads and be able to reliably predict what staffing should be and how many vehicles a single person can effectively supervise, especially during emergency operations. If every self-driving car turns out to need a dedicated human to pay close attention, such operations would no longer be cost-effective.
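As a sketch of what such a workload model might look like, here is a minimal staffing estimate based on Little's law; the fleet size, intervention rate, handling time and utilization cap are hypothetical assumptions, not figures from the article:

```python
import math

def operators_needed(fleet_size: int,
                     interventions_per_car_hour: float,
                     handle_time_min: float,
                     max_utilization: float = 0.7) -> int:
    """Estimate concurrent remote operators for a robotaxi fleet.

    Little's law: average operators busy = arrival rate x handle time.
    Capping utilization below 1 leaves headroom for the workload
    spikes and switching costs described above.
    """
    arrivals_per_min = fleet_size * interventions_per_car_hour / 60
    busy_operators = arrivals_per_min * handle_time_min
    return math.ceil(busy_operators / max_utilization)

# 500 cars, one intervention per car-hour, 2 minutes per intervention:
print(operators_needed(500, 1.0, 2.0))  # 24
```

Even this crude model shows the economics: if emergencies push the intervention rate toward one per car at all times, staffing approaches one operator per vehicle and the cost advantage evaporates.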

Training

Early drone programs lacked formal training requirements, with training programs designed by pilots, for pilots. Unfortunately, supervising a drone is more akin to air traffic control than to actually flying an aircraft, so the military often placed drone operators in critical roles with inadequate preparation. This caused many accidents. Only years later did the military conduct a proper analysis of the knowledge, skills, and abilities needed for safe remote operations and change its training programs accordingly.

Self-driving companies do not publicly share their training standards, and no regulations currently govern the qualifications of remote operators. On-road safety depends heavily on these operators, yet very little is known about how they are selected or taught. Commercial aviation dispatchers, whose role closely resembles that of self-driving remote operators, are required to complete formal training overseen by the FAA; we should hold commercial self-driving companies to similar standards.

Contingency Planning

Aviation has strong protocols for emergencies including predefined procedures for lost communication, backup ground control stations, and highly reliable onboard behaviors when autonomy fails. In the military, drones may fly themselves to safe areas or land autonomously if contact is lost. Systems are designed with cybersecurity threats—like GPS spoofing—in mind.

Self-driving cars appear far less prepared. The 2025 San Francisco power outage left Waymo vehicles frozen in traffic lanes, blocking first responders and creating hazards. These vehicles are supposed to perform “minimum-risk maneuvers” such as pulling to the side—but many of them didn’t. This suggests gaps in contingency planning and basic fail-safe design.

The history of military drone operations offers crucial lessons for the self-driving car industry. Decades of experience show that remote supervision demands extremely low latency, carefully designed control stations, manageable operator workload, rigorous, well-designed training programs, and strong contingency planning.

Self-driving companies appear to be repeating many of the early mistakes made in drone programs. Remote operations are treated as a support feature rather than a mission-critical safety system. But as long as AI struggles with uncertainty, which will be the case for the foreseeable future, remote human supervision will remain essential. The military learned these lessons through painful trial and error, yet the self-driving community appears to be ignoring them. The self-driving industry has the chance—and the responsibility—to learn from our mistakes in combat settings before it harms road users everywhere.


Anthropic sees major Claude outage after ‘unprecedented demand’


As the US administration moves to drop Anthropic as a supplier, many are rallying around the AI company’s relatively ethical stance, creating ‘unprecedented demand’ for Claude.

Anthropic’s Claude has fast become the darling of AI enthusiasts for development, research and enterprise work. Now it is facing the might of the US administration, which is threatening to drop it entirely as a supplier after a falling out with the Pentagon over so-called “red lines” the company would not cross.

With many in Silicon Valley supporting its relatively principled stand, and general users sending it to the top of the US Apple charts for free downloads in recent days – beating OpenAI’s ChatGPT for the first time – its flagship Claude.ai and Claude Code apps went down for around three hours on Monday (2 March), causing many to bemoan its absence. There are already reports of further outages as we write, although Anthropic’s latest status update says “a fix has been implemented and we are monitoring the results”.

In a nostalgic post on LinkedIn yesterday, AI aficionado and regular Silicon Republic contributor Jonathan McCrea wrote: “I now feel the same way about Claude being down as I used to about Twitter being down.”

De facto boycott

Last night, treasury secretary Scott Bessent added his voice to the de facto US administration boycott, saying in a post on X that his department would terminate its use of Anthropic products.

It follows a directive from president Donald Trump ordering US agencies to “phase out” their use of the AI company’s products, and his defence department labelling Anthropic a “supply-chain risk”, a label normally reserved for suppliers from non-friendly foreign states. Anthropic has been quick to call this a “legally unsound” designation and is expected to challenge the move in the courts.

Reuters is also reporting that it has seen memos to employees at the Department of Health and Human Services asking them to switch to other AI platforms such as ChatGPT and Gemini, and at the State Department saying it was switching the model powering its in-house chatbot, StateChat, from Anthropic to OpenAI.

Financially, it will surely deal a serious blow to Anthropic in the short term, but some commentators are arguing that it could be a pivotal moment for the company, which many may now see as the relatively ethical choice among the AI giants.

The recent Grok scandal has put a major question mark over xAI’s credentials, and OpenAI’s Sam Altman clearly sees the reputational risk: he has been quick to claim that OpenAI is ensuring some guardrails in its contract with the Pentagon.

On X yesterday, Altman claimed that these guardrails would ensure OpenAI would not be “intentionally used for domestic surveillance of US persons and nationals”.

The backstory

If you haven’t been following, Anthropic drew the ire of the US administration after a standoff with the Pentagon, in which Anthropic refused to change its safeguards against using its AI for fully autonomous weapons or for mass surveillance of US citizens.

On Thursday (February 27), Anthropic’s Dario Amodei released an official statement saying that in “a narrow set of cases, we believe AI can undermine, rather than defend, democratic values”.

“Some uses are also simply outside the bounds of what today’s technology can safely and reliably do,” he said. “Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included.

“We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values.”

Amodei went on to say that partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. “But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk.”

It’s a debacle that is likely to roll on in coming days, and it remains to be seen whether Anthropic can withstand the unprecedented onslaught from its own government and rely on the support of users for its principled stand. In the short term, its challenge appears to be to meet the current demand on its systems.


iPhone Coruna virus: possible US government hacking toolset spreading via black market


If your iPhone is running an outdated version of iOS, it may have 23 vulnerabilities that can be exploited by a highly sophisticated toolkit being sold to bad actors.

Update to iOS 26 to avoid a sophisticated hacking toolkit

It is well known that law enforcement agencies and government entities rely on hardware like GrayKey to attempt a bypass of iPhone security. It seems that the United States government may have created a monstrous exploit tool that is now being sold and spread to bad actors.

A Wired report details data shared by Google’s Threat Intelligence Group and iVerify. Google explains how the exploit toolkit, named “Coruna,” spread, while iVerify shared its findings tying its origins to the US government.


SoftBank credit outlook hit after betting $30bn more on OpenAI


S&P research finds OpenAI to be one of SoftBank’s investments with the ‘weakest’ credit quality.

OpenAI is making SoftBank’s investment portfolio look bad, according to ratings agency S&P Global, which lowered the Japanese investment firm’s outlook from ‘stable’ to ‘negative’, with a long-term issuer credit rating of ‘BB+’.

SoftBank is making massive bets on OpenAI, having already invested $30bn in the world’s largest private company as of last year. It is now gearing up to pour another $30bn into OpenAI over the course of the year.

With the new investment, S&P figures that OpenAI will represent 30pc of SoftBank’s investment assets – the same as its investments in Arm. And after the additional investment, SoftBank’s investment portfolio will likely exceed $320bn, making it one of the largest in the world.

However, S&P’s evaluation found OpenAI to be one of SoftBank’s investments with the “weakest” credit quality. The Japanese firm’s AI investments largely involve start-ups and private companies, including SambaNova, Wayve and ABB Robotics, which S&P said exposes SoftBank to “significant AI innovation risk”.

These kinds of investments could weaken SoftBank’s negotiating strength, S&P found, while the additional investment in OpenAI could also worsen the company’s loan-to-value (LTV) ratio.

Last November, SoftBank sold off all of its shares in Nvidia, a stake worth over $5bn. At the time, company chief financial officer Yoshimitsu Goto reiterated SoftBank’s belief in OpenAI and Arm, commenting: “OpenAI is one of our key growth drivers. Together, Arm and OpenAI are powering SoftBank Group toward our goal of becoming the number one platform provider for the artificial superintelligence era.”

An OpenAI initial public offering would be a much-needed boost for SoftBank’s investment portfolio, according to S&P, which also concluded that SoftBank will need to sell assets and holdings to improve its LTV ratio.

“The negative outlook reflects our view that SoftBank Group’s large follow-on investment in OpenAI means it will take longer than we had assumed for the company to restore the liquidity and quality of its investment assets,” S&P said.

“The company may take measures to ease its financial burden, such as selling assets, but we believe the timing and scale of those measures remain uncertain.”


Jolla Sailfish pitches a "European phone" for users wary of Google and Apple

Published

on


Jolla’s return to the smartphone market follows a turbulent decade during which the company nearly collapsed, pivoted to licensing its Sailfish OS platform, severed business ties with Russia after the invasion of Ukraine, and later reorganized under the new corporate structure Jollyboys. The reset produced a device assembled in Salo,…

GPT-5.3 Instant cuts hallucinations by 26.8% as OpenAI shifts focus from speed to accuracy


OpenAI’s GPT-5.3 Instant — the company’s most widely used model — reduces hallucinations by up to 26.8% compared to its predecessor, prioritizing accuracy and conversational reliability over raw performance gains, OpenAI says.

GPT-5.3 Instant, the default and most used model for ChatGPT users, also improves on tone, relevance and conversational flow, with fewer refusals. It is available both in ChatGPT and via the API.

Right now, only the Instant model has been upgraded to 5.3, but the company said it is working on updating ChatGPT’s other models, Thinking and Pro, to 5.3 “soon.”

GPT-5.3 Instant cuts hallucinations by up to 26.8%

OpenAI ran two internal evaluations: one across higher-stakes domains including medicine, finance, and law; the other drawing on user feedback.

Based on higher-stakes evaluations conducted by the company, GPT-5.3 Instant reduces hallucinations by 26.8% when using the web. It improves reliability by 19.7% when relying on its internal knowledge. User feedback showed a 22.5% decrease in hallucinations when answering queries using web search. 

The company said GPT-5.3 Instant is more reliable because it improved how it balances information from the internet with its own internal training and reasoning. 

“More broadly, GPT-5.3 Instant is less likely to overindex on web results, which previously could lead to long lists of links or loosely connected information. It does a stronger job of recognizing the subtext of questions and surfacing the most important information, especially upfront, resulting in answers that are more relevant and immediately usable, without sacrificing speed or tone,” the company said. 

An example OpenAI gave is when a user asks about the biggest signing in Major League Baseball and its impact. The previous model, GPT-5.2, often defaulted to summarizing search results.

Accuracy overtakes performance as OpenAI’s selling point

With this new release, arriving first on its most used model, OpenAI wants enterprise customers and other ChatGPT users to understand that the battlefront is not just how performant a model is, but how well it adheres to actual information. Instead of focusing on performance metrics such as speed and token savings, the company is leaning into GPT-5.3 Instant’s reliability.

Competitors such as Google and Anthropic also tout greater accuracy in their new models. Anthropic said its new Claude Sonnet 4.6 has fewer hallucinations, while Google was forced to pull its Gemma 3 model after it hallucinated false information about a lawmaker. 

GPT-5.3 Instant dials back refusals and “cringe” tone

“This update focuses on the parts of the ChatGPT experience people feel every day: tone, relevance, and conversational flow. These are nuanced problems that don’t always show up in benchmarks, but shape whether ChatGPT feels helpful or frustrating. GPT-5.3 Instant directly reflects user feedback in these areas,” OpenAI said in a blog post.

GPT-5.3 Instant has a more natural conversation style, moving away from what OpenAI claimed was a “cringe” tone that came across as overbearing and made assumptions about user intent. The company noted that it will ensure the chat platform’s personality is more consistent across updates so users will not experience a tonal shift when conversing with the model.

The new model significantly reduces refusals. OpenAI said the previous model would often refuse to answer questions even when they did not violate any guardrails, or would answer "in ways that feel overly cautious or preachy, particularly around sensitive topics."

The company promises that GPT-5.3 will not do the same and will tone down “overly defensive or moralizing preambles.” This means the model will answer directly, without caveats, so users do not end conversations without a response to their query. 

Despite this, GPT-5.3 Instant still faces some limitations, especially in some languages like Korean and Japanese, where the answers still sound stilted. 

Safety card shows regressions in sexual content and self-harm categories

The new model does not have support for adult content, according to an OpenAI spokesperson in an email to VentureBeat, as the company is still figuring out “how to maximize user freedom while maintaining our high safety bar.” OpenAI does not have a timeline for when it will release that functionality.

OpenAI conducted safety benchmarking on the new model, noting on its safety card that, while it performed well against disallowed content, it still did not match the level of GPT-5.2 Instant. However, OpenAI noted these results could change after launch.

“GPT-5.3 Instant shows regressions relative to GPT-5.2 Instant and GPT-5.1 Instant for disallowed sexual content, and relative to GPT-5.2 Instant for self-harm on both standard and dynamic evaluations,” the company said.

In other categories, OpenAI said the model performs on par with or better than previous releases, and noted the regressions for graphic violence and violent illicit behavior have low statistical significance.

Expect a new model soon?

After announcing GPT-5.3 Instant and noting that updates for the Thinking and Pro models are coming soon, OpenAI teased that even this new model could soon be replaced.

In a post on X, OpenAI said GPT-5.4 is coming “sooner than you think.”

OpenAI did not elaborate on what changes, if any, we can expect with GPT-5.4 and which modes will get it first. 

GPT-5.2 Instant, the predecessor model, will remain available on the ChatGPT model picker until June 3, when it will be retired.

Facebook accounts unavailable in worldwide outage

This story was updated after the outage was resolved.

Social media giant Facebook suffered a worldwide outage that prevented users from accessing their accounts.

When visiting the site, users were greeted with a message stating their account was temporarily unavailable.

“Your account is currently unavailable due to a site issue. We expect this to be resolved shortly. Please try again in a few minutes,” reads the outage message.

Facebook outage message stating the user's account is unavailable (Source: BleepingComputer)

According to DownDetector, the outage began around 4:15 PM ET and was impacting accounts worldwide.

However, Meta's status page claimed only "High Disruptions" to the Facebook ad manager, Instagram Boost, and the WhatsApp Business API.

BleepingComputer contacted Facebook with questions about the outage and will update the story if we hear back.

Update 6:21 PM ET: The Facebook outage has now been resolved, with users once again able to access their accounts.

However, Facebook has yet to provide any information as to what caused the outage.


Endor Labs launches free tool AURI after study finds only 10% of AI-generated code is secure

Endor Labs, the application security startup backed by more than $208 million in venture funding, today launched AURI, a platform that embeds real-time security intelligence directly into the AI coding tools that are reshaping how software gets built. The product is available free to individual developers and integrates natively with popular AI coding assistants including Cursor, Claude, and Augment through the Model Context Protocol (MCP).

The announcement arrives against a sobering backdrop. While 90% of development teams now use AI coding assistants, research published in December by Carnegie Mellon University, Columbia University, and Johns Hopkins University found that leading models produce functionally correct code only about 61% of the time — and just 10% of that output is both functional and secure.

“Even though AI can now produce functionally correct code 61% of the time, only 10% of that output is both functional and secure,” Endor Labs CEO Varun Badhwar told VentureBeat in an exclusive interview. “These coding agents were trained on open source code from across the internet, so they’ve learned best practices — but they’ve also learned to replicate a lot of the same security problems of the past.”

That gap between code that works and code that is safe defines the market AURI is designed to capture — and the urgency behind its launch.

The security crisis hiding inside the AI coding revolution

To understand why Endor Labs built AURI, it helps to understand the structural problem at the heart of AI-assisted software development. AI coding models are trained on vast repositories of open-source code scraped from across the internet — code that includes not only best practices but also well-documented vulnerabilities, insecure patterns, and flaws that may not be discovered for years after the code was originally written.

Badhwar, a repeat cybersecurity entrepreneur who previously built RedLock (acquired by Palo Alto Networks), founded Endor Labs four years ago with Dimitri Stiliadis. The original thesis was straightforward: developers were becoming “software assemblers,” writing less original code and importing most components from open source repositories. Then came the explosion of AI-powered coding tools, which Badhwar described as “the once in a generation opportunity of how to rewrite software development life cycle powered by AI.”

The productivity gains are real — more efficiency, faster time to market, and the democratization of software creation beyond trained engineers. But the security consequences are potentially devastating. New vulnerabilities are discovered every day in code that may have been written a decade ago, and that constantly evolving threat intelligence is not easily available to the AI models generating new code.

“Every day, every hour, new vulnerabilities are found in software that might have been written 5, 10, 12 years ago — and that information isn’t easily available to the models,” Badhwar explained. “If you started filtering out anything that ever had a vulnerability, you’d have no code left to train on.”

The result is a feedback loop: AI tools generate code at unprecedented speed, much of it modeled on insecure patterns, and security teams scramble to keep up. Traditional scanning tools, designed for a world where humans wrote and reviewed code at human speed, are increasingly overmatched.

How AURI traces vulnerabilities through every layer of an application

AURI’s core technical differentiator is what Endor Labs calls its “code context graph” — a deep, function-level map of how an application’s first-party code, open source dependencies, container layers, and AI models interconnect. Where competitors like Snyk and GitHub’s Dependabot examine what libraries an application imports and cross-reference them against known vulnerability databases, Endor Labs traces exactly how and where those components are actually used, down to the individual line of code.

“We have this code intelligence graph that understands not just what libraries and dependencies you use, but pinpoints exactly how, where, and in what context they’re used — down to the specific line of code where you’re calling a piece of functionality that has a vulnerability,” Badhwar said.

He illustrated the difference with a concrete example. A developer might import a large library like an AWS SDK but only call two services comprising 10 lines of code. The remaining 99,000 lines in that open source library are unreachable by the application. Traditional tools flag every known vulnerability across the entire library. AURI’s full-stack reachability analysis trims those irrelevant findings away.
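The filtering Badhwar describes can be pictured as plain graph reachability: build a call graph, walk it from the application's entry points, and keep only the findings whose vulnerable function the app can actually reach. The Python sketch below is a deliberately simplified illustration with made-up function and CVE names; real tools such as AURI operate on compiled call graphs and dataflow analysis, not hand-written dictionaries.

```python
from collections import deque

def reachable_functions(call_graph, entry_points):
    """BFS over a caller -> callees adjacency map to find every function
    the application can actually reach from its entry points."""
    seen = set(entry_points)
    queue = deque(entry_points)
    while queue:
        fn = queue.popleft()
        for callee in call_graph.get(fn, ()):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

def reachable_findings(findings, call_graph, entry_points):
    """Keep only vulnerability findings whose affected function is reachable."""
    reachable = reachable_functions(call_graph, entry_points)
    return [f for f in findings if f["function"] in reachable]

# Toy example: the app calls only a sliver of a large SDK.
call_graph = {
    "app.main": ["sdk.s3.get_object", "sdk.sqs.send_message"],
    "sdk.s3.get_object": ["sdk.http.request"],
    "sdk.legacy.parse_xml": ["sdk.http.request"],  # never called by the app
}
findings = [
    {"cve": "CVE-A", "function": "sdk.http.request"},      # reachable
    {"cve": "CVE-B", "function": "sdk.legacy.parse_xml"},  # unreachable
]
print(reachable_findings(findings, call_graph, ["app.main"]))
# Only CVE-A survives the filter.
```

In this toy setup, the finding in the never-called `sdk.legacy.parse_xml` is dropped, which is the same intuition behind trimming findings in the unreachable 99,000 lines of an imported SDK.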

Building that capability required significant investment. Endor Labs hired 13 PhDs specializing in program analysis, many of whom previously built similar technology internally at companies like Meta, GitHub, and Microsoft. The company has indexed billions of functions across millions of open source packages and created over half a billion embeddings to identify the provenance of copied code, even when function names or structures have been changed.

The platform combines this deterministic analysis with agentic AI reasoning. Specialized agents work together to detect, triage, and remediate vulnerabilities automatically, while multi-file call graphs and dataflow analysis detect complex business logic flaws that span multiple components. The result, according to Endor Labs, is an average 80% to 95% reduction in security findings for enterprise customers — trimming away what Badhwar called “tens of millions of dollars a year in developer productivity” lost to investigating false positives.

A free tier for developers, a paid platform for the enterprise

In a strategic move aimed at rapid adoption, Endor Labs is offering AURI's core functionality free to individual developers through an MCP server that integrates directly with popular IDEs including VS Code, Cursor, and Windsurf. The free tier requires no credit card and no sign-up or registration process.

“The idea is that there’s no policy, no administration, no customization. It just helps your code generation tools stop creating more vulnerabilities,” Badhwar said.

Privacy-conscious developers will note a key architectural choice: the free product runs entirely on the developer’s machine. Only non-proprietary vulnerability intelligence is pulled from Endor Labs’ servers. “All of your code stays local and is scanned locally. It never gets copied into AURI or Endor Labs or anything else,” Badhwar explained.

The enterprise version adds the features large organizations need: full customization, policy configuration, role-based access control for teams of thousands of developers, and integration across CI/CD pipelines. Enterprise pricing is based on the number of developers and the volume of scans. Deployment options include local scanning, ephemeral cloud containers, and on-premises Kubernetes clusters with full tenant isolation — flexibility Badhwar said is “the most any vendor offers in this space.”

The freemium approach mirrors the playbook that worked for developer tools companies like GitHub and Atlassian: win individual developers first, then expand into their organizations. But it also reflects a practical reality. In a world where AI coding agents are proliferating across every team, Endor Labs needs to be wherever code is being written — not waiting behind a procurement process.

“Over 97% of vulnerabilities flagged by our previous tool weren’t reachable in our application,” said Travis McPeak, Security at Cursor, in a statement sent to VentureBeat. “AURI by Endor Labs shows the few vulnerabilities that are impactful, so we patch quickly, focusing on what matters.”

Why Endor Labs says independence from AI coding tools is essential

The application security market is increasingly crowded. Snyk, GitHub Advanced Security, and a growing number of startups all compete for developer attention. Even the AI model providers themselves are entering the fray: Anthropic recently announced a code security product built into Claude, a move that sent ripples through the market.

Badhwar, however, framed Anthropic’s announcement as validation rather than threat. “That’s one of the biggest validations of what we do, because it says code security is one of the hottest problems in the market,” he told VentureBeat. The deeper question, he argued, is whether enterprises want to trust the same tool generating code to also review it.

“Claude is not going to be the only tool you use for agentic coding. Are you going to use a separate security product for Cursor, a separate one for Claude, a separate one for Augment, and another for Gemini Code Assist?” Badhwar said. “Do you want to trust the same tool that’s creating the software to also review it? There’s a reason we’ve always had reviewers who are different from the developers.”

He outlined three principles he believes will define effective security in the agentic era: independence (security review must be separate from the tool that generated the code), reproducibility (findings must be consistent, not probabilistic), and verifiability (every finding must be backed by evidence). It is a direct challenge to purely LLM-based approaches, which Badhwar characterized as “completely non-deterministic tools that you have no control over in terms of having verifiability of findings, consistency.”

AURI’s approach combines LLMs for what they do best — reasoning, explanation, and contextualization — with deterministic tools that provide the consistency enterprises require. Beyond detection, the platform simulates upgrade paths and tells developers which remediation route will work without introducing breaking changes, a step beyond what most competitors offer. Developers can then execute those fixes themselves or route them to AI coding agents with confidence that the changes have been deterministically validated.
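The upgrade-path simulation described above can be reduced, very roughly, to a constraint search: among newer releases of a dependency, pick the smallest one that is neither known-vulnerable nor flagged as a breaking upgrade. The Python sketch below uses hypothetical version tuples and hand-labeled vulnerability and breakage sets; it illustrates the idea, not AURI's actual algorithm.

```python
def pick_fix_version(current, releases, vulnerable, breaking):
    """Return the lowest release newer than `current` that is not in the
    known-vulnerable set and not flagged as a breaking upgrade; None if
    no safe, non-breaking upgrade exists."""
    for version in sorted(r for r in releases if r > current):
        if version in vulnerable or version in breaking:
            continue
        return version
    return None

# Hypothetical data: versions as (major, minor, patch) tuples.
current = (2, 4, 1)
releases = [(2, 4, 2), (2, 5, 0), (3, 0, 0)]
vulnerable = {(2, 4, 2)}   # a newer release that is itself vulnerable
breaking = {(3, 0, 0)}     # a major bump with known API breaks
print(pick_fix_version(current, releases, vulnerable, breaking))
# -> (2, 5, 0): the smallest safe, non-breaking upgrade
```

A developer, or an AI coding agent handed this result, gets a single validated target version instead of a raw list of CVEs, which is the "deterministically validated" handoff the article describes.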

Real-world results show AURI can already find zero-day vulnerabilities

Endor Labs has already demonstrated AURI’s capabilities in high-profile scenarios. In February 2026, the company announced that AURI had identified and validated seven security vulnerabilities in OpenClaw, the popular agentic AI assistant, which were later acknowledged by the OpenClaw development team. As reported by Infosecurity Magazine, OpenClaw subsequently patched six of the vulnerabilities, which ranged from high-severity server-side request forgery bugs to path traversal and authentication bypass flaws.

“These are zero days. They’ve never been found, but AURI did an incredible job of finding those,” Badhwar said. The company has also been detecting active malware campaigns in ecosystems like NPM, including tracking campaigns like Shai-Hulud for several months.

The company is well-capitalized to sustain its push. Endor Labs closed an oversubscribed $93 million Series B round in April 2025 led by DFJ Growth, with participation from Salesforce Ventures, Lightspeed Venture Partners, Coatue, Dell Technologies Capital, Section 32, and Citi Ventures. The company reported 30x annual recurring revenue growth and 166% net revenue retention since its Series A just 18 months earlier. Its platform now protects more than 5 million applications and runs over 1 million scans each week for customers including OpenAI, Cursor, Dropbox, Atlassian, Snowflake, and Robinhood.

Several dozen enterprise customers already use Endor Labs to accelerate compliance with frameworks including FedRAMP, NIST standards, and the European Cyber Resilience Act — a growing priority as regulators increasingly treat software supply chain security as a matter of national security.

The bet that security can keep pace with autonomous software agents

The broader question hanging over AURI’s launch — and over the application security industry as a whole — is whether security tooling can evolve fast enough to match the pace of AI-driven development. Critics of agentic security warn that the industry is moving too quickly, granting AI agents permissions across critical systems without fully understanding the risks. Badhwar acknowledged the concern but argued that resistance is futile.

“I’ve seen this play out when I was building cloud security products, and people were fearful of moving to AWS,” he said. “There was a perception of control when it was in your data center. Yet, guess what? That was the biggest movement of its time, and we as an industry built the right technology and security tooling and visibility around it to make ourselves comfortable.”

For Badhwar, the most exciting implication of agentic development is not the new risks it creates but the old problems it can finally solve. Security teams have spent decades struggling to get developers to prioritize fixing vulnerabilities over building features. AI agents, he argued, do not have that problem — if you give them the right instructions and the right intelligence, they simply execute.

“Security has always struggled for lack of a developer’s attention,” Badhwar said. “But we think you can get an AI agent that’s writing software’s attention by giving them the right context, integrating into the right workflows, and just having them do the right thing for you, so you don’t take an automation opportunity and make it a human’s problem.”

It is a characteristically optimistic framing from a founder who has built his career at the intersection of tectonic technology shifts and the security gaps they leave behind. Whether AURI can deliver on that vision at the scale the AI coding revolution demands remains to be seen. But in a world where machines are writing code faster than humans can review it, the alternative — hoping the models get security right on their own — is a bet few enterprises can afford to make.
