
Trump Administration Using Gross Video Game Footage To Cheerlead Its War Efforts


from the war-is-not-a-game dept

We should all know by now that this iteration of the Trump administration absolutely loves using pop culture imagery, including that of video games, to help message its horrible policies. Want to gloat about ICE terrorizing American cities and generally pissing everyone off when they’re not too busy perforating innocents? Let’s use images from Pokémon and Halo! Want to celebrate the destruction of American health thanks to RFK Jr. being in charge of it? Time to whip together a Stardew Valley meme!

It’s gross, of course. Wrapping these pop culture images around fascism, particularly where real deaths have been a result, is nauseating.

But if you want to make this absolutely as disgusting as possible, you need only to use video game footage to gloat about the body count America is racking up in its war/non-war with Iran.

On March 4, the official White House Twitter account posted a roughly one-minute-long video featuring numerous clips of real military strikes against different Iranian locations and targets. At the very start of the video is a clip from 2023’s Modern Warfare III that shows a player activating an MGB killstreak. This is a hidden killstreak for players who get 30 kills without dying. Once called, the bomb ends the match. The official video was posted with the caption: “Courtesy of the Red, White & Blue.”

This is disgusting. Using video game footage to gloat about the Iranian body count is simply sick. Set aside what you think about this war. Set aside whether you think this administration has any fucking clue what it is doing and what will come next once it’s done dropping its bombs. Set aside the open question of what our goals actually are here, whether we’re going to see American troops on the ground in Iran, or whether this will end up as another American quagmire in the Middle East. None of that is the point here.

This isn’t a fucking game. It’s war, no matter how hard the president and the Republicans in Congress want to pretend otherwise so that they don’t have to do their damned jobs. War is a very serious matter, a sentence that never should need to be written in the first place. Eschewing that level of seriousness by treating this like it’s some kind of a video game and we’re all just trying to earn trophies and badges for our kill counts is fucking sick.


Particularly when you consider that this gloating covers over 1,200 Iranian deaths, including 175 schoolgirls who committed the crime of trying to learn.

IRAN: At least 1,230 people killed, including 175 schoolgirls and staff killed in a missile strike on a primary school in Minab in the country's south on the war's first day, according to the non-profit humanitarian group Iranian Red Crescent Society. It was unclear if the overall death toll included Islamic Revolutionary Guard Corps military casualties.

Here’s an image of the mass graves Iran says it dug in order to put all of those children to their final rest.

I wonder, are those girls included in the body count to get the White House its Xbox achievement?

War is not a game. Treating it like a video game shows that these are deeply unserious people who are not only running our government, but also prosecuting a war that they don't want to call a war. The naked cruelty of it all, rather than treating the enemy and, more importantly, the American people with respect, is horrifying.

That they’re doing it in our name, all the more so.


Filed Under: donald trump, iran, modern warfare, social media, trump administration, video games


Anthropic sues the US government over its Pentagon blacklist


The AI company filed two federal lawsuits on Monday, arguing the Trump administration’s ‘supply chain risk’ designation is unconstitutional retaliation for protected speech.

There is a phrase in Anthropic’s court filing that sets the tone for everything that follows: “Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive’s unlawful campaign of retaliation.” It is the language of a company that believes it is not simply fighting a contract dispute, but a constitutional one.

On Monday, the San Francisco-based AI company filed two federal lawsuits against the Trump administration, targeting the Pentagon’s decision last week to formally designate Anthropic a “supply chain risk to national security”, a label that has historically been reserved for companies tied to foreign adversaries such as China and Russia.

It is believed to be the first time the designation has been applied to an American company.


The first lawsuit was filed in the US District Court for the Northern District of California. It asks a judge to vacate the designation and grant an immediate stay while the case proceeds. A second, shorter suit was filed in the US Court of Appeals for the District of Columbia Circuit, targeting a separate statute the government invoked that can only be challenged in that jurisdiction.

Both cases make substantially the same argument: that the administration acted unlawfully, without proper statutory authority, and in violation of Anthropic’s First Amendment rights.

More than a dozen federal agencies are named as defendants, including the Department of Defence, the Treasury, the State Department, and the General Services Administration.

The legal action is the culmination of a two-week standoff that escalated with unusual speed into one of the more remarkable confrontations between a technology company and the US government in recent memory.

The dispute centres on two conditions Anthropic has insisted on in its contracts with the Pentagon: that its Claude AI system not be used for mass domestic surveillance of American citizens, and that it not be used to power fully autonomous weapons, systems capable of targeting and firing without human authorisation.


The Pentagon, which has been using Claude on classified networks since the company became the first AI lab to achieve that clearance, demanded that any renewed contract drop these restrictions and grant the military use of Claude for “all lawful purposes.” Anthropic refused.

What followed was a sequence of events that proceeded with striking speed. On 27 February, President Trump posted on Truth Social calling Anthropic a “radical left, woke company” and directing every federal agency to “immediately cease” all use of its technology.

Within hours, Defence Secretary Pete Hegseth announced on X that he was designating Anthropic a supply chain risk, meaning no contractor, supplier, or partner doing business with the US military could conduct any commercial activity with the company. The formal letter confirming the designation arrived on 3 March, five days after the deadline Anthropic had been given to agree to the Pentagon’s terms.

The practical scope of the designation turned out to be narrower than Hegseth's initial announcement implied. Anthropic CEO Dario Amodei said in a statement last Thursday that the relevant statute limits the designation's reach to the direct use of Claude in Pentagon contracts; it cannot, Amodei argued, be used to sever all commercial relationships between defence contractors and the company.


Microsoft, Google, and Amazon all reviewed the designation and reached the same conclusion, issuing statements confirming that Claude would remain available to their customers for work unrelated to defence contracts. Hegseth had explicitly said the opposite in his original post.

The economic stakes are nonetheless substantial. In declarations accompanying Monday’s filings, Anthropic executives laid out the damage in granular terms. Chief Financial Officer Krishna Rao warned the court that if the designation were allowed to stand and customers took a broad reading of its scope, it could reduce Anthropic’s 2026 revenue by “multiple billions of dollars”, an impact he described as “almost impossible to reverse.”

Chief Commercial Officer Paul Smith cited a specific example: one partner with a multi-million-dollar annual contract had already switched to a rival AI model, eliminating an anticipated revenue pipeline of more than $100 million; negotiations with financial institutions worth roughly $180 million combined had also been disrupted.

The complaint itself makes two distinct legal arguments. The first is a First Amendment claim: that the administration’s actions punish Anthropic for its public advocacy around AI safety, its position on autonomous weapons and domestic surveillance, which constitutes protected speech.


“The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” the filing states. The second argument challenges the statutory basis of the designation, invoking 10 USC 3252, the procurement law the Pentagon relied upon. Anthropic argues the statute requires the government to use “the least restrictive means” to protect the supply chain, not deploy it as a punitive instrument against a domestic company over a policy disagreement.

The Pentagon’s position is that the dispute is fundamentally about operational control rather than speech. Pentagon officials have argued that a private contractor cannot insert itself into the chain of command by restricting the lawful use of a critical capability, and that the military must retain full discretion over how it deploys technology in national security scenarios.

In an indication that the designation was not straightforwardly about security, a Pentagon official was quoted in Anthropic’s court filing as saying the government intended to “make sure they pay a price” for the company’s refusal, language Anthropic’s lawyers have flagged as evidence of improper motivation.

The case has drawn an unusual show of solidarity from Anthropic’s direct competitors. A group of 37 researchers and engineers from OpenAI and Google DeepMind, including Google’s chief scientist Jeff Dean, who signed in a personal capacity, filed an amicus brief on Monday supporting the lawsuit. 


The brief argues that the designation "chills professional debate" about AI risks and undermines American competitiveness. "By silencing one lab," the researchers wrote, "the government reduces the industry's potential to innovate solutions." The filing is notable given that OpenAI struck a new deal with the Pentagon within hours of the Trump administration's order, a move that drew sharp criticism from OpenAI employees and that OpenAI CEO Sam Altman later acknowledged looked "sloppy and opportunistic."

Legal observers have been sceptical that the designation will survive judicial scrutiny. Paul Scharre, a former Army Ranger and now executive vice president of the Center for a New American Security, told Breaking Defense that Hegseth's initial characterisation of the ban simply exceeded what the supply chain risk statute permits, and that even the narrower formal designation would likely struggle in court, given the law's requirement for the least restrictive means. Procurement laws passed by Congress, Anthropic argues in its filings, do not give the Pentagon or the president authority to blacklist a company over a policy disagreement.

A first hearing could take place in San Francisco as early as this Friday, according to reports. Anthropic has asked for a temporary order that would allow it to continue working with military contractors while the legal case unfolds. The DoD said it does not comment on litigation.

Among the contradictions the complaint highlights: the military reportedly continued to use Claude during active combat operations in Iran, after the ban had been announced. A six-month phaseout was also ordered simultaneously with an immediate prohibition. And the company retains active FedRAMP authorisation and facility and personnel security clearances that would ordinarily be incompatible with a national security risk finding. None of these inconsistencies have been publicly addressed by the government.


Whatever the court decides, the case has already set a precedent of a different kind: a major AI company, backed by researchers at its own rivals, publicly litigating the government’s right to weaponise procurement law against a domestic company for taking a public stance on how its technology should and should not be used. The outcome could determine, as Anthropic’s complaint puts it, whether any American company can “negotiate with the government” without risking its existence.


There Are No LEDs Around The Face Of This Clock


This unusual clock by [Moritz v. Sivers] looks like a holographic dial surrounded by an LED ring, but that turns out to not be the case. What appears to be a ring of LEDs is in fact a second hologram. There are LEDs but they are tucked out of the way, and not directly visible. The result is a very unusual clock that really isn’t what it appears to be.

The face of the clock is a reflection hologram of a numbered spiral that serves as a dial. A single LED – the only one visibly mounted – illuminates this hologram from the front in order to produce the sort of holographic image most of us are familiar with, creating a sense of depth.

The lights around the circumference are another matter. What looks like a ring of LEDs serving as clock hands is actually a transmission hologram made of sixty separate exposures. By illuminating this hologram at just the right angle with LEDs (which are mounted behind the visible area), it is possible to selectively address each of those sixty exposures. The result is something that really looks like there are lit LEDs where there are in fact none.
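To make the addressing scheme concrete, here is a minimal sketch, assuming a hypothetical controller in which each of the sixty exposures is paired with one hidden LED; `set_led()` is a stand-in for whatever LED driver the real build uses, so this illustrates the idea rather than [Moritz]'s actual firmware.

```python
from datetime import datetime

NUM_EXPOSURES = 60  # one transmission-hologram exposure per hidden LED position


def set_led(index: int, on: bool) -> None:
    """Hypothetical driver call: light (or extinguish) the hidden LED that
    illuminates exposure `index` at the correct angle."""
    print(f"LED {index:02d} -> {'on' if on else 'off'}")


def update_clock() -> None:
    now = datetime.now()
    minute_index = now.minute                            # minute hand: one exposure per minute
    hour_index = (now.hour % 12) * 5 + now.minute // 12  # hour hand creeps five positions per hour
    for i in range(NUM_EXPOSURES):
        set_led(i, i in (minute_index, hour_index))


if __name__ == "__main__":
    update_clock()
```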

[Moritz] actually made two clocks in this fashion. The larger green one shown here, and a smaller red version which makes some of the operating principles a bit more obvious on account of its simpler construction.


If it all sounds a bit wild or you would like to see it in action, check out the video (embedded below) which not only showcases the entire operation and assembly but also demonstrates the depth of planning and careful execution that goes into multi-exposure of a holographic plate.

[Moritz v. Sivers] is no stranger to making unusual clocks. In fact, this analog holographic clock is a direct successor to his holographic 7-segment display clock. And don’t miss the caustic clock, nor his lenticular clock.



U.S. broadband households pay for networks while high-traffic streaming and AI platforms contribute almost nothing to infrastructure costs



  • US households contribute monthly fees while platforms still impose substantial network infrastructure burdens
  • Broadband cost recovery does not reflect actual traffic or usage patterns
  • Heavy users in the electricity and airline sectors pay proportionally for demand

Broadband networks in the United States operate under a cost model that does not align with actual usage: households generate substantial revenue for major internet platforms while also contributing to the Universal Service Fund, which supports rural connectivity, schools, libraries, and healthcare facilities.

A typical US broadband household contributes roughly $9 per month to this fund, yet the largest traffic generators impose substantial infrastructure burdens without proportional contributions.


Nvidia is leveling up game visuals with the new DLSS 4.5 update


Nvidia has just announced DLSS 4.5. The update brings new AI-powered graphics technology, and the improvements have a noticeable impact on how modern PC games look and perform. It was revealed alongside other RTX announcements during GDC 2026, focusing on boosting both visual quality and frame rates in demanding titles.

DLSS (Deep Learning Super Sampling) has become a big part of Nvidia’s gaming ecosystem. It uses AI models running on RTX GPUs to reconstruct higher-resolution images and generate additional frames that allow games to run more smoothly without sacrificing visual fidelity. With DLSS 4.5, the technology is getting even better.

Smarter frame generation with DLSS 4.5

One of the notable new additions in DLSS 4.5 is Dynamic Multi Frame Generation, which automatically adjusts how many AI-generated frames are created during gameplay. Rather than sticking to a fixed multiplier, the system dynamically tweaks frame generation in real time to hit the target refresh rate. This approach lets compatible GPUs maintain smoother performance during demanding scenes while avoiding unnecessary frame generation when workloads drop.
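The idea can be sketched in a few lines. The heuristic below is a simplified illustration, not Nvidia's actual implementation: given how fast the game is traditionally rendering and the display's refresh rate, pick the smallest total-frames multiplier that reaches the target, capped at the 6X ceiling described below.

```python
import math


def pick_frame_gen_multiplier(rendered_fps: float,
                              target_refresh_hz: float,
                              max_multiplier: int = 6) -> int:
    """Total frames presented per traditionally rendered frame.

    1 means no generated frames; 6 means one rendered frame plus five
    AI-generated ones (the 6X mode). Hypothetical heuristic for
    illustration only, not Nvidia's algorithm.
    """
    if rendered_fps <= 0:
        return 1
    needed = target_refresh_hz / rendered_fps
    # Round up so the presented frame rate reaches the target, cap at the
    # ceiling, and avoid generating frames when the GPU is already fast enough.
    return max(1, min(max_multiplier, math.ceil(needed)))


# Example: a path-traced scene rendering at 38 fps on a 144 Hz display
# -> ceil(144 / 38) = 4, i.e. three generated frames per rendered frame.
print(pick_frame_gen_multiplier(38, 144))   # 4
print(pick_frame_gen_multiplier(120, 144))  # 2
print(pick_frame_gen_multiplier(200, 144))  # 1
```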

DLSS 4.5 also introduces 6X Multi Frame Generation, which can generate up to five additional frames for every traditionally rendered frame. The result is significantly smoother gameplay, particularly in high-fidelity titles that use advanced rendering techniques like path tracing.

AI upgrades for sharper visuals

Aside from the performance improvements, DLSS 4.5 also upgrades Nvidia’s Super Resolution technology using a second-generation transformer AI model. It is designed to improve image clarity by reducing artifacts such as ghosting, shimmering, and jagged edges in motion-heavy scenes.

Coming to RTX 50 series GPUs soon

Nvidia has confirmed DLSS 4.5 features like Dynamic Multi Frame Generation and the 6X mode will roll out starting March 31 through the Nvidia app. It will debut first on GeForce RTX 50-series GPUs and will be supported in around 20 games, including 007 First Light and Control Resonant.



Anthropic and OpenAI just exposed SAST’s structural blind spot with free tools


OpenAI launched Codex Security on March 6, entering the application security market that Anthropic had disrupted 14 days earlier with Claude Code Security. Both scanners use LLM reasoning instead of pattern matching. Both proved that traditional static application security testing (SAST) tools are structurally blind to entire vulnerability classes. The enterprise security stack is caught in the middle.

Anthropic and OpenAI independently released reasoning-based vulnerability scanners, and both found bug classes that pattern-matching SAST was never designed to detect. The competitive pressure between two labs with a combined private-market valuation exceeding $1.1 trillion means detection quality will improve faster than any single vendor can deliver alone.

Neither Claude Code Security nor Codex Security replaces your existing stack. Both tools change procurement math permanently. Right now, both are free to enterprise customers. The head-to-head comparison and seven actions below are what you need before the board of directors asks which scanner you are piloting and why.

How Anthropic and OpenAI reached the same conclusion from different architectures

Anthropic published its zero-day research on February 5 alongside the release of Claude Opus 4.6. Anthropic said Claude Opus 4.6 found more than 500 previously unknown high-severity vulnerabilities in production open-source codebases that had survived decades of expert review and millions of hours of fuzzing.


In the CGIF library, Claude discovered a heap buffer overflow by reasoning about the LZW compression algorithm, a flaw that coverage-guided fuzzing could not catch even with 100% code coverage. Anthropic shipped Claude Code Security as a limited research preview on February 20, available to Enterprise and Team customers, with free expedited access for open-source maintainers. Gabby Curtis, Anthropic’s communications lead, told VentureBeat in an exclusive interview that Anthropic built Claude Code Security to make defensive capabilities more widely available.
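To make the vulnerability class concrete, here is a deliberately simplified, hypothetical LZW-style decoder, not the actual CGIF code or its bug, showing the kind of reasoning involved: the code table grows with attacker-controlled input, and everything hinges on the single bounds check marked below. In Python the missing check would surface as an IndexError; in a C decoder that heap-allocates a fixed-size table, the equivalent unchecked write is a heap buffer overflow of the shape described above, and the flaw is invisible to line-level pattern matching because it lives in how the table-growth logic interacts with the input stream.

```python
MAX_CODES = 4096      # 12-bit LZW: the code table must never exceed this many entries
CLEAR_CODE = 256
END_CODE = 257


def decode_lzw(codes: list[int]) -> bytes:
    """Toy LZW decoder (hypothetical, for illustration only).

    The table is pre-sized to MAX_CODES entries, mirroring a C decoder that
    heap-allocates a fixed array up front. Every new code appends an entry,
    and the input stream controls how many appends happen.
    """
    table = [bytes([i]) for i in range(256)] + [b"", b""]  # literals + clear/end placeholders
    table += [b""] * (MAX_CODES - len(table))              # fixed-size table, like a malloc'd array in C
    next_code = 258
    prev = None
    out = bytearray()

    for code in codes:
        if code in (CLEAR_CODE, END_CODE):
            next_code, prev = 258, None
            continue
        entry = table[code] if code < next_code else prev + prev[:1]  # KwKwK case
        out += entry
        if prev is not None:
            # The critical check: drop `next_code < MAX_CODES` and a long enough
            # input pushes next_code past the table -- an IndexError here, but a
            # heap buffer overflow in the equivalent C write `table[next_code] = ...`.
            if next_code < MAX_CODES:
                table[next_code] = prev + entry[:1]
                next_code += 1
        prev = entry
    return bytes(out)
```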

OpenAI’s numbers come from a different architecture and a wider scanning surface. Codex Security evolved from Aardvark, an internal tool powered by GPT-5 that entered private beta in 2025. During the Codex Security beta period, OpenAI’s agent scanned more than 1.2 million commits across external repositories, surfacing what OpenAI said were 792 critical findings and 10,561 high-severity findings. OpenAI reported vulnerabilities in OpenSSH, GnuTLS, GOGS, Thorium, libssh, PHP, and Chromium, resulting in 14 assigned CVEs. Codex Security’s false positive rates fell more than 50% across all repositories during beta, according to OpenAI. Over-reported severity dropped more than 90%.

Checkmarx Zero researchers demonstrated that moderately complicated vulnerabilities sometimes escaped Claude Code Security’s detection. Developers could trick the agent into ignoring vulnerable code. In a full production-grade codebase scan, Checkmarx Zero found that Claude identified eight vulnerabilities, but only two were true positives. If moderately complex obfuscation defeats the scanner, the detection ceiling is lower than the headline numbers suggest. Neither Anthropic nor OpenAI has submitted detection claims to an independent third-party audit. Security leaders should treat the reported numbers as indicative, not audited.

Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS, told VentureBeat that the competitive scanner race compresses the window for everyone. Baer advised security teams to prioritize patches based on exploitability in their runtime context rather than CVSS scores alone, shorten the window between discovery, triage, and patch, and maintain software bill of materials visibility so they know instantly where a vulnerable component runs.


Different methods, almost no overlap in the codebases they scanned, yet the same conclusion. Pattern-matching SAST has a ceiling, and LLM reasoning extends detection past it. When two competing labs distribute that capability at the same time, the dual-use math gets uncomfortable. Any financial institution or fintech running a commercial codebase should assume that if Claude Code Security and Codex Security can find these bugs, adversaries with API access can find them, too.

Baer put it bluntly: open-source vulnerabilities surfaced by reasoning models should be treated closer to zero-day class discoveries, not backlog items. The window between discovery and exploitation just compressed, and most vulnerability management programs are still triaging on CVSS alone.

What the vendor responses prove

Snyk, the developer security platform used by engineering teams to find and fix vulnerabilities in code and open-source dependencies, acknowledged the technical breakthrough but argued that finding vulnerabilities has never been the hard part; the bottleneck is fixing them at scale, across hundreds of repositories, without breaking anything. Snyk pointed to research showing AI-generated code is 2.74 times more likely to introduce security vulnerabilities compared to human-written code, according to Veracode's 2025 GenAI Code Security Report. The same models finding hundreds of zero-days also introduce new vulnerability classes when they write code.

Cycode CTO Ronen Slavin wrote that Claude Code Security represents a genuine technical advancement in static analysis, but that AI models are probabilistic by nature. Slavin argued that security teams need consistent, reproducible, audit-grade results, and that a scanning capability embedded in an IDE is useful but does not constitute infrastructure. Slavin’s position: SAST is one discipline within a much broader scope, and free scanning does not displace platforms that handle governance, pipeline integrity, and runtime behavior at enterprise scale.


“If code reasoning scanners from major AI labs are effectively free to enterprise customers, then static code scanning commoditizes overnight,” Baer told VentureBeat. Over the next 12 months, Baer expects the budget to move toward three areas.

  1. Runtime and exploitability layers, including runtime protection and attack path analysis.

  2. AI governance and model security, including guardrails, prompt injection defenses, and agent oversight.

  3. Remediation automation. “The net effect is that AppSec spending probably doesn’t shrink, but the center of gravity shifts away from traditional SAST licenses and toward tooling that shortens remediation cycles,” Baer said.

Seven things to do before your next board meeting

  1. Run both scanners against a representative codebase subset. Compare Claude Code Security and Codex Security findings against your existing SAST output. Start with a single representative repository, not your entire codebase. Both tools are in research preview with access constraints that make full-estate scanning premature. The delta is your blind spot inventory; a minimal comparison sketch follows this list.

  2. Build the governance framework before the pilot, not after. Baer told VentureBeat to treat either tool like a new data processor for the crown jewels, which is your source code. Baer’s governance model includes a formal data-processing agreement with clear statements on training exclusion, data retention, and subprocessor use, a segmented submission pipeline so only the repos you intend to scan are transmitted, and an internal classification policy that distinguishes code that can leave your boundary from code that cannot. In interviews with more than 40 CISOs, VentureBeat found that formal governance frameworks for reasoning-based scanning tools barely exist yet. Baer flagged derived IP as the blind spot most teams have not addressed. Can model providers retain embeddings or reasoning traces, and are those artifacts considered your intellectual property? The other gap is data residency for code, which historically was not regulated like customer data but increasingly falls under export control and national security review.

  3. Map what neither tool covers. Software composition analysis. Container scanning. Infrastructure-as-code. DAST. Runtime detection and response. Claude Code Security and Codex Security operate at the code-reasoning layer. Your existing stack handles everything else. That stack’s pricing power is what shifted.

  4. Quantify the dual-use exposure. Every zero-day Anthropic and OpenAI surfaced lives in an open-source project that enterprise applications depend on. Both labs are disclosing and patching responsibly, but the window between their discovery and your adoption of those patches is exactly where attackers operate. AI security startup AISLE independently discovered all 12 zero-day vulnerabilities in OpenSSL’s January 2026 security patch, including a stack buffer overflow (CVE-2025-15467) that is potentially remotely exploitable without valid key material. Fuzzers ran against OpenSSL for years and missed every one. Assume adversaries are running the same models against the same codebases.

  5. Prepare the board comparison before they ask. Claude Code Security reasons about code contextually, traces data flows, and uses multi-stage self-verification. Codex Security builds a project-specific threat model before scanning and validates findings in sandboxed environments. Each tool is in research preview and requires human approval before any patch is applied. The board needs side-by-side analysis, not a single-vendor pitch. When the conversation turns to why your existing suite missed what Anthropic found, Baer offered framing that works at the board level. Pattern-matching SAST solved a different generation of problems, Baer told VentureBeat. It was designed to detect known anti-patterns. That capability still matters and still reduces risk. But reasoning models can evaluate multi-file logic, state transitions, and developer intent, which is where many modern bugs live. Baer’s board-ready summary: “We bought the right tools for the threats of the last decade; the technology just advanced.”

  6. Track the competitive cycle. Both companies are heading toward IPOs, and enterprise security wins drive the growth narrative. When one scanner misses a blind spot, it lands on the other lab’s feature roadmap within weeks. Both labs ship model updates on monthly cycles. That cadence will outrun any single vendor’s release calendar. Baer said that running both is the right move: “Different models reason differently, and the delta between them can reveal bugs neither tool alone would consistently catch. In the short term, using both isn’t redundancy. It’s defense through diversity of reasoning systems.”

  7. Set a 30-day pilot window. Before February 20, this test did not exist. Run Claude Code Security and Codex Security against the same codebase and let the delta drive the procurement conversation with empirical data instead of vendor marketing. Thirty days gives you that data.
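As a starting point for the pilot in step 1 above, the delta comparison might look like the sketch below. It assumes each scanner's findings have been exported to JSON as a list of objects with "file", "rule", and "line" fields; the field names and file paths are placeholders, since neither tool's export format is specified here, so adjust the loader to whatever each tool actually emits.

```python
import json


def load_findings(path: str) -> set[tuple[str, str, int]]:
    """Reduce a scanner export to comparable (file, rule, line) keys.

    Assumes a hypothetical export format: a JSON list of objects with
    "file", "rule", and "line" fields.
    """
    with open(path) as f:
        return {(item["file"], item["rule"], item["line"]) for item in json.load(f)}


claude = load_findings("claude_code_security.json")   # placeholder paths
codex = load_findings("codex_security.json")
incumbent = load_findings("existing_sast.json")

reasoning_only = (claude | codex) - incumbent          # the blind-spot inventory
disagreement = claude ^ codex                          # where the two labs' scanners diverge

print(f"Findings the incumbent SAST missed: {len(reasoning_only)}")
print(f"Findings only one of the two new scanners reported: {len(disagreement)}")
```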

Fourteen days separated Anthropic and OpenAI. The gap between the next releases will be shorter. Attackers are watching the same calendar.


‘The cloud threat landscape is rapidly shifting’: Google research warns hackers are targeting third parties and software flaws to gain entry



  • Google report shows attackers shifting to software flaws over weak credentials
  • Vulnerabilities now account for 44.5% of cloud breaches, exploited within days
  • Third-party SaaS integrations increasingly abused for data theft and access

To break into cloud environments, cybercriminals are relying less on weak credentials and more on third-party software vulnerabilities, new research from Google has found.

The Cloud Threat Horizons Report claims that early in 2025, most compromises still relied on weak or missing credentials. However, in the second half of the year, attackers increasingly started exploiting vulnerabilities in externally managed software.



10 Driving Rules Other Countries Have To Follow That Don’t Exist In The US


When you get behind the wheel in the United States, you're accustomed to a specific set of traffic laws that have become second nature. For the most part, U.S. traffic laws leave personal responsibility for vehicle maintenance and common-sense etiquette as matters of civil politeness or local ordinance, rather than serious, federally backed violations. This familiar framework, however, is a rarity when traveling internationally, where such matters are often written directly into national law.

While Americans might scoff at the idea of a ticket for an unwashed car, or a potential jail sentence for riding sober next to a drunk friend, these are the real, punitive stakes involved in driving overseas. From Europe’s ancient city centers to Asia’s strict public decency standards, and the Middle East’s emphasis on immaculate urban aesthetics, drivers around the globe are held to a much higher and more diverse set of legal obligations than just keeping their eyes on the road.


The list below goes over unique international driving laws that American motorists rarely, if ever, encounter at home. What is merely considered good etiquette in the U.S. is often a non-negotiable legal requirement in other countries. If you’re planning to visit any, keep in mind that ignorance of the law is rarely a protection against steep fines, vehicle impoundment, or even criminal charges.


Driving shirtless in Thailand

When you picture a road trip through Thailand’s tropical landscapes, the heat and high humidity might immediately make you want to strip down to your most comfortable, minimal clothing. Hopping into a rental car or onto a scooter while wearing only a swimsuit or going bare-chested to beat the intense temperatures might seem tempting, but it could land you on the wrong side of local law enforcement. This isn’t as wild as many of the strangest US driving laws, but it is still strange.

The Motor Vehicle Act says that all motorists (in cars and on motorcycles) must wear a shirt while operating a vehicle. This applies to both locals and international visitors alike, meaning that regardless of how high the temperature climbs, keeping your torso covered is an absolute legal requirement when you’re behind the wheel or operating a bike on public roads.

These aren’t just in traffic rules; they affect other parts of the country’s legal system. So while traffic laws specifically require a shirt, this idea aligns with general public decency rules. The Thai Criminal Code talks about the act in its misdemeanors section. Section 388 strictly bans public obscenity. It says if you do something disgraceful in public, like undressing yourself, showing your exposed self, or any other obscene act, you could receive a fine of up to five hundred baht.


Splashing pedestrians in the United Kingdom

When you’re driving through a heavy downpour in the United States, you might occasionally, and sometimes carelessly, hit a large puddle near the curb, sending a wave of murky rainwater crashing over an unsuspecting pedestrian on the sidewalk. While this scenario is undoubtedly rude, incredibly frustrating for the victim, and generally frowned upon by society, splashing a pedestrian with a puddle isn’t a specific traffic violation in the US.

Across the pond in the United Kingdom, the law takes a much stricter and more punitive approach to this same situation. If you’re planning a road trip through Britain, you must be acutely aware of how you navigate wet roads, because the country has codified this specific breach of etiquette into its legal framework. Under the regulations set forth in the UK, this act is a highly punishable offense.


Specifically, Section 3 of the Road Traffic Act 1988 classifies driving through a puddle to splash a pedestrian as "driving without reasonable consideration," resulting in fines or points. According to the detailed provisions in the legislation, a motorist is formally regarded as in violation of this rule "only if those persons are inconvenienced by their driving." Being drenched by a tidal wave of dirty street water is clearly recognized as a significant inconvenience under this definition.


Drunk passengers sitting in the front seat in North Macedonia

The idea of a designated driver is a pretty big part of what happens when you go out with friends. When it’s time to head home after having some alcoholic drinks, it is common for drunk friends to just hop in the car, often with one of the most inebriated individuals claiming the front passenger seat. Legally in the US, a drunk passenger can totally ride up front right next to the sober driver, and even drink in some states.

If you happen to be enjoying a night out while traveling in North Macedonia, you’ll need to totally rethink how your group sits before anyone gets into a car. The country’s national traffic safety laws are much stricter and focus more on preventing issues when it comes to where intoxicated people sit inside a moving vehicle. Under its legal framework, anyone who looks like they’re under the influence of alcohol is strictly and legally not allowed to sit in the front passenger seat.

This strict rule is intended to prevent accidents and keep roads safe. A drunk person can be pretty unpredictable, and having them right by the steering wheel, the emergency brake, or even the driver themselves creates an extra risk of sudden distraction or accidental interference.


Always have headlights on in Sweden

When you’re driving back home in America, you’ll find the rules for car lights are pretty standard and easy to understand. The US only says you need to use your headlights at night, or when you can’t see well because of fog or snowstorms. However, if you’re planning a road trip overseas, you’ll have to quickly adjust your driving to comply with strict local laws that leave nothing to chance.

In Sweden, for instance, traffic rules say your car’s low-beam headlights must be on 24 hours a day, every single day of the year, even when it’s sunny. This is a core part of the Swedish Traffic Ordinance, which sets strict requirements for car lighting to make sure you’re as visible and safe as possible, always. Specifically, the ordinance states that when you’re driving on a road, your car must use its primary beams continuously.


While the law does give you a little flexibility during the day (like letting you use specific daytime running lights instead of regular low beams, which then means your rear and side lights don’t strictly have to be on), some type of approved front light is always legally required whenever your car is running.


Leave your car locked in Australia

When you park your personal vehicle on a public street in the United States, it just makes sense to roll up your windows, take your keys, and lock your doors before you walk away. However, if you choose to be careless and leave your car completely unlocked with the windows rolled down, no traffic law in the US will penalize you for simply failing to secure your own property. 

Things are different in New South Wales (NSW), Australia, because securing your vehicle isn't just a strong recommendation there; it's a strict legal requirement under Regulation 213 of the Road Rules 2014. In Australia, specifically in NSW, you're legally obligated to close your windows if you plan to walk more than three meters (about ten feet) away from the closest part of the car.

If there’s no one left inside the vehicle, the law mandates that you must immediately lock the doors after stepping away, and you also need to secure the windows right before you leave. The regulations go even further to prevent an unattended vehicle from being easily stolen or joyridden by mandating that, if you’re going to be away for a while, you must switch off the engine.


Sober passengers are not completely safe in Japan

In the United States, getting into a car with an intoxicated driver is definitely a dangerous and foolish decision. However, from a strictly legal perspective, law enforcement focuses entirely on the driver. If a vehicle is pulled over and the driver is found to be over the legal limit, the driver will be arrested. It is even illegal in some states to have an open container.

If you’re traveling to Japan, you should know that the legal situation for driving under the influence is drastically different and goes way beyond just the driver. Under the Japanese Road Traffic Act, the country takes a really aggressive, zero-tolerance approach to drunk driving. It actively criminalizes anyone who is complicit in that violation. Specifically, Japanese law strictly forbids anyone from asking for, relying on, or even getting into a vehicle to be driven by someone they know is under the influence of alcohol.


This means if you’re a sober passenger and you knowingly ride in a car driven by someone drunk, you’re immediately at risk of serious criminal penalties. If the driver gets caught driving in that state, an aware passenger can face consequences like time in prison with hard labor or a fine.


Avoid driving in historic centers in Italy

If you’re planning a scenic road trip through Italy, you might get surprised by a strictly enforced traffic rule called the Zona a Traffico Limitato, or ZTL. It translates to “Restricted Traffic Zones,” and ZTL laws make it illegal for non-residents to drive through certain historic areas. Cameras will automatically fine any unauthorized vehicle crossing the boundary. You’ll mostly find these restricted zones in the historic town centers of Italy’s famous cities and villages.

These automated systems are placed at many ZTLs; they likely capture license plates, compare them with a database of permitted residents, and send out tickets to violators. Basically, no traffic officer ever has to be there, so don’t think about rat-running in this country.

The U.S. usually doesn’t ban driving in city centers purely for historical preservation or use automated resident-only zones. In the United States, you’re used to driving your car right through busy downtown areas, pulling up to national monuments, and parking on nearby streets. Some American cities might have pedestrian-only areas for shopping or use toll zones to cut down on peak-hour traffic, but an outright, camera-enforced ban on non-resident cars to protect old architecture just isn’t a normal part of driving here.


Don’t accelerate too quickly in Switzerland

Unless you're accelerating into something or someone, or happen to be in a state that gives out many speeding tickets, the US doesn't have traffic laws that penalize you for accelerating hard from a stop. American driving culture often accepts a loud engine roaring away from a stoplight; however, taking that same aggressive driving style to Switzerland will quickly put you on the wrong side of the law.

In Switzerland, drivers are legally bound by the Road Traffic Act and the Traffic Regulations Ordinance (TRegO) to make sure they don't cause any noise nuisance they could avoid. The Swiss Federal Office for the Environment points out that individual vehicles producing disruptive noise peaks stand out against regular road noise. This disturbance is easy to prevent if drivers just change their driving style and avoid making noise-boosting modifications to their cars.


Under Article 42 of the RTA and Article 33 of the TRegO, Swiss authorities have clearly listed examples of acoustic disruptions that are prohibited. Drivers can't rev their engines to high speeds while idling, and they can't deliberately drive in a low gear just to make the engine roar louder. Those satisfying popping sounds from a performance exhaust system are strictly forbidden on Swiss roads. Most importantly, the racket caused by accelerating too quickly when moving off from a stop is legally classified as an avoidable nuisance and is expressly banned.


Clean your car in Dubai

Dubai Municipality has a campaign to check on and get rid of neglected cars at its nine vehicle registration and testing centers. Now, you might think this is just about hauling away broken-down or abandoned junk; however, it’s actually about strictly enforcing the emirate’s incredibly high visual standards for every personal car. Dubai’s campaign targets these unkempt vehicles, which, when you get down to it, are just dirty cars.

The local government really values urban beauty, cleanliness, and architectural harmony, officially calling visibly unwashed cars a blot on the city’s beautiful image. If you own a car in this sparkling metropolis, you have a legally enforced civic duty to keep its exterior thoroughly washed, polished, and always looking good. To keep things pristine, authorities actively patrol public parking lots, streets, and municipal centers, giving out physical poster alerts and SMS warnings to owners of cars covered in sand, dust, or dirt.

If you don't fix the situation within the warning timeframe, you'll face serious penalties: a fine and possibly having your car impounded at a towing yard. The government uses the specific phrase "clean vehicle sustainable city" to constantly remind residents that car cleanliness is directly connected to the community's bigger environmental goals.


You need high-visibility gear in France

When you're driving through France's pretty landscapes, you might be surprised to find out that its traffic laws require emergency equipment that goes well beyond what you'd expect in North America. The French Highway Code requires all drivers to keep a high-visibility safety vest (the familiar yellow vest) and an advance warning triangle inside their car at all times.

The law is really specific about what these items should be like and how to use them. For example, the safety garment has to be fluorescent, it needs a European Compliance (EC) marking, and you've got to keep it somewhere easy to grab inside the car. The U.S., by contrast, is still catching up with laws that require other drivers to treat these roadside emergencies with caution.


This accessibility is a really important detail because if you have to make an emergency stop, French law says you absolutely must put the vest on before you even get out of your broken-down car onto the road or its surroundings. Once you’ve safely exited the vehicle wearing your reflective gear, you’re required to place the hazard warning triangle, which needs an E 27 R marking, at least 30 meters away from your car to warn approaching drivers. You should also turn on your vehicle’s hazard warning lights at the same time.




Metadata company Gracenote is the latest to sue OpenAI for copyright infringement


AI companies have been spending a lot of time in court arguing copyright cases over the past year and the latest plaintiff is Gracenote, the metadata company owned by Nielsen. Axios reports that Gracenote is suing OpenAI for the unauthorized and unpaid use of both its metadata and its framework for connecting that information.

Gracenote specializes in entertainment metadata, creating descriptions and identifiers for content that clients such as TV providers use to help their own customers with discovery. Most of the lawsuits against AI businesses have focused on the content used to train LLMs, but the Gracenote case brings an extra layer with the alleged infringement of the structure or sequence of a dataset in addition to the actual data.

“Defendants could have paid Gracenote to license its valuable Gracenote Data. Or they could have sought to train and ground their models only on information in the public domain. They did neither. Defendants instead improperly copied and used Gracenote Data to create their own commercially valuable AI products, all without paying a dime,” the complaint states. The company claims that its previous attempts to work with OpenAI for a licensing agreement were rebuffed or ignored. Gracenote has recently inked deals to back AI ventures from other companies, including Samsung and Google.


NVIDIA is reportedly building an enterprise AI agent platform


Sources tell Wired that Nvidia has been pitching ‘NemoClaw’ to Salesforce, Cisco, Google, Adobe, and CrowdStrike ahead of Jensen Huang’s keynote on Monday.

NVIDIA has spent the past several years becoming the indispensable hardware backbone of the AI industry. According to a new report, it may now be trying to become the software backbone too.

The chipmaker is reportedly developing an open-source platform for enterprise AI agents, internally known as NemoClaw. Wired, which broke the story citing anonymous sources familiar with the plans, says Nvidia has begun pitching the product to major enterprise software companies, among them Salesforce, Cisco, Google, Adobe, and CrowdStrike, ahead of a potential launch.

NVIDIA has not confirmed the platform exists, no official partnerships have been announced, and the companies named in the report have not publicly commented.


According to those sources, NemoClaw is designed to enable companies to deploy AI agents that carry out tasks on behalf of their employees, processing data, managing workflows, and executing multi-step instructions with limited human oversight. The platform is also reported to include built-in security and privacy tooling, a deliberate response to the wave of incidents that have undermined confidence in consumer-facing agent tools.

When OpenClaw, the open-source local agent framework that went viral in early 2026 before its creator, Peter Steinberger, was hired by OpenAI, was found to have an unsecured database that let anyone impersonate any agent on the platform, several large technology companies, including Meta, moved to ban it from corporate machines entirely. NemoClaw, by all accounts, is being positioned as the enterprise-safe answer to that chaos.


One of the more striking details in the Wired report is that NemoClaw is expected to be hardware-agnostic, usable by companies regardless of whether their infrastructure runs on Nvidia chips. That would be a meaningful strategic shift. NVIDIA’s dominance in AI has historically rested partly on CUDA, its proprietary software layer that has kept developers tethered to NVIDIA’s GPU ecosystem.

An open-source, hardware-neutral agent platform inverts that logic: give away the software layer freely, build the ecosystem, and trust that accelerating enterprise AI workloads will drive GPU demand anyway. It is the same playbook Meta used with Llama, and it worked.

The name itself signals the lineage. ‘Nemo’ connects the platform to NVIDIA’s existing NeMo framework, the foundation for its AI agent development tools, and to the Nemotron family of open models the company has been releasing.

‘Claw’ is a more pointed reference: it situates NemoClaw squarely within the broader ‘claw’ ecosystem of locally-running open-source AI agents that captured the imagination of the technology community this year, and signals that Nvidia sees that trend as a template worth building on, not dismissing.


Because the project is expected to be open source, the reported partnership model would likely offer early access to contributors rather than paid licenses. Sources told Wired that potential partners could gain free early access in exchange for contributing to the project's development: code, resources, or integration work. Whether any of the five named companies have agreed to those terms is not yet known.

The timing of the leak is hard to read as accidental. NVIDIA’s annual GTC developer conference opens in San Jose on Monday, 16 March, with Jensen Huang delivering the keynote from SAP Center at 11 am PT. The conference, which draws more than 30,000 attendees from over 190 countries, is NVIDIA’s primary venue for major platform announcements, and Huang has already telegraphed that agentic AI will be central to this year’s show.

In NVIDIA’s official GTC press release, the keynote is described as covering “open models, agentic systems and physical AI,” setting the direction for the year ahead. A NemoClaw announcement would fit that framing precisely.

The competitive context is equally pointed. OpenAI launched its own agent orchestration product, Frontier, earlier this year. Microsoft’s Copilot stack and Google’s Vertex AI Agent Builder are both targeting the same enterprise deployment problem.


What Nvidia could bring that those players cannot is a combination of hardware credibility (it is the company whose chips power most of the AI industry) and an open-source neutrality that positions it as a platform any vendor can build on, rather than a competitor trying to lock customers into its own model stack.

Whether NemoClaw becomes the standard, a niche framework, or an announcement that fades quietly into GitHub history depends entirely on execution details that remain unknown: whether it genuinely supports multiple model backends or quietly favours Nvidia-optimised ones, how its agent orchestration compares to what already exists, and whether enterprise IT departments find it meaningfully safer than the consumer tools they have already banned.

Those questions will start being answered on Monday morning, assuming Nvidia confirms the platform exists at all.


Epic is increasing the price of Fortnite’s V-Bucks currency


The real-world price of impulse-buying Fortnite skins is going up, Epic has announced. Not because skins themselves are getting more expensive on paper, but because V-Bucks, Fortnite's digital currency, is. The same prices you paid for bundles of V-Bucks in February will now effectively earn you fewer bucks starting on March 19, along with several other Fortnite-related pricing changes.

Epic will still offer bundles of V-Bucks starting at $8.99 and running all the way to $89.99, but with a new "conversion rate." The new bundle prices break down as follows:

  • $8.99 will get you 800 V-Bucks, down from 1,000 V-Bucks

  • $22.99 will get you 2,400 V-Bucks, down from 2,800 V-Bucks

  • $36.99 will get you 4,500 V-Bucks, down from 5,000 V-Bucks

  • $89.99 will get you 12,500 V-Bucks, down from 13,500 V-Bucks
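Expressed as an effective price per 1,000 V-Bucks, the bundle changes above work out as follows (a quick back-of-the-envelope calculation from those figures, ignoring the Epic Rewards rebate discussed below):

```python
# (price_usd, old_vbucks, new_vbucks) for the four bundles listed above
bundles = [
    (8.99, 1_000, 800),
    (22.99, 2_800, 2_400),
    (36.99, 5_000, 4_500),
    (89.99, 13_500, 12_500),
]

for price, old, new in bundles:
    old_rate = price / old * 1_000   # dollars per 1,000 V-Bucks before March 19
    new_rate = price / new * 1_000   # dollars per 1,000 V-Bucks after
    print(f"${price:>5.2f} bundle: ${old_rate:.2f} -> ${new_rate:.2f} per 1,000 V-Bucks "
          f"({(new_rate / old_rate - 1) * 100:.0f}% more expensive)")
```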

On top of those changes, the cost of Epic’s “Exact Amount Pack,” which lets you buy the exact amount of V-Bucks necessary to complete a specific purchase, is changing from around $0.50 for 50 V-Bucks to $0.99 for 50 V-Bucks.

These new prices for V-Bucks are US-specific and will vary in other regions. They’re also not entirely representative of the value Epic is offering with each purchase. As part of the company’s Epic Rewards program, you get 20 percent back on purchases made in Fortnite, Fall Guys and Rocket League when you use the Epic Games Store or Epic’s payment system on Android, iOS, PC or the web. That means you can receive anywhere from $1.79 (for 800 V-Bucks) to $17.99 (for 12,500 V-Bucks) to spend in Fortnite or the Epic Games Store when you use the company’s payment system.


Changes to the value of V-Bucks are also impacting Fortnite‘s various passes. The standard Battle Pass will now cost 800 V-Bucks and award 800 V-Bucks, down from its previous price of 1,000 V-Bucks. Meanwhile, the price of the OG Pass (for Fortnite‘s throwback game mode) is lowering from 1,000 V-Bucks to 800 V-Bucks, and both the Music and Lego Passes are going from costing 1,400 V-Bucks to 1,200 V-Bucks. For any subscribers to Fortnite Crew, Fortnite‘s monthly subscription service, your monthly stipend of the digital currency is also shrinking from 1,000 V-Bucks to 800 V-Bucks.

Epic claims that it's making all of these changes because "the cost of running Fortnite has gone up a lot" and raising prices helps pay the bills, but the company is also in a much better position to make money on every transaction that happens in the game. In securing largely favorable outcomes in its lawsuits against Apple and Google, Epic now has a way to point users to its payment system on iOS and Android (all the better to avoid app store fees), and it's won major concessions that seem poised to reshape how app store economies work.
