Those White Traffic Lights You May See On Some Roads Aren’t Meant For You

If you drive through intersections with traffic lights, you’re accustomed to looking for a few types of signals — usually green, yellow, and red lights or turn arrows. These long-established signals and their colors are second nature for motorists, so a traffic light color or pattern out of the ordinary can be very confusing.

In recent years, drivers in some cities may have noticed a separate signal housing on certain traffic lights. This housing displays a white bar that operates independently of the existing lights. Is this some sort of new traffic rule for cars to follow? Nope. Though visible to everyone, the white signals are not for ordinary motorists or pedestrians; instead, they’re specifically for buses.


The signals are part of an increasingly common, automated Transit Signal Priority system that gives buses the right-of-way at busy intersections. The goal is to keep buses from being held up in normal traffic, speeding up transit times for riders while also cutting emissions by reducing the time buses spend idling. When properly implemented, the signals are just one of the tools that can help bridge the gap between road-going buses and other, more streamlined forms of mass transit.


Skipping the line

Traditionally, one of the biggest issues with buses as a form of urban transit is that they generally use the same streets as normal cars. While a train or subway operates on its own tracks, buses can be just as susceptible to traffic gridlock and crowded streets as any other vehicle. All of this comes on top of other rules that can impede their travel times, of course, like mandated stops at every railroad crossing.

In some urban areas, buses can use their own designated lanes in traffic, which helps speed things up. When they come to intersections, though, they usually have to stop and follow the same signals and traffic flow as every other vehicle. That’s where the white bus signal lights come in.

These separate bus signals, sometimes referred to as “queue jump” lights, have sensors that detect approaching buses and send a signal to the traffic light. When this happens, the white bar in the light moves from horizontal to vertical, signaling to the driver that they can cross the intersection, all while the other lights remain red.


Eliminating confusion

While the bus priority signal system is relatively simple, you can’t just add these signals to any busy intersection and immediately speed up transit times. To work properly and avoid entanglements with normal traffic, the signals need to be used in areas with designated bus lanes. Otherwise, even if given the right-of-way, buses could still be held up by stopped traffic. Some cities have implemented alternative forms of signal priority systems that detect the approach of buses and emergency services, but instead show a green light without dedicated signals or designated bus lanes.

When the white bus signals first debuted, motorists weren’t just confused by the new, unfamiliar white lights themselves. News reports of the time relayed how some drivers would complain after observing city buses seemingly blowing through red lights — only to be told by bus drivers that they were actually following the new, bus-only signal system.


While this signal system might confuse some unfamiliar motorists and observers, the results so far have been positive. The signals might take some getting used to, but they’re much more benign than the controversial AI technology New York City adopted in 2025, which lets city buses use cameras to autonomously issue traffic tickets to cars parked or driving in bus lanes.




Scientists Just Doubled Our Catalog of Black Hole and Neutron Star Collisions


Colliding black holes were detected through spacetime ripples for the first time in 2015 by the Laser Interferometer Gravitational-Wave Observatory (LIGO), notes Space.com:


Since then, LIGO and its partner gravitational wave detectors Virgo in Italy and KAGRA (Kamioka Gravitational Wave Detector) in Japan have detected a multitude of gravitational waves from colliding black holes, merging neutron stars, and even the odd “mixed merger” between a black hole and a neutron star… During the first three observing runs of LIGO, Virgo and KAGRA, scientists had only “heard” 90 potential gravitational wave sources.

But now they’ve published new data from the LIGO-Virgo-KAGRA (LVK) Collaboration that includes 128 more gravitational wave sources — some incredibly distant:

[Gravitational-Wave Transient Catalog-4.0, or GWTC-4] was collected during the fourth observational run of these gravitational wave detectors, which was conducted between May 2023 and Jan. 2024… Excitingly, GWTC-4 could technically have been even larger, as around 170 other gravitational wave detections made by LIGO, Virgo and KAGRA haven’t yet made their way into the catalog.

One aspect of GWTC-4 that really stands out is the variety of events that created these signals. Within this catalog are gravitational waves from mergers between the heaviest black hole binaries yet, each about 130 times as massive as the sun, lopsided mergers between black holes with seriously mismatched masses, and black holes that are spinning at incredible speeds of around 40% the speed of light. In these cases, scientists think the extreme characteristics of the black holes involved in these mergers are the result of prior collisions, providing evidence of merger chains that explain how some black holes grow to masses billions of times that of the sun… GWTC-4 also includes two new mixed mergers involving black holes and neutron stars.


[LVK member Daniel Williams, of the University of Glasgow in the U.K., said in their statement] “We are really pushing the edges, and are seeing things that are more massive, spinning faster, and are more astrophysically interesting and unusual.” The catalog also demonstrates just how sensitive the LVK detectors have become. Some of the neutron star mergers occurred up to 1 billion light-years away, while some of the black hole mergers occurred up to 10 billion light-years away.
Einstein’s theory of general relativity can be tested with these detections, and “So far, the theory is passing all our tests,” says LVK member Aaron Zimmerman, of the University of Texas at Austin. “But we’re also learning that we have to make even more accurate predictions to keep up with all the data the universe is giving us.” And LVK member Rachel Gray, a lecturer at the University of Glasgow, says “every merging black hole gives us a measurement of the Hubble constant, and by combining all of the gravitational wave sources together, we can vastly improve how accurate this measurement is.”

In short, says LVK member Lucy Thomas of the California Institute of Technology (Caltech), “Each new gravitational-wave detection allows us to unlock another piece of the universe’s puzzle in ways we couldn’t just a decade ago.”


The Moka Pot Is the Best Way to Brew Coffee (2026)


Coffee is the original office biohack and the nation’s most popular productivity tool. As we lose sleep to the changeover to daylight saving time, the caffeine-addicted WIRED Reviews team is writing about our favorite coffee brewing routines and the devices that keep us alert and maybe even happy in the morning. Today, operations manager Scott Gilbertson expounds on the perfect simplicity of the moka pot. In the days ahead, we’ll add stories about other WIRED writers’ favorite brewing methods.

Years of travel and a love of repair have given me a special appreciation for simple devices. A pen and paper is still the simplest way to write. A cast-iron pan is the simplest way to cook. And a moka pot is the simplest way to brew coffee.

What I love about the moka pot isn’t just the results I get from it. I do love the flavor, especially when paired with a nice dark, chocolatey, smoky roast, but the moka pot is about more than flavor. It’s also about ingenious simplicity and a design that has lasted nearly a century.

Simple Beginning

Photograph: Scott Gilbertson

The moka pot’s exact origins depend on who you ask, but it was first manufactured and popularized in 1933 by an aluminum manufacturer named Alfonso Bialetti; his son Renato later turned it into a mass-market icon. Today, Bialetti Industries still makes the Moka Express. The iconic logo image of a short, squat, heavily mustached man is indeed based on Bialetti himself.


If you want some idea of Renato Bialetti’s commitment to the device that made him famous, consider that when he died in 2016, his ashes were interred in a large moka pot. He isn’t the only one who revered the design. The moka pot is featured in museums around the world, including the Museum of Modern Art. Its iconic octagonal shape makes it one of the most recognized coffee brewers in the world.

The moka pot is a pressure-driven stovetop (or campfire, though this requires close attention) coffee brewer that works something like a percolator. The Moka Express consists of four parts, split into two chambers. The bottom is the water reservoir, which heats up on the stove. Into this you put the brewing basket, which holds your grounds. The top consists of a long tube in the center of a holding chamber. On the bottom of the top piece, there’s a metal filter ringed by a rubber (or, on some models, silicone) gasket. The top and bottom screw together.

As the water heats, pressure forces it upward through the basket of grounds and eventually out of the tube. The extraction sits above the grounds, and the metal filter keeps everything in place. It’s ingeniously simple.


Karpathy’s March of Nines shows why 90% AI reliability isn’t even close to enough


“When you get a demo and something works 90% of the time, that’s just the first nine.” — Andrej Karpathy

The “March of Nines” frames a common production reality: You can reach the first 90% reliability with a strong demo, and each additional nine often requires comparable engineering effort. For enterprise teams, the distance between “usually works” and “operates like dependable software” determines adoption.

The compounding math behind the March of Nines

“Every single nine is the same amount of work.” — Andrej Karpathy

Agentic workflows compound failure. A typical enterprise flow might include: intent parsing, context retrieval, planning, one or more tool calls, validation, formatting, and audit logging. If a workflow has n steps and each step succeeds with probability p, end-to-end success is approximately p^n.


In a 10-step workflow, per-step failures compound into the end-to-end failure rate. Correlated outages (auth, rate limits, connectors) will dominate unless you harden shared dependencies.

Per-step success (p) | 10-step success (p^10) | Workflow failure rate | At 10 workflows/day | What this means in practice
90.00% | 34.87% | 65.13% | ~6.5 interruptions/day | Prototype territory. Most workflows get interrupted.
99.00% | 90.44% | 9.56% | ~1 every 1.0 days | Fine for a demo, but interruptions are still frequent in real use.
99.90% | 99.00% | 1.00% | ~1 every 10.0 days | Still feels unreliable because misses remain common.
99.99% | 99.90% | 0.10% | ~1 every 3.3 months | This is where it starts to feel like dependable enterprise-grade software.
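The compounding arithmetic can be reproduced in a few lines of Python (a standalone sketch; the per-step probabilities and the 10-workflows/day assumption match the figures above):

```python
# End-to-end success of an n-step workflow: every step must succeed,
# so with independent per-step probability p it is approximately p ** n.

def workflow_success(p: float, n: int = 10) -> float:
    return p ** n

for p in (0.90, 0.99, 0.999, 0.9999):
    s = workflow_success(p)
    # expected interruptions at 10 workflow runs per day
    print(f"p={p:.4%}  end-to-end={s:.2%}  interruptions/day={(1 - s) * 10:.2f}")
```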

Define reliability as measurable SLOs

“It makes a lot more sense to spend a bit more time to be more concrete in your prompts.” — Andrej Karpathy

Teams achieve higher nines by turning reliability into measurable objectives, then investing in controls that reduce variance. Start with a small set of SLIs that describe both model behavior and the surrounding system:

  • Workflow completion rate (success or explicit escalation).

  • Tool-call success rate within timeouts, with strict schema validation on inputs and outputs.

  • Schema-valid output rate for every structured response (JSON/arguments).

  • Policy compliance rate (PII, secrets, and security constraints).

  • p95 end-to-end latency and cost per workflow.

  • Fallback rate (safer model, cached data, or human review).

Set SLO targets per workflow tier (low/medium/high impact) and manage an error budget so experiments stay controlled.
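As a concrete sketch of error-budget accounting (the 99% SLO and run counts here are hypothetical, not from the article): the budget for a period is simply the failure rate the SLO tolerates times the expected volume, and experiments pause once it is spent.

```python
def error_budget(slo: float, volume: int) -> int:
    """Failures allowed in a period: (1 - SLO) * expected run volume."""
    return round((1 - slo) * volume)

def budget_remaining(slo: float, volume: int, failures: int) -> int:
    return error_budget(slo, volume) - failures

# A medium-impact tier with a 99% completion SLO and 1,000 runs/month
# may burn at most 10 failed runs before experiments are frozen.
print(error_budget(0.99, 1000))         # 10
print(budget_remaining(0.99, 1000, 7))  # 3
```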

Nine levers that reliably add nines

1) Constrain autonomy with an explicit workflow graph

Reliability rises when the system has bounded states and deterministic handling for retries, timeouts, and terminal outcomes.

  • Model calls sit inside a state machine or a DAG, where each node defines allowed tools, max attempts, and a success predicate.

  • Persist state with idempotent keys so retries are safe and debuggable.

2) Enforce contracts at every boundary

Most production failures start as interface drift: malformed JSON, missing fields, wrong units, or invented identifiers.

  • Use JSON Schema/protobuf for every structured output and validate server-side before any tool executes.

  • Use enums, canonical IDs, and normalize time (ISO-8601 + timezone) and units (SI).

3) Layer validators: syntax, semantics, business rules

Schema validation catches formatting. Semantic and business-rule checks prevent plausible answers that break systems.

  • Semantic checks: referential integrity, numeric bounds, permission checks, and deterministic joins by ID when available.

  • Business rules: approvals for write actions, data residency constraints, and customer-tier constraints.
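A minimal sketch of the three layers run in order (syntax, then semantics, then business rules); the field names, bounds, and approval flag are invented for illustration:

```python
def check_syntax(out: dict) -> None:
    # schema layer: required fields with the right types
    if not isinstance(out.get("order_id"), str):
        raise ValueError("order_id must be a string")
    if not isinstance(out.get("quantity"), int):
        raise ValueError("quantity must be an integer")

def check_semantics(out: dict, known_ids: set) -> None:
    # semantic layer: referential integrity and numeric bounds
    if out["order_id"] not in known_ids:
        raise ValueError("unknown order_id")
    if not 1 <= out["quantity"] <= 1000:
        raise ValueError("quantity out of bounds")

def check_business(out: dict, approved: bool) -> None:
    # business layer: write actions require an explicit approval
    if out.get("action") == "write" and not approved:
        raise ValueError("write action requires approval")

def validate(out: dict, known_ids: set, approved: bool) -> dict:
    # run the layers in order; the tool call executes only if all pass
    check_syntax(out)
    check_semantics(out, known_ids)
    check_business(out, approved)
    return out

validate({"order_id": "A-17", "quantity": 3, "action": "read"},
         known_ids={"A-17"}, approved=False)
```

The ordering matters: a schema failure is cheap to detect and should short-circuit before the more expensive semantic and business checks run.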

4) Route by risk using uncertainty signals

High-impact actions deserve higher assurance. Risk-based routing turns uncertainty into a product feature.

  • Use confidence signals (classifiers, consistency checks, or a second-model verifier) to decide routing.

  • Gate risky steps behind stronger models, additional verification, or human approval.

5) Engineer tool calls like distributed systems

Connectors and dependencies often dominate failure rates in agentic systems.

  • Apply per-tool timeouts, backoff with jitter, circuit breakers, and concurrency limits.

  • Version tool schemas and validate tool responses to prevent silent breakage when APIs change.
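Both resilience patterns fit in a few lines; this sketch (the thresholds and base delays are illustrative, not prescribed by the article) shows full-jitter exponential backoff plus a consecutive-failure circuit breaker:

```python
import random

def jittered_backoff(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Full jitter: a random delay in [0, min(cap, base * 2**attempt)]."""
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))

class CircuitBreaker:
    """Opens after `threshold` consecutive failures; callers then skip the tool."""

    def __init__(self, threshold: int = 5):
        self.threshold = threshold
        self.failures = 0

    @property
    def open(self) -> bool:
        return self.failures >= self.threshold

    def record(self, success: bool) -> None:
        # any success closes the breaker; failures accumulate
        self.failures = 0 if success else self.failures + 1

breaker = CircuitBreaker(threshold=3)
for _ in range(3):
    breaker.record(success=False)
print(breaker.open)  # True
```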

6) Make retrieval predictable and observable

Retrieval quality determines how grounded your application will be. Treat it like a versioned data product with coverage metrics.

  • Track empty-retrieval rate, document freshness, and hit rate on labeled queries.

  • Ship index changes with canaries, so you know if something will fail before it fails.

  • Apply least-privilege access and redaction at the retrieval layer to reduce leakage risk.

7) Build a production evaluation pipeline

The later nines depend on finding rare failures quickly and preventing regressions.


8) Invest in observability and operational response

Once failures become rare, the speed of diagnosis and remediation becomes the limiting factor.

  • Emit traces/spans per step, store redacted prompts and tool I/O with strong access controls, and classify every failure into a taxonomy.

  • Use runbooks and “safe mode” toggles (disable risky tools, switch models, require human approval) for fast mitigation.

9) Ship an autonomy slider with deterministic fallbacks

Fallible systems need supervision, and production software needs a safe way to dial autonomy up over time. Treat autonomy as a knob, not a switch, and make the safe path the default.

  • Default to read-only or reversible actions, require explicit confirmation (or approval workflows) for writes and irreversible operations.

  • Build deterministic fallbacks: retrieval-only answers, cached responses, rules-based handlers, or escalation to human review when confidence is low.

  • Expose per-tenant safe modes: disable risky tools/connectors, force a stronger model, lower temperature, and tighten timeouts during incidents.

  • Design resumable handoffs: persist state, show the plan/diff, and let a reviewer approve and resume from the exact step with an idempotency key.

Implementation sketch: a bounded step wrapper

A small wrapper around each model/tool step converts unpredictability into policy-driven control: strict validation, bounded retries, timeouts, telemetry, and explicit fallbacks.

def run_step(name, attempt_fn, validate_fn, *, max_attempts=3, timeout_s=15):
    # trace all retries under one span
    span = start_span(name)
    mode = "default"
    for attempt in range(1, max_attempts + 1):
        try:
            # bound latency so one step can’t stall the workflow
            with deadline(timeout_s):
                out = attempt_fn(mode=mode)
            # gate: schema + semantic + business invariants
            validate_fn(out)
            # success path
            metric("step_success", name, attempt=attempt)
            return out
        except (TimeoutError, UpstreamError) as e:
            # transient: retry with jitter to avoid retry storms
            span.log({"attempt": attempt, "err": str(e)})
            sleep(jittered_backoff(attempt))
        except ValidationError as e:
            # bad output: retry in "safer" mode (lower temp / stricter prompt)
            span.log({"attempt": attempt, "err": str(e)})
            mode = "safer"
    # fallback: keep system safe when retries are exhausted
    metric("step_fallback", name)
    return EscalateToHuman(reason=f"{name} failed")

Why enterprises insist on the later nines

Reliability gaps translate into business risk. McKinsey’s 2025 global survey reports that 51% of organizations using AI experienced at least one negative consequence, and nearly one-third reported consequences tied to AI inaccuracy. These outcomes drive demand for stronger measurement, guardrails, and operational controls.


Closing checklist

  • Pick a top workflow, define its completion SLO, and instrument terminal status codes.

  • Add contracts + validators around every model output and tool input/output.

  • Treat connectors and retrieval as first-class reliability work (timeouts, circuit breakers, canaries).

  • Route high-impact actions through higher assurance paths (verification or approval).

  • Turn every incident into a regression test in your golden set.

The nines arrive through disciplined engineering: bounded workflows, strict interfaces, resilient dependencies, and fast operational learning loops.

Nikhil Mungel has been building distributed systems and AI teams at SaaS companies for more than 15 years.

Welcome to the VentureBeat community!

Our guest posting program is where technical experts share insights and provide neutral, non-vested deep dives on AI, data infrastructure, cybersecurity and other cutting-edge technologies shaping the future of enterprise.


Read more from our guest post program — and check out our guidelines if you’re interested in contributing an article of your own!


Restoring A Commodore PET 3032 In Rough Condition


The restored PET/CBM 3032. (Credit: Drygol, retrohax.net)

The Commodore CBM 3032 is a successor to the original Commodore PET 2001, yet due to a trademark conflict with Philips, these first European PETs were sold as ‘CBM’ instead. Hence the labeling on the CBM 3032 that [Drygol] had in for restoration, a machine produced somewhere between 1979 and the end of the model’s manufacturing a few years later. This former machine of the University of Szczecin in Poland had languished in a basement until a local demoscene group came across it and wanted to put it to use, after a restoration.

Although at first glance from the front it didn’t look too shabby, problems were apparent from a simple walkaround, including rusty and buckled paneling, showing that the time spent in storage had not done it any favors. Internally there were decades’ worth of dust, along with a dodgy potentiometer, cold solder joints, and some PCB-level bodges that may or may not have been there from the factory.

The main case was disassembled by drilling out the rivets to gain full access to every nook and cranny, allowing for a good cleaning and repainting prior to putting in fresh rivets. On the PCB side of things, a potentiometer and an LM340KC-12 linear regulator in a TO-3 package had to be replaced, after which the system managed to boot only once in every three attempts.

Fixing this came down to cleaning all contacts and IC sockets, as well as refurbishing the keyboard, with corrosion and the occasional broken trace causing a lot of grief. Ultimately the system was restored and ready to be put into demoscene service.


DeepRare outperforms doctors in a rare disease diagnosis study


DeepRare, an agentic AI system integrating 40 specialised tools, outperformed medical specialists in identifying rare conditions in a head-to-head study published in Nature.


For millions of people with rare diseases, the path to diagnosis is a labyrinth. Patients bounce between generalist GPs and specialists across years, sometimes decades, piecing together symptoms that fall outside textbook presentations.

Eighty per cent of rare diseases have a genetic origin, yet most go undiagnosed until too much biological damage has occurred. The bottleneck is not a lack of data; it’s finding the needle in the medical haystack.

A new study published in Nature this month suggests that artificial intelligence may accelerate that hunt. Researchers at Shanghai Jiao Tong University’s School of Artificial Intelligence and Xinhua Hospital developed DeepRare, an AI system designed to mimic how human doctors reason through diagnostic uncertainty.


In a head-to-head comparison with five experienced physicians, each with more than a decade of practice, the system achieved higher accuracy across the board.

The numbers are striking. DeepRare correctly identified the disease on its first suggestion 64.4 per cent of the time, compared to 54.6 per cent for the doctors. When given three suggestions instead of one, the AI system achieved diagnostic success in 79 per cent of cases versus 66 per cent for the human specialists.

Crucially, the physicians endorsed the AI’s reasoning 95.4 per cent of the time, suggesting the system not only reaches correct conclusions, but does so in ways that experienced clinicians find persuasive and medically sound.

What distinguishes DeepRare from earlier diagnostic AI is its architecture. Rather than applying a black-box classification model, the system integrates 40 specialised digital tools and follows an explicitly reasoned workflow.


It forms diagnostic hypotheses, tests them against patient evidence, searches global medical literature databases, analyses genetic variants, and revises its conclusions iteratively before ranking possibilities.

The process mirrors the cognitive steps a human diagnostician takes, but with access to the entirety of medical knowledge and computational speed humans cannot match.

The system has already moved beyond the laboratory. Since July 2025, DeepRare has been deployed on an online diagnostic platform, with more than 600 medical institutions worldwide registered to use it.

The research team plans to validate the system further using 20,000 real-world cases and to launch a global rare disease diagnostic alliance. Notably, the authors emphasise that the system is not intended to replace clinicians, but to augment diagnostic workflows, a safeguard that acknowledges both the technical limits of AI and the irreducible human element in medicine.


The implications for patients are profound. Approximately 300 million people worldwide are affected by rare diseases, and the average diagnostic odyssey stretches to five years or longer.

Each year of diagnostic delay is a year of uncertainty, wrong treatments, and accumulating organ damage. An AI system that can trim weeks or months from that timeline, and surface possibilities that might otherwise be overlooked, could reshape the early experience of living with a rare condition.


Building An Analogue Computer To Simulate Neurons


The rapidly improving speed and versatility of digital computers has mostly driven analogue computers out of use in modern systems, as has the relative difficulty of programming an analogue computer. There is a kind of art, though, in weaving together a series of op-amps to perform mathematical calculations; between this, a historical interest in the machines, and their rarity value, it’s no wonder that new analogue computers are being designed even now, such as [Markus Bindhammer]’s system.

The computer is built around a combined circuit board and patch panel, based on the designs included in three papers in an online library of analogue computer references. The housing around the patch panel took design cues from the Polish AKAT-1 analogue computer, including the two dial voltage indicators and an oscilloscope display, in this case an inexpensive DSO-138. The patch panel uses banana connectors and the jumper wires use stackable connectors, so several wires can be connected to the same socket.

The computer itself has a summing amplifier circuit, a multiplier circuit, an integrator, and square, triangle, and sine wave generators. This simple set of tools is enough to handle both simple and complex math; for example, [Markus] squared five volts with the multiplier, resulting in 2.5 volts (the multiplier divides the result by ten). A more advanced example is a leaky-integrator model of a neuron, which simulates a differential equation.
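In digital form, that leaky integrator corresponds to the differential equation dV/dt = (I - V)/tau; a short Euler-method sketch (the time constant and step size here are illustrative, not [Markus]’s component values):

```python
def leaky_integrator(i_in: float, tau: float = 0.05, dt: float = 0.001,
                     steps: int = 1000) -> float:
    """Euler integration of dV/dt = (i_in - v) / tau, starting from v = 0."""
    v = 0.0
    for _ in range(steps):
        v += dt * (i_in - v) / tau  # the "leak" pulls v toward the input
    return v

# With a constant 5 V input, the output settles at the input value,
# just as the op-amp version settles once charge in equals leakage out.
print(round(leaky_integrator(5.0), 3))  # 5.0
```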


We’ve covered a few analogue computers before, as well as a neuron-simulating circuit similar to [Markus]’s demonstration.


The OLED Burn-In Test: Two Years Later


After 6,500 hours of heavy productivity use, we revisit our intentionally abused 4K QD-OLED monitor to see how burn-in has progressed under one of the worst-case scenarios for OLED panels.


Scientists create a DNA hard drive that could store centuries of data in microscopic volumes without traditional HDD constraints



  • University of Missouri researchers claim DNA hard drives can store, erase, and rewrite repeatedly
  • Frameshift encoding converts binary data into DNA sequences for molecular storage
  • Nanopore sensors read DNA sequences by detecting subtle electrical signal changes

The University of Missouri has announced progress on what it calls a “DNA hard drive,” claiming it can store, erase, and rewrite information repeatedly.

Unlike conventional HDDs or cloud storage, which rely on magnetic or solid-state media, this approach leverages the molecular stability of DNA.
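The study’s frameshift encoding is more sophisticated, but the basic step of mapping binary data onto the four DNA bases can be illustrated with a naive two-bits-per-nucleotide scheme (to be clear, this is not the Missouri team’s actual method):

```python
B2N = {"00": "A", "01": "C", "10": "G", "11": "T"}
N2B = {v: k for k, v in B2N.items()}

def encode(data: bytes) -> str:
    # two bits per base: each byte becomes four nucleotides
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(B2N[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(seq: str) -> bytes:
    bits = "".join(N2B[base] for base in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

seq = encode(b"hi")
print(seq)          # CGGACGGC
print(decode(seq))  # b'hi'
```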



How Usable Is Windows 98 In 2026?


With the RAM and storage crisis hitting personal computing very hard – along with new software increasingly suffering the effects of metastasizing ‘AI’ – more people than ever are pining for the ‘good old days’. For example, using that early 2000s desktop PC with Windows 98 SE might now seem to be a viable alternative in 2026, because it couldn’t possibly make things worse. Or could it? As a reality check, [SteelsOfLiquid] over on YouTube gave this setup a whirl.

The computer of choice is a very common Dell Dimension 2100, featuring a zippy 1.1 GHz Intel Celeron, 256 MB of DDR1, and a spacious 38 GB HDD. Graphics are provided by the iGPU in the Intel i810 chipset, all in a compact, 6.9 kg package. As an early Windows XP PC, this gives Windows 98 SE probably a pretty solid shot at keeping up with the times. At least the early 2000s, natch.

Of course, there is a lot of period-correct software you can install, such as Adobe Photoshop 5 and MS Office 97 (featuring everyone’s beloved Clippy), but a lot of modern software also runs, with the Retro Systems Revival blog documenting many applications that still work on Win98SE in some manner, including Audacity 2.0. This makes it entirely suitable for basic productivity tasks.

YouTube in Netscape 4.5 on Windows 98. (Credit: Throaty Mumbo, YouTube)

Gaming on Win98 is naturally limited to games from around that early 2000s time period or before, but the gaming library even for just Win98 and MS-DOS is pretty massive, so as long as you’re fine not playing the latest and greatest games, this is also pretty easy.

Where things get dicey is, of course, with using the modern Internet, as you need a modern browser and support for the latest TLS encryption features to keep many websites from throwing a hissy fit. Fortunately, using Frog Find and similar proxies that target retro computing helps here.


Previously we covered ways that you can use Discord even on Windows 95 and Windows NT 3.1, others have ported .NET applications to Windows 9x, got Win98 up and running on a 2020-era system, and you can totally use modern YouTube in even the Netscape 2.x browser using an NPAPI plugin.

Although there are many arguments to be made for using at least a Windows version with an NT kernel over the 9x one, it’s hard to deny that software Back Then™ was less complex, less resource-hungry and still got all the things done. Maybe it is worth another look, before the AI Crisis forces us all back on Windows XP systems like the one featured in this video.



Anthropic says it will sue Pentagon over supply chain risk label



The clash centers on two red lines Anthropic refused to drop during negotiations with the Department of Defense: using Claude for mass domestic surveillance of Americans and for fully autonomous weapons. Anthropic says those carveouts are narrow, reasonable, and have not affected any government mission to date. The Pentagon disagreed.

Continue Reading

Trending

Copyright © 2025