
Tech

Anthropic joins forces with SpaceX for Colossus capacity

Anthropic has also ‘expressed interest’ in co-developing data centres in space, SpaceX said.

Anthropic has agreed to purchase compute capacity from SpaceX’s AI supercomputer Colossus 1, the latest in a string of multibillion-dollar industrial tie-ups between AI companies and infrastructure providers. Terms of the deal between the two companies remain undisclosed.

This deal, along with Anthropic’s other recent compute agreements with Amazon and Google, is allowing the company to increase usage limits for Claude Code and the Claude API.

The company announced yesterday (6 May) that it is doubling Claude Code’s five-hour rate limits for Pro, Max, Team and seat-based Enterprise plans; removing the peak hours limit reduction on Claude Code for Pro and Max users; and raising API rate limits “considerably” for Claude Opus models.

Colossus 1 features more than 220,000 Nvidia GPUs, including dense deployments of H100 and H200 chips, and the newer GB200 accelerators.

The SpaceX data centre in Tennessee, US, considered to be one of the world’s largest, came under regulatory scrutiny earlier this year after it was ruled that xAI – now under SpaceX – acted illegally by using methane gas turbines to power Colossus 1 and 2.

Elon Musk’s SpaceX purchased his other company xAI in February to further his goals of creating space-based data centres. The combined entity is called SpaceXAI.

Musk, at the time, said that the merger would allow data centres to be transported to space in order to harness near-constant solar energy from the Sun. SpaceX’s Starship rockets are expected to begin delivering its next-generation satellites into orbit this year, he said.

As part of its agreement with SpaceX, Anthropic has also “expressed interest” in partnering to develop multiple gigawatts of orbital AI compute capacity, the two jointly said. Both companies are preparing to file historic initial public offerings this year, with Anthropic hitting a $1.2trn implied valuation and SpaceX targeting $1.75trn.

“Grateful to be partnering with SpaceX here. We are going to need to move a lot of atoms in order to keep up with AI demand, and there’s nobody better at quickly moving atoms (on or off planet Earth),” said Anthropic co-founder and chief compute officer Tom Brown in a post on X.

Musk, meanwhile, said people at Anthropic were “highly competent and cared a great deal about doing the right thing”. These comments from Musk come just months after he called Claude “misanthropic and evil” for allegedly “hat[ing]” people of certain races, as well as “heterosexuals and men”.

Earlier this week, Anthropic partnered with investment giants Blackstone, Hellman & Friedman, and Goldman Sachs to form a new AI services company, further cementing itself into the enterprise market as AI inevitably becomes a core productivity tool.

In April, SpaceX said that it had an agreement in place to acquire AI start-up Cursor for $60bn. Cursor plans to leverage the Colossus infrastructure to scale up the intelligence of its models.


Tech

This State Has A Grace Period For Expired Tags (But It’s Not Long)

According to CarFax, an estimated 17 million vehicles were on the road with expired tags at the start of 2025, so driving on old tags is, statistically speaking, fairly common. Luckily, some states give drivers a nice little grace period to get their tags taken care of before they start slapping them with penalties. Texas is one of those states.

Texas state law provides a grace period of five working days after expiration during which it’s still technically legal to drive the car. Because Saturdays, Sundays, and federal holidays don’t count, a driver might be able to stretch that time to seven or eight calendar days. After that, though, the buffer disappears, and law enforcement can start issuing citations right away.
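To see how that stretch works in practice, here is a minimal Python sketch that counts five working days past an expiration date, skipping weekends only (federal holidays, which also pause the clock, are omitted to keep the example short):

```python
from datetime import date, timedelta

def grace_period_end(expiration: date, working_days: int = 5) -> date:
    """Last legal driving day under a working-days grace period.

    Counts only Monday-Friday; federal holidays also pause the clock in the
    rule described above, but they are left out of this illustration.
    """
    day = expiration
    counted = 0
    while counted < working_days:
        day += timedelta(days=1)
        if day.weekday() < 5:  # Monday=0 .. Friday=4
            counted += 1
    return day

# A registration that expires on a Thursday:
print(grace_period_end(date(2025, 1, 30)))  # 2025-02-06, seven calendar days later
```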

After the grace period ends, expired registration can cost up to $200, and potentially even more in some counties. Drivers can also get hit with an additional 20% penalty on their registration renewal cost if they received a ticket before renewing. Texas isn’t the only state with some wiggle room here; Florida’s rules on expired registrations, for example, mean that drivers can only be ticketed for an expired registration at the end of their birth month.

How to avoid a penalty for driving with expired tags in Texas

Just because you got a citation doesn’t mean you have to be stuck with it. In Texas, drivers have certain avenues to reduce their penalties and clean up their driving record. For instance, judges can dismiss a driver’s charges if they renew within 20 working days of being cited, as long as it’s before their first court appearance. If this happens, the only thing a driver will be on the hook for is a small administrative fee of $20.

Charges can also be dismissed if a county tax office was closed for an extended period and the registration has not expired for more than 30 working days. This can sometimes be considered a valid legal defense, but it does not guarantee that your charges will be dismissed. A judge will still have to make that call.

If you’re looking to avoid this hassle, the easiest way is to renew your registration on time. As long as you don’t have a citation, Texas will let you renew online for up to three months before and a full year after expiration. Once you renew, you’ll get a temporary receipt that lets you drive for up to 31 days while you wait for the new sticker to arrive.




Tech

Engineer Builds Real-Life Version of Rocky, the Conversational Alien Robot From Project Hail Mary

Project Hail Mary introduced fans to an unforgettable alien named Rocky. Many who finished the book wanted more time with the character and his quirky way of speaking. One maker decided to satisfy that craving by constructing a physical robot that captures the essence of Rocky in every joint and word.



The engineer behind Leviathan Engineering worked tirelessly on this project for months, bringing Rocky to life. He began with digital models of the character purchased from 3D Totems, a store that does an excellent job of creating detailed, accurate 3D models. Next, software programs such as Fusion 360 and Tinkercad were used to ensure that the parts were not only printable, but also strong enough to endure rough handling.

Months later, the printed components came together to create the body of a four-legged monster with arms that appear to lunge out at you in all the right ways. Ten metal-geared servos provide the driving force behind the movements, cleverly placed to let the robot’s expressiveness show. The shoulders each get an additional servo for arm swings, while the knees get one each for low crouches. Movement, gestures, and body language pull the alien’s exuberant personality straight out of the novel. He can even offer you a full fist bump, or make a wild arm gesture, just like in the book.

The robot is powered by a Raspberry Pi 5 connected to a PCA9685 HAT, the circuit board that drives the servos and controls all of their motions. Power comes from an external supply because the motors draw plenty of current during lively movements. Software brings everything to life. Speech recognition is handled locally with Vosk, so you can tell the robot what to do without internet lag. Piper provides the robot’s voice, with that really unique, staccato talking style we all love about Rocky. For conversation, it uses Google’s Gemini model, which essentially determines what the robot says and even which gestures to make based on the situation.
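The write-up doesn’t publish the control code, but driving hobby servos through a PCA9685 HAT from a Raspberry Pi typically takes only a few lines with Adafruit’s ServoKit library. Here is a minimal sketch of the idea; the channel numbers and angles are made-up placeholders, not values from the actual build:

```python
import time
from adafruit_servokit import ServoKit  # pip install adafruit-circuitpython-servokit

kit = ServoKit(channels=16)  # the PCA9685 HAT exposes 16 PWM channels

RIGHT_SHOULDER = 0  # hypothetical channel assignments
RIGHT_ELBOW = 1

def fist_bump():
    """Swing the right arm forward, hold briefly, then return to rest."""
    kit.servo[RIGHT_SHOULDER].angle = 120
    kit.servo[RIGHT_ELBOW].angle = 90
    time.sleep(0.8)
    kit.servo[RIGHT_SHOULDER].angle = 45
    kit.servo[RIGHT_ELBOW].angle = 30

fist_bump()
```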

The maker wrote all the code using assistance from Claude through its command line interface. No fixed animation scripts exist. Instead the language model chooses movements based on context through a process called tool calling. Ask for a fist bump and the arm extends while the robot says something like “fist bump yes much happy.”
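That tool-calling loop can be sketched roughly as follows: the model is told which gestures it can invoke, replies with both a line of dialogue and the name of a gesture, and the program dispatches it. The `ask_model` function below is a stand-in for the real Gemini call, and the gesture functions stand in for the servo routines:

```python
# Minimal sketch of the tool-calling pattern: ask_model is a placeholder for the
# real Gemini API call; the gesture functions stand in for actual servo routines.

GESTURES = {
    "fist_bump": lambda: print("extending arm for a fist bump"),
    "wild_arm_wave": lambda: print("waving arms wildly"),
    "crouch": lambda: print("crouching low"),
}

def ask_model(user_text: str, tool_names: list[str]) -> dict:
    """Placeholder: the real system sends user_text plus the list of available
    gestures to the LLM and gets back spoken text and, optionally, a tool name."""
    return {"reply": "fist bump yes much happy", "tool": "fist_bump"}

def handle(user_text: str) -> None:
    result = ask_model(user_text, list(GESTURES))
    print("Rocky says:", result["reply"])   # spoken through Piper TTS in the build
    if result.get("tool") in GESTURES:
        GESTURES[result["tool"]]()          # run the gesture the model chose

handle("give me a fist bump, Rocky")
```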

Of course, like any good maker project, this one had some challenges along the way. The engineer experimented with pulleys and linear actuators before settling on servos, since they gave him more control over the whole mechanism. Some trial and error was needed to make sure the printed joints wouldn’t break under load. Then there was the enjoyable task of putting it all together, a little hot glue here and a little super glue there to keep everything from falling apart. Wires route neatly inside the body thanks to extension cables. The final assembly stands about the size of a small tabletop model yet moves with enough grace to feel like a living creature from the pages of the novel.

Tech

DIY Electrolysis Machine Removes Hair Permanently

If you talk to the FDA, there’s only one permanent method of hair removal—electrolysis. This involves sticking a needle into a hair follicle, getting it very hot or running a current through it, and then letting heat and/or the lye generated kill the root of the hair dead. Normally, you’d pay someone with a commercial machine to do this for you at great expense. Or, you could do it yourself with a home-built machine, as [n3tcat] did.

Based on the available information out in the wild, [n3tcat] decided to build a galvanic electrolysis machine. This specifically passes current through a needle in the hair follicle to generate lye at the hair bulb, which kills it. The amount of lye generated depends on the amount of current and the time over which it is applied. More lye is more likely to kill a follicle permanently, though there are limits with regards to avoiding scarring, other skin damage, and excessive pain.
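To put the current-times-time relationship in concrete terms, here is a back-of-the-envelope Faraday’s-law estimate in Python. It assumes every electron yields one hydroxide ion (100% faradaic efficiency), which real tissue won’t achieve, so treat it as a rough upper bound for illustration only:

```python
FARADAY = 96485.0  # coulombs per mole of electrons

def lye_estimate_umol(current_ma: float, seconds: float) -> float:
    """Upper-bound estimate of hydroxide (lye) produced at the needle, in micromoles.

    Assumes one hydroxide ion per electron; real treatments fall short of this.
    """
    charge = (current_ma / 1000.0) * seconds  # coulombs = amps * seconds
    return charge / FARADAY * 1e6             # micromoles of OH-

# Illustrative setting on the order of 0.5 mA applied for 30 seconds:
print(f"{lye_estimate_umol(0.5, 30):.2f} umol")  # about 0.16 umol
```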

[n3tcat]’s guide explains the basic theory behind galvanic electrolysis, as well as how the rig was built. An early attempt simply involved hooking up a 12-volt car battery to a standard electrolysis needle, sticking it in a hair, with the other electrode being an aluminium can held by the person being treated. The fun thing was that this allowed varying the current depending on how much contact and how stiffly the person grabbed the can.

After a few successful hair removals this way, [n3tcat] decided to build a better rig. An RP2040 microcontroller was enlisted to run the show, powered by a 3.7-volt lithium rechargeable battery. An OLED screen and a rotary encoder were selected to serve as the interface, while a foot pedal was added for firing off current. A boost converter was used to push the battery voltage up to the vicinity of 15 volts for delivery to the needle, set up to avoid excessive current delivery for safety. A DAC was paired with an LM358 op-amp feeding into a MOSFET to control the current passed to the needle for accurate, controlled treatment, with the RP2040 monitoring the current level via a dedicated ADC. The needle itself got a 3D-printed pen-like handle for better ergonomics, easing the process of slotting the needle into a hair follicle. Everything was then assembled on a cute PCB, and wrapped up in a nice 3D printed housing. The files are available for the curious.
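The firmware isn’t reproduced in the write-up, but the control loop it describes (pedal in, DAC-set current out, ADC watching for overcurrent) can be sketched in MicroPython roughly as below. The pin numbers, scaling factor, and the `set_dac` helper are illustrative placeholders, not values from [n3tcat]’s actual design:

```python
# MicroPython-style sketch of the described control loop; pins, scaling, and
# set_dac() are placeholders, not taken from the real firmware.
from machine import ADC, Pin
import time

pedal = Pin(2, Pin.IN, Pin.PULL_UP)   # foot pedal pulls the line low when pressed
current_sense = ADC(26)               # ADC channel watching the needle current
MAX_CURRENT_MA = 1.0                  # safety ceiling for this illustration

def read_current_ma() -> float:
    # 16-bit ADC reading scaled by a hypothetical shunt/op-amp gain
    return current_sense.read_u16() * (3.3 / 65535) * 1.0

def set_dac(level: int) -> None:
    """Placeholder for writing the external DAC that sets the MOSFET drive."""
    pass

while True:
    if pedal.value() == 0:            # pedal pressed: deliver treatment current
        set_dac(1200)                 # hypothetical setpoint
        if read_current_ma() > MAX_CURRENT_MA:
            set_dac(0)                # overcurrent: shut off immediately
    else:
        set_dac(0)                    # pedal released: no current
    time.sleep_ms(10)
```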

Electrolysis is a process that can cost many thousands of dollars depending on how much hair you hope to remove. Thus, it’s easy to see the appeal in having a rig that lets you do it at home. It’s just one of those things where you have to take the proper precautions to ensure you’re not unduly hurting yourself.  Stay safe out there, hackers!


Tech

Meet ZAYA1-8B, a super efficient, open reasoning model trained on AMD Instinct MI300 GPUs

Even as leading AI providers like OpenAI and Anthropic battle over the compute to train and release ever larger, more powerful models, other labs are going in a different direction — pursuing the development of smaller, more efficient models and often open sourcing them.

The latest worth paying attention to comes from the lesser-known Palo Alto startup Zyphra, which this week released its new reasoning, mixture-of-experts (MoE) language model, ZAYA1-8B, with just over 8 billion parameters and only 760 million active — far fewer than the trillions estimated for the likes of the big labs. Yet, ZAYA1-8B retains competitive performance on third-party benchmarks against GPT-5-High and DeepSeek-V3.2.

It can be downloaded from Hugging Face now, free of charge, under the permissive, standard, enterprise-friendly Apache 2.0 license — and enterprises and indie developers can begin using and customizing it immediately to suit their needs. Individual users can also test it for free on Zyphra Cloud, the startup’s inference solution.

But the real headline is what ZAYA1-8B was trained on: a full stack of AMD Instinct MI300 graphics processing units (GPUs), AMD’s rival to Nvidia’s accelerators released nearly three years ago. The result shows that this platform is capable of producing useful models and is a viable alternative to Nvidia, which has held a preferential position among AI model developers in recent years.

How ZAYA1-8B was trained

The “intelligence density” touted by Zyphra is the result of what they describe as a “full-stack innovation” approach, spanning architecture, pretraining, and reinforcement learning (RL).

ZAYA1-8B is built on Zyphra’s proprietary MoE++ architecture, described in a technical report released by the lab. This architecture introduces three fundamental changes to the standard Transformer architecture that gave rise to large language models (LLMs) and the entire generative AI era:

  1. Compressed Convolutional Attention (CCA): Unlike standard attention mechanisms that struggle with memory as context windows grow, CCA performs sequence mixing in a compressed latent space. This results in an 8x reduction in KV-cache size compared to full multi-head attention, enabling more efficient long-context reasoning.

  2. The ZAYA1 MLP Router: Most MoE models use a linear router to decide which “experts” handle a specific token. Zyphra replaced this with a more expressive multi-layer MLP-based design. To maintain stability during training—a common hurdle for MoEs—they implemented a bias-balancing scheme inspired by PID controllers from classical control theory (a rough sketch of an MLP router follows this list).

  3. Learned Residual Scaling: This controls the growth of the “residual norm” as data flows deeper into the model’s 40 layers, preventing gradient vanishing or explosion with negligible computational overhead.
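Zyphra’s exact router isn’t reproduced in the article, but the general shape of an MLP-based router (versus the usual single linear layer) is easy to sketch in PyTorch. The layer sizes, expert count, and top-k value below are illustrative rather than the ZAYA1 configuration, and the bias-balancing scheme is omitted:

```python
import torch
import torch.nn as nn

class MLPRouter(nn.Module):
    """Toy MLP-based expert router; sizes are illustrative, not Zyphra's design."""
    def __init__(self, d_model: int = 512, n_experts: int = 16,
                 hidden: int = 128, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.net = nn.Sequential(
            nn.Linear(d_model, hidden),
            nn.GELU(),
            nn.Linear(hidden, n_experts),
        )

    def forward(self, x: torch.Tensor):
        # x: (batch, seq, d_model) -> per-token expert scores
        probs = self.net(x).softmax(dim=-1)
        weights, experts = torch.topk(probs, self.top_k, dim=-1)
        return weights, experts  # which experts handle each token, and with what weight

router = MLPRouter()
weights, experts = router(torch.randn(2, 8, 512))
print(experts.shape)  # (2, 8, 2): two experts chosen per token
```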

Reasoning-First Pretraining

A critical differentiator for ZAYA1-8B is that reasoning was integrated from the start of pretraining, rather than being “bolted on” during post-training.

To handle long chain-of-thought (CoT) traces that would otherwise exceed the initial 4K pretraining context, Zyphra developed Answer-Preserving (AP) Trimming.

Think of AP-trimming like a film editor cutting a long scene: instead of cutting the ending (the solution) or dropping the scene entirely, the editor removes the “middle” of the character’s monologue while keeping the beginning (the problem setup) and the final reveal (the answer).

This ensures the model learns the relationship between complex problems and their solutions even when the full internal logic doesn’t yet fit into memory.
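A toy version of that trimming step, with a made-up token budget and head/tail split rather than Zyphra’s actual values, might look like this:

```python
def ap_trim(tokens: list[int], budget: int, head_frac: float = 0.25) -> list[int]:
    """Toy answer-preserving trim: keep the problem setup (head) and the final
    answer (tail), drop the middle of the reasoning trace. The budget and the
    head/tail split are illustrative, not Zyphra's published settings."""
    if len(tokens) <= budget:
        return tokens
    head = int(budget * head_frac)
    tail = budget - head
    return tokens[:head] + tokens[-tail:]

trace = list(range(10_000))              # stand-in for a long chain-of-thought trace
print(len(ap_trim(trace, budget=4096)))  # 4096: setup and answer survive, middle is cut
```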

It seemed to work well on my test query about countertop stain removal, which I posed to ZAYA1-8B running on Zyphra Cloud.

Screenshot of my conversation with ZAYA1-8B on Zyphra Cloud. Credit: VentureBeat

Markovian RSA: redefining test-time compute

The model’s most significant performance leap comes from Markovian RSA, a novel test-time compute (TTC) methodology.

Traditionally, if you want a model to “think harder,” you let it generate a longer chain of thought. However, this often leads to “context bloat,” where the model loses focus as the history grows too long.

Markovian RSA solves this by decoupling “thinking depth” from “context size”. It functions like a recursive scientific peer-review process:

  • The model generates multiple parallel reasoning traces (candidates).

  • It then extracts only the “tails” (the last few thousand tokens) of these traces.

  • These tails are subsampled and presented to the model in a new “aggregation prompt,” asking it to reconcile the different approaches into a better solution.

By carrying forward only the tails (typically a 4K-token budget), the model can reason indefinitely without the context window ever overflowing. In practice, this allows the 700M active parameter ZAYA1-8B to achieve a 91.9% score on AIME ’25, closing the gap with models that have 30 to 50 times its active parameter count.
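The loop itself can be sketched as follows. The `generate` callable stands in for a call to the model, and the round count, candidate count, and tail budget are illustrative rather than Zyphra’s published settings:

```python
def markovian_rsa(problem: str, generate, rounds: int = 3,
                  n_candidates: int = 4, tail_chars: int = 4000) -> str:
    """Toy sketch of the Markovian RSA idea: sample parallel traces, keep only
    their tails, and ask the model to reconcile them into a better solution."""
    prompt = problem
    for _ in range(rounds):
        traces = [generate(prompt) for _ in range(n_candidates)]  # parallel attempts
        tails = [t[-tail_chars:] for t in traces]                 # keep only the endings
        prompt = (
            f"Problem:\n{problem}\n\n"
            "Endings of several independent attempts:\n"
            + "\n---\n".join(tails)
            + "\n\nReconcile these into a single improved solution."
        )
    return generate(prompt)
```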

Because ZAYA1-8B maintains a small total parameter footprint (8.4B), it is uniquely positioned for on-device deployment and local LLM applications. For enterprises, this enables the deployment of high-tier reasoning capabilities—traditionally reserved for massive cloud-based models—directly onto local hardware or edge devices. This “local-first” reasoning approach addresses common enterprise hurdles regarding data residency, latency, and the high cost of persistent API dependencies.

Benchmarks show a remarkably performant small model that punches above its weight class

Zyphra is positioning ZAYA1-8B as a “punch above its weight” model for developers who need high-tier reasoning without the latency or cost of massive frontier models. After all, its active parameter count is much lower than other similarly-sized models, making it much cheaper and less compute-intensive to run in inference.

Chart comparing active parameter counts of ZAYA1-8B and other similarly sized open models. Credit: Zyphra

  • Instruction Following: ZAYA1-8B scores 85.58 on IFEval, remaining competitive with much larger models like Intellect-3 (106B).

  • Agentic Capabilities: On the τ² benchmark, the model reaches 43.12, and 39.22 on BFCL-v4, providing a baseline for its ability to handle tool-calling and multi-turn tasks.

In single-rollout evaluations (without the extra “thinking” time), ZAYA1-8B already outperforms its weight class. It beats Qwen3.5-4B and Gemma-4-E4B on math and code benchmarks.


ZAYA1-8B benchmark comparison chart. Credit: Zyphra

When Markovian RSA is enabled, the results are startling:

  • HMMT ’25 (Math): ZAYA1-8B hits 89.6%, surpassing Claude 4.5 Sonnet (79.2%) and GPT-5-High (88.3%).

  • LiveCodeBench (Coding): The model achieves 69.2%, outperforming DeepSeek-R1-0528.

Zyphra notes that while the model is a specialist in algorithmic reasoning, it lags slightly behind larger models on “knowledge-heavy” tasks like broad factual retrieval (MMLU-Pro), which suggests that while reasoning can be compressed into smaller cores, factual memory still benefits from raw parameter count.

Apache 2.0 open licensed for research and commercial usage

Zyphra has released ZAYA1-8B under the Apache-2.0 license. This is a critical choice for the developer community. Unlike “copyleft” licenses like the GPL, which require any derived work to also be open-source, Apache-2.0 is highly permissive.

For developers and enterprises, this means they can use, modify, and distribute ZAYA1-8B—even within proprietary, commercial applications—without being forced to open-source their own codebases.

It also includes an explicit grant of patent rights from contributors, providing a layer of legal safety for startups building on top of Zyphra’s architecture. By opting for Apache-2.0 over more restrictive “research-only” licenses often seen from frontier labs, Zyphra is signaling a commitment to the open-weight ecosystem.

To deploy ZAYA1-8B, developers must use specific branches from Zyphra’s forks of core libraries, as the architecture requires specialized handling:

  • Custom Forks: Users should install the zaya1 branch from Zyphra’s versions of the vllm and transformers libraries.

  • Deployment Flags: When starting a vLLM server, specific flags are required to handle the reasoning parser and tool-calling (e.g., --reasoning-parser qwen3 and --tool-call-parser zaya_xml); a usage sketch follows this list.

  • Parallelism Strategy: For multi-GPU environments, Zyphra recommends using Data Parallelism (DP) combined with Expert Parallelism (EP). Notably, Tensor Parallelism (TP) for the model’s CCA mechanism is not currently supported, making DP+EP the optimal path for scaling inference throughput.
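Putting those pieces together, a minimal usage sketch might look like the following. It assumes the zaya1 branches of Zyphra’s forks are already installed and that the server was launched with the flags above; the Hugging Face repo id is a placeholder to be replaced with the one on Zyphra’s model card:

```python
# Assumes Zyphra's vllm fork (zaya1 branch) is installed and a server was started like:
#   vllm serve <ZAYA1-8B repo id> --reasoning-parser qwen3 --tool-call-parser zaya_xml
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM's default endpoint

response = client.chat.completions.create(
    model="<ZAYA1-8B repo id>",  # must match the name the server was launched with
    messages=[{"role": "user", "content": "Prove that the sum of two even numbers is even."}],
)
print(response.choices[0].message.content)
```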

Background on Zyphra

Zyphra: A New Paradigm for Intelligence Density

Founded in 2021 and headquartered in Palo Alto, California, Zyphra Technologies is a full-stack artificial intelligence laboratory dedicated to building human-aligned artificial general intelligence (AGI) — that which outperforms people at most tasks — through a decentralized, open-source framework.

According to the company’s official mission statement, Zyphra seeks to challenge the “centralized” dominance of monolithic cloud models by focusing on “intelligence density”—a core guiding principle that aims to maximize the reasoning and logic extracted per parameter and per FLOP.

Zyphra CEO and Co-Founder Krithik Puthalath explained previously to VentureBeat that this strategy is essential for enabling high-performance AI to run locally on hardware such as tablets, wearable glasses, and enterprise servers, thereby ensuring user privacy and reducing reliance on third-party cloud infrastructure.

The company’s technical identity is deeply informed by computational neuroscience, led by Co-Founder and Chief Scientist Beren Millidge.

According to Millidge’s personal website, he currently serves as a Postdoctoral Researcher at the University of Oxford’s Nuffield Department of Clinical Neurosciences, where his research focuses on deep credit assignment and mathematical models of the brain.

Millidge, who earned his PhD from the University of Edinburgh, has pioneered research into active inference and the “free-energy principle,” concepts that directly influence Zyphra’s pursuit of multimodal architectures capable of long-term memory and continual learning.

This neuroscientific influence was central to the design of Zyphra’s prior Zamba model, released in 2024, which mimics the cortex-hippocampus interaction to share information across sequential layers. A recent TED Talk video provides insight into Millidge’s perspective on the intersection of biological neuroscience and AI, which serves as the theoretical foundation for Zyphra’s model architectures.

Zyphra has achieved significant technical milestones through a deep integration with the AMD hardware ecosystem, as detailed in the company’s research documentation.

Financial data from PitchBook indicates that Zyphra is currently a venture-backed company that attained “Unicorn” status in June 2025 following a $110 million Series A funding round. According to PitchBook and company press releases, Zyphra is supported by a group of strategic investors including Advanced Micro Devices (AMD), IBM, Bison Ventures, and BC VC. With a team of approximately 31 employees as of 2026, the company continues to expand its footprint through the Zyphra Inference Cloud and Maia, an intelligent assistant platform designed to bring advanced search and productivity tools to enterprise teams.

Community reactions and industry context

The announcement has resonated strongly within the AI community, garnering nearly 1 million views on X/Twitter within 24 hours. The excitement largely centers on two factors: the viability of the AMD stack and the efficiency of the reasoning “cascade.”

Technologists have noted that Zyphra’s post-training process—a 4-stage RL cascade—is unusually disciplined. Most labs use a single round of RL, but Zyphra’s pipeline includes a “reasoning warmup” followed by a curriculum of 400 adaptive puzzle-like environments (RLVE-Gym) before finally moving to behavioral polishing.

One of the most praised “under-the-hood” details is Router Replay. In MoE models, training can become unstable if the “trainer” engine and the “inference” engine make slightly different decisions about which expert to use for a token due to floating-point noise. Zyphra’s system records the exact expert choices made during generation and forces the trainer to use them, effectively “pinning” the computation path and ensuring higher learning stability.
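The announcement doesn’t include code for this, but the idea can be sketched as follows: the inference engine records which experts it picked for each token, and the trainer reuses those indices instead of recomputing its own (possibly slightly different) choices. Shapes and function names here are illustrative:

```python
import torch

def route(logits: torch.Tensor, top_k: int = 2, replay: torch.Tensor | None = None):
    """Toy router replay: if `replay` holds expert indices recorded during
    generation, reuse them; otherwise pick experts from this engine's own logits."""
    if replay is not None:
        experts = replay                                    # pin the computation path
    else:
        experts = torch.topk(logits, top_k, dim=-1).indices
    weights = torch.gather(logits.softmax(dim=-1), -1, experts)
    return weights, experts

# Inference engine computes and records its expert choices.
gen_logits = torch.randn(4, 16)                 # 4 tokens, 16 experts
_, recorded = route(gen_logits)

# Trainer sees slightly different logits (floating-point noise), but replays the
# recorded indices so both engines follow the same expert path.
train_logits = gen_logits + 1e-6 * torch.randn_like(gen_logits)
_, replayed = route(train_logits, replay=recorded)
assert torch.equal(recorded, replayed)
```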

As the industry faces a potential plateau in the benefits of simply adding more parameters, ZAYA1-8B provides a compelling counter-narrative: that the next frontier of AI isn’t just about bigger clusters, but about smarter “thinking” algorithms that can do more with less.


Tech

Open-source project wants to bring stereoscopic 3D gaming back from the dead

The recently unveiled “wiz3D” project is designed to “resurrect” stereoscopic support in older games, allowing them to run with compatible goggles and other stereo display devices. The open-source tool acts as a stereoscopic 3D wrapper, injecting hooks into gaming APIs to generate real-time stereo 3D output on modern Windows systems….
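The excerpt doesn’t show wiz3D’s hook internals, but the geometric core of any stereoscopic wrapper is to render each frame twice from two horizontally offset camera positions. A minimal numpy sketch of that per-eye offset (a real wrapper would also adjust the projection matrix it intercepts from the game):

```python
import numpy as np

def eye_view(view: np.ndarray, separation: float, eye: str) -> np.ndarray:
    """Shift a 4x4 view matrix left or right by half the eye separation.
    Illustrative only: a real wrapper like wiz3D also skews the projection per eye."""
    offset = -separation / 2.0 if eye == "left" else separation / 2.0
    shift = np.eye(4)
    shift[0, 3] = offset              # translate along the camera's x axis
    return shift @ view

mono_view = np.eye(4)                 # stand-in for the view matrix hooked from the game
left = eye_view(mono_view, 0.065, "left")    # ~65 mm interocular distance
right = eye_view(mono_view, 0.065, "right")
print(left[0, 3], right[0, 3])        # -0.0325 0.0325
```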

Tech

Elon Musk’s lawsuit is putting OpenAI’s safety record under the microscope

Elon Musk’s legal effort to dismantle OpenAI may hinge on how its for-profit subsidiary enhances or detracts from the frontier lab’s founding mission of ensuring that humanity benefits from artificial general intelligence.

On Thursday, a federal court in Oakland heard a former employee and board member say the company’s efforts to push AI products into the marketplace compromised its commitment to AI safety.

Rosie Campbell joined the company’s AGI readiness team in 2021, and left OpenAI in 2024 after her team was disbanded. Another safety-focused team, the Superalignment team, was shut down in the same time period.

“When I joined it was very research-focused and common for people to talk about AGI and safety issues,” she testified. “Over time it became more like a product-focused organization.”

Under cross-examination, Campbell acknowledged that significant funding was likely necessary for the lab’s goal of building AGI, but said creating a super-intelligent computer model without the right safety measures in place wouldn’t fit with the mission of the organization she originally joined.

Campbell pointed to an incident where Microsoft deployed a version of the company’s GPT-4 model in India through its Bing search engine before the model had been evaluated by the company’s Deployment Safety Board (DSB). The model itself did not present a huge risk, she said, but the company needed “to set strong precedents as the technology gets more powerful. We want to have good safety processes in place we know are being followed reliably.”

OpenAI’s attorneys also had Campbell admit that in her “speculative opinion,” OpenAI’s safety approach is superior to that at xAI, the AI company that Musk founded that was acquired by SpaceX earlier this year.


OpenAI releases evaluations of its models and shares a safety framework publicly, but the company declined to comment on its current approach to AGI alignment. Dylan Scandinaro, its current head of Preparedness, was hired from Anthropic in February. Altman said the hire would let him “sleep better tonight.”

The deployment of GPT-4 in India, however, was one of the red flags that led OpenAI’s non-profit board to briefly fire CEO Sam Altman in 2023. That incident took place after employees including then-chief scientist Ilya Sutskever and then-CTO Mira Murati complained about Altman’s conflict-averse management style. Tasha McCauley, a member of the board at the time, testified about concerns that Altman was not forthcoming enough with the board for its unusual structure to function.

McCauley also discussed a widely-reported pattern of Altman misleading the board. Notably, Altman lied to another board member about McCauley’s intention to remove Helen Toner, a third board member who published a white paper that included some implied criticism of OpenAI’s safety policy. Altman also failed to inform the board about the decision to launch ChatGPT publicly, and members were concerned about his lack of disclosure of potential conflicts of interest.

“We are a non-profit board and our mandate was to be able to oversee the for-profit underneath us,” McCauley told the court. “Our primary way to do that was being called into question. We did not have a high degree of confidence at all to trust that the information being conveyed to us allowed us to make decisions in an informed way.”

However, the decision to boot Altman came at the same time as a tender offer to the company’s employees. McCauley said that when OpenAI’s staff started to side with Altman and Microsoft worked to restore the status quo, the board ultimately reversed course, with the members opposed to Altman stepping down.

The apparent failure of the non-profit board to influence the for-profit organization goes directly to Musk’s case that the transformation of OpenAI from research organization into one of the largest private companies in the world broke the implicit agreement of the organization’s founders.

David Schizer, a former Dean of Columbia Law School who is being paid by Musk’s team to act as an expert witness, echoed McCauley’s concerns.

“OpenAI has emphasized that a key part of its mission is safety and they are going to prioritize safety over profits,” Schizer said. “Part of that is taking safety rules seriously: if something needs to be subject to safety review, it needs to happen. What matters is the process issue.”

With AI already deeply embedded in for-profit companies, the issue goes far beyond a single lab. McCauley said the failures of internal governance at OpenAI should be a reason to embrace stronger government regulation of advanced AI—”[if] it all comes down to one CEO making those decisions, and we have the public good at stake, that’s very suboptimal.”


Tech

Report: As AI electricity demands soar, Microsoft weighs retreat from ambitious carbon-free energy pledge

Microsoft’s Fairwater data center near Atlanta is part of the company’s broader AI expansion. (Microsoft Photo)

Microsoft is considering scaling down or scuttling a pledge to match its electricity use with carbon-free power around the clock by 2030, according to Bloomberg.

As tech companies race to bring more energy-hungry data centers online, their climate targets are growing increasingly difficult to hit. Microsoft has been a vocal leader in climate action, setting ambitious emissions goals and backing carbon-reducing technologies — but that momentum appears to be softening on multiple fronts.

Last month, the New York Times reported that the Redmond, Wash.-based company was pausing its future purchases of carbon removal credits, though company leadership said the program wasn’t ending. Microsoft has been the driving force in that industry, which includes startups that pull carbon from the air or capture it from industrial emissions, as well as nature-based solutions for storing or trapping carbon in soil or rocks.

And following years of announcements celebrating new renewable energy projects, Bloomberg reported in March that Microsoft was in “exclusive talks” with Chevron and Engine No. 1 to develop a gas-powered plant in Texas that would generate electricity for a data center campus.

The company is standing by its sustainability targets.

“Microsoft remains committed to its company goals to be carbon negative, water positive, zero waste, and protect ecosystems. In 2025, we met a milestone on this journey by matching 100% of our annual global electricity consumption with renewable energy,” said Melanie Nakagawa, Microsoft’s chief sustainability officer, in an emailed statement.

Microsoft and cross-town rival Amazon have both hit the goal of matching their total energy use with purchases of an equal quantity of clean power. But in 2021, Microsoft raised the bar by committing to round-the-clock renewable energy matching — a harder target to hit given that sources like wind and solar aren’t always available.

The company even teamed up with Seattle startup LevelTen Energy, Google, and two clean energy companies in 2023 to create a marketplace for organizations pursuing all renewable power 24/7.

Bloomberg, citing unnamed sources, reported that discussions over the tougher energy purchase goal were ongoing, with no final decision reached.

Nakagawa did not directly address the target in her statement, adding that Microsoft continually reviews and adjusts its climate approach “as markets mature, policy environments evolve, and emerging innovative solutions scale.”

“Any adjustments we make are part of our disciplined approach — not a change in our long-term ambition,” she said.

Those ambitions keep getting harder to reach. Microsoft CFO Amy Hood said last month that capital expenditures — which largely fund data centers and hardware — would exceed $40 billion in the current quarter, setting a new record. Total capital spending is expected to hit $190 billion this year.

The computing facilities are the top contributor to Microsoft’s expanding carbon footprint, driven by their energy demands and the carbon-intensive steel and concrete required to build them. Microsoft’s carbon impact grew 23.4% from 2020 to 2024, even as the company continues to target net zero emissions by the end of the decade.

Despite those challenges, Microsoft is still signing clean energy deals. It recently agreed to deploy 1.2 gigawatts of solar and battery projects in Wisconsin with We Energies — roughly half of Seattle City Light’s total generation capacity. The energy is expected to come online in December 2028, a company spokesperson said.


Tech

Google’s Screenless Fitbit Air Brings Constant Health Tracking in the Smallest Package Yet

Google revealed its new Fitbit Air today, and it’s the smallest tracker yet, designed exclusively for people who want some consistent health information without the hassle of a large, flashy device getting in the way. At only a little more than five grams on its own and around 12 grams with a band, this gadget glides onto your wrist and stays there for days on end with no problems. The engineers designed it to resemble a smooth pebble that nestles against the skin, and they managed to incorporate at least 35% recycled material by weight.



You have three band options to choose from right away: a recycled performance loop that has been a game changer so far, a sweatproof silicone active band for more physically demanding activities, and a sleeker elevated modern style that looks almost like real jewelry rather than a wearable. The first thing you’ll notice about this Fitbit Air is how low-key it is; there are no flashing lights, no obnoxious vibrations to disturb your day, and no large display to continuously check the time or stare at notifications.




With no display, you won’t be distracted by notifications every two minutes. Instead, it silently gathers data 24 hours a day, seven days a week and transmits it directly to your phone. Heart rate is continuously recorded, along with irregular heart rhythm alerts, blood oxygen levels, resting heart rate and heart rate variability, as well as the standard sleep stage tracking and workout recognition, all without any fussing. Over time, the system begins to establish these patterns, providing you with a fuller picture of your recovery and daily mobility.

Power-wise, you can expect it to last a full week on a single charge under normal use, and a 5-minute top-up adds another day’s worth of battery when you’re short on time. Water resistance is also adequate, rated to 50 meters, so it can withstand showers, swims, and sweaty workouts without issue. The magnetic charger is reversible and easy to attach, so you won’t have to fumble with it in the dark. All of these small elements add up to a device that simply sits on your wrist without drawing attention to itself.

Data begins to flow into the all-new Google Health app, which now serves as a central hub for all of your Fitbit Air data, but it does more than that. The software collects a variety of other data, such as your phone’s health records or connected medical records, and then displays daily summaries, a weekly or monthly trend view, and some good ideas for how to improve. There’s also a coach tool built in, powered by Google AI, that generates entirely personalized training regimens depending on your recent sleep quality, heart rate recovery, or even whether your schedule has changed. You can ask the coach what’s going on with this fatigue you keep feeling after a trip, or how you should alter your exercises when you’re nursing an injury, and it will spell out all sorts of specific steps based on all of the data from the Fitbit Air.

Pricing maintains the low entry point, at $99.99, which includes three months of Google Health Premium for free. There’s also a Stephen Curry-themed variant that costs $130 and includes a distinctive band design. Orders begin immediately, with devices slated to arrive in the United States and a few other locations beginning May 26th. The standard colors are obsidian, fog, berry, and lavender, but availability may vary by region. It works with Android phones running version 11 or later, as well as iPhones on iOS 16.4 or later. Setting it up is simple: all you need is a Google account and the Health app, and the tracker will quickly pair with your device over Bluetooth.


Tech

Samsung is pulling TVs and appliances from China after losing $138 million to local competition

The South Korean technology giant said it will make every effort to minimize the impact for existing customers, and is reviewing its support infrastructure for business partners. The decision won’t affect Samsung’s other divisions in the region, meaning they will continue to sell products such as smartphones and tablets as…

Tech

John Roberts Wants You To Stop Believing Your Own Eyes

from the it’s-a-true-mystery dept

John Roberts has spent years whining about how totally unfair it is that people claim he and his colleagues rule based on partisan leanings. He did it in 2014. He did it in 2017. He did it in 2019. Hell, he did it a couple months ago too. So it’s little surprise that he’s out there whining about people calling the Court partisan yet again.

Speaking at a conference for lawyers and judges in Hershey, Roberts said the Supreme Court is required to make decisions that are not popular and bemoaned that there is not a better understanding among the public of how the court operates.

“I think at a very basic level, people think we’re making policy decisions, [that] we’re saying we think this is what things should be as opposed to this is what the law provides,” Roberts said. “I think they view us as truly political actors, which I don’t think is an accurate understanding of what we do. I would say that’s the main difficulty.”

While he conceded that people have a right to criticize the court and its decisions, he added that there is a tendency to focus too much on politics.

“We’re not simply part of the political process, and there’s a reason for that, and I’m not sure people grasp that as much as is appropriate,” Roberts said.

The timing here is something else — a week after an obviously partisan ruling in Callais, which stripped away Section 2 of the Voting Rights Act. Notably, Roberts himself had pointed to Section 2’s existence back in 2013 as the reason that they could kill off Section 4 of the Voting Rights Act (which required a pre-review of voting maps for racial bias). And now he helped kill Section 2.

If it were just about making decisions that are “not popular,” then… why are nearly all of his “unpopular” decisions quite clearly in support of one party’s goals and ideology? Any look at the details shows why people conclude that Roberts has a partisan bent to his rulings:

  • In the 15 precedent-overturning cases with partisan implications, in other words, Justice Roberts voted for a conservative outcome 14 times (93%).
  • Chief Justice Roberts is one of only two justices since 1946 to support 100% of decisions overturning precedent that led to conservative outcomes.
  • Roberts’s record in precedent-overturning cases is the second-most conservative among 37 justices who have ruled in at least 5 precedent-overturning cases since 1946. With 84% conservative votes in precedent-overturning cases, Roberts only trails Justice Alito’s 88%.

Gee. I wonder why people think the Court is partisan, chief?

And, on Monday (as we pointed out) Roberts joined Alito and the conservatives on the bench to break standard practice and precedent, supporting Louisiana ripping up its election maps to favor more Republican seats — even as voting had already started — even though, just months ago, he and the conservatives had said that Texas’ map (deemed an unconstitutional racial gerrymander by a Trump-appointed judge) couldn’t be torn up because it was “too close” to an election and voters needed “certainty.” There is literally no explanation for December being too close to change the maps while May somehow required rushing a map change… in the same election… other than the partisan leaning of those two decisions.

Indeed, as Liz Dye points out, we have decades of the Supreme Court doing exactly this: it allows for election map changes when it will help Republicans, but says “no can do, too close to an election” whenever it’s expected to help Democrats:

The Court’s conservatives routinely scold lower court judges for changing voting rules too close to an election. This violates the Purcell principle, named for a 2006 case in which the Court rebuked the 9th Circuit for blocking Arizona’s voter ID law too close to an election and causing voter “confusion.” For 20 years, the Supreme Court’s conservatives have selectively invoked Purcell to allow elections to proceed using maps that courts have already deemed to be unlawful.

In 2022, after lower courts struck down Alabama’s electoral map for violating Section 2 of the Voting Rights Act and disenfranchising Black voters, the Supreme Court intervened to allow Alabama to use the unconstitutional map anyway in the midterms. In 2023, the Court agreed that the maps were illegal under the VRA — but only after they’d let Alabama Republicans use them to take back the House.

Just five months ago, the Court cited Purcell when it rebuked a federal district court for “improperly inserting itself into an active primary campaign” by blocking Texas’s unconstitutionally racial gerrymander.

But given the chance to insert themselves into an active primary campaign, they regularly jump in with both feet. And in fact they’re equally happy to stomp into the primary itself.

So, chief, if you want people to stop thinking the Court is partisan, maybe stop making such obviously partisan decisions.

Oh, and also maybe talk to your colleagues. After all, at the very moment you were whining about people thinking the court was partisan, your colleague Justice Neil Gorsuch was appearing on a famously rightwing podcast to talk about why “young conservatives must have courage to stand by their beliefs.” Sounds kinda partisan.

And just a few weeks ago, your colleague Justice Clarence Thomas gave a speech arguing that progressives were an existential threat to America.

Gosh. Why would the public think some of you are partisan. I wonder!

And, let’s not forget that Thomas’s wife was supportive of the attempt to steal an election from the rightfully elected Joe Biden in support of the failed Republican campaign of Donald Trump. And then there’s Justice Alito’s wife who, somewhat infamously, flew political flags outside their home, including one in support of the January 6th insurrection.

A real mystery, truly. Who could possibly think that there might be partisan bias? How unfair.

But you keep saying how unfair this is. Year after year, conference after conference, the same complaint: people just don’t understand us.

At some point, Chief Justice, the more productive question isn’t why the public doesn’t grasp your supposed non-partisanship. It’s why — after decades of rulings that break almost exclusively in one direction, colleagues who deliver speeches about the courage of young conservatives and the existential threat of progressivism, and spouses flying insurrection flags — you’re still surprised that they don’t.

Maybe the problem isn’t the public’s understanding. Maybe it’s the Court’s behavior.

Filed Under: clarence thomas, john roberts, partisanship, samuel alito, supreme court
