Dario Amodei is not the kind of CEO who talks loosely about numbers. The Anthropic co-founder and chief executive, a former VP of research at OpenAI with a PhD in computational neuroscience from Princeton, has built a reputation for measured public statements — particularly around the financial performance of a company that, until recently, disclosed almost nothing about its business.
So when Amodei took the stage at Anthropic’s Code with Claude developer conference on Wednesday and offered a genuinely striking piece of financial candor, the room paid attention.
“We tried to plan very well for a world of 10x growth per year,” Amodei said during a fireside chat with Anthropic’s chief product officer, Ami Vora. “And yet we saw 80x. And so that is the reason we have had difficulties with compute.”
Anthropic had planned for tenfold growth. But revenue and usage increased 80-fold in the first quarter on an annualized basis, a rate Amodei described as “just crazy” and “too hard to handle.”
The number demands context. Annualized growth rates can overstate sustained performance — a single strong quarter, extrapolated across a full year, can paint a picture that doesn’t hold. Amodei knows this. But the underlying trajectory is not a mirage. Anthropic has crossed a $30 billion annualized revenue run rate, up sharply from roughly $9 billion at the end of 2025, and that growth is being driven largely by enterprise demand. The company’s revenue trajectory has been relentless: $87 million run rate in January 2024, $1 billion by December 2024, $9 billion by end of 2025, $14 billion in February 2026, $19 billion in March, and $30 billion in April.
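The arithmetic behind an "annualized" figure is worth spelling out. A minimal sketch of how a single quarter's growth compounds into an annualized multiple, using illustrative numbers rather than Anthropic's actual quarter-by-quarter data:

```python
# Sketch: how one quarter's growth compounds into an "annualized"
# multiple. Illustrative numbers only, not Anthropic's actual data.

def annualized_multiple(quarterly_multiple: float) -> float:
    """Extrapolate one quarter's growth multiple across four quarters."""
    return quarterly_multiple ** 4

# A run rate that roughly triples within a single quarter
# annualizes to about 81x.
print(annualized_multiple(3.0))   # 81.0

# But if growth then cools to 1.5x per quarter, the realized
# full-year multiple is an order of magnitude smaller. This is why
# annualized rates can overstate sustained performance.
realized = 3.0 * 1.5 ** 3
print(realized)                   # 10.125
```

The gap between the two printed numbers is the whole caveat: extrapolation assumes the hottest quarter repeats four times.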
For context: Salesforce took about 20 years to reach $30 billion in annual revenue. Anthropic did it in under three years from a standing start.
Anthropic’s annualized revenue run rate surged from $87 million in January 2024 to $30 billion by April 2026 — a pace that CEO Dario Amodei said outstripped the company’s own forecasts by a factor of eight. Note: Run-rate figures are annualized snapshots, not full-year GAAP revenue. Log scale used. (Image Credit: Michael Nunez / VentureBeat)
Claude Code became the fastest-growing product in enterprise software history
The growth story at Anthropic is, to a remarkable degree, a single-product story. Claude Code, the company’s agentic AI coding tool launched publicly in mid-2025, has become the fastest-growing product in the company’s history — and, by several measures, one of the fastest-growing software products ever built.
Claude Code hit $1 billion in annualized revenue within six months of launch, and the growth hasn’t slowed down. By February 2026, the product was generating over $2.5 billion in run-rate revenue. The company also said Claude Code’s weekly active users had doubled since January 1 and that business subscriptions had quadrupled since the start of 2026.
The mechanics of the product are straightforward. Claude Code is not a chatbot that suggests snippets. It reads a codebase, plans a sequence of actions, executes them using real development tools, evaluates the result, and adjusts its approach. The developer sets the objective and retains control over what gets committed, but the execution loop runs independently. The average developer using Claude Code now spends 20 hours per week working with the tool.
At Anthropic itself, the majority of code is now written by Claude Code. Engineers focus on architecture, product thinking, and continuous orchestration: managing multiple agents in parallel, giving direction, and making the decisions that shape what gets built.
That last point may be the most revealing detail Amodei disclosed at the conference: this is the first year Anthropic’s own internal pull requests have inflected upward due to Claude’s work on the company’s own codebase. The tool that Anthropic sells to developers is now a material contributor to Anthropic’s own engineering output. That creates a feedback loop that is almost impossible for competitors without a comparable product to replicate — the company is using its own product to build the next version of its own product.
The enterprise numbers tell the same story. The company now counts over 1,000 enterprise customers spending more than $1 million per year on Claude services, a figure that has doubled since February. Much of this increase has been fueled by a wave of corporate customers including Uber and Netflix.
Amodei framed the adoption curve in economic terms. “Software engineers are the ones who are fastest to adopt new technology,” he said on stage. “It’s a foreshadowing of how things are going to work across the economy, and how the economy is going to be transformed by AI.”
Anthropic’s 80x growth created a compute crisis it couldn’t solve alone
Hypergrowth creates its own category of problem. When demand outstrips supply by an order of magnitude, the constraint is not go-to-market strategy or product-market fit. The constraint is physics.
The company is growing so fast that its infrastructure has struggled to keep up, forcing Anthropic into what may be the most unexpected partnership in the current AI cycle. Amodei’s comments came hours after Anthropic announced a deal with Elon Musk’s SpaceX to use all of the compute capacity at his company’s Colossus 1 data center in Memphis, Tennessee. As part of the agreement, Anthropic will get access to more than 300 megawatts of capacity — over 220,000 Nvidia GPUs, including dense deployments of H100, H200, and next-generation GB200 accelerators.
The deal is remarkable for several reasons. Musk has been, until very recently, one of Anthropic’s most vocal critics. He has said Anthropic is “doomed to become the opposite of its name” and wrote in February that “Anthropic hates Western Civilization.” But on Wednesday, Musk changed his tune, saying he spent a lot of time with senior members of the Anthropic team over the past week and that he was “impressed.” “Everyone I met was highly competent and cared a great deal about doing the right thing. No one set off my evil detector,” Musk wrote.
The strategic logic on both sides is clear. xAI’s Colossus 1 ended up with capacity that Grok’s user base never grew into, while Anthropic needs compute immediately. Anthropic has been signing deals with Amazon, Google, Nvidia, and Microsoft for more compute capacity, but most of that isn’t expected to come online until late 2026 or early 2027. The SpaceX deal gives Anthropic a significant boost now — the key word being “now.”
Last month, Anthropic said demand for Claude has led to “inevitable strain on our infrastructure,” which has impacted “reliability and performance” for its users, particularly during peak hours. The company admitted in a postmortem from late April that three bugs had affected Claude Code since March 4, and that internal tests hadn’t caught them, leading to several weeks of degraded performance. Amodei said at the Code with Claude conference that the company is “working as quickly as possible to provide more” capacity and will “pass that compute on to you as soon as we can.”
A near-trillion-dollar valuation makes Anthropic’s IPO the most anticipated debut in years
The growth figures arrive at a moment when Anthropic’s valuation is itself becoming one of the defining financial stories of the AI era.
Anthropic has begun weighing a fresh funding round that would value the company at more than $900 billion, according to people familiar with the matter, potentially leapfrogging its longtime rival OpenAI as the world’s most valuable AI startup. The velocity of the escalation is difficult to overstate. From $61.5 billion in March 2025, to $183 billion by its Series F in September, to $380 billion in February, to, if the current discussions proceed, more than $900 billion in May. Anthropic’s shares were already trading at an implied $1 trillion valuation on secondary markets earlier this month.
Instead of cashing out, many existing investors are waiting to potentially exit during Anthropic’s anticipated IPO later this year. The company is raising what is likely to be its last private round before going public to fund its massive computing needs. Bloomberg has reported that the company is weighing an IPO as early as October 2026, with Goldman Sachs, JPMorgan, and Morgan Stanley already in early discussions.
Anthropic is also building out infrastructure on longer time horizons. Amazon has agreed to invest up to $25 billion in Anthropic, securing up to 5 gigawatts of compute capacity for training and deploying Claude models. Anthropic also secured 5 gigawatts of computing capacity as part of a separate deal with Google and Broadcom that will start to come online next year. The total commitment is staggering — tens of gigawatts of compute across three separate hardware ecosystems: Amazon’s Trainium chips, Google’s TPUs via Broadcom, and Nvidia GPUs through SpaceX and Microsoft Azure.
For perspective: Anthropic’s $30 billion run rate exceeds the trailing twelve-month revenues of all but approximately 130 S&P 500 companies. A company that was essentially pre-revenue in early 2024 now out-earns most of the Fortune 500.
At a $30 billion annualized run rate, Anthropic would out-earn roughly three quarters of S&P 500 companies by revenue — a striking milestone for a company that was essentially pre-revenue in early 2024. Note: Anthropic figure is an annualized run rate, not trailing twelve-month GAAP revenue. (Image Credit: Michael Nunez / VentureBeat)
That comparison comes with caveats. Private-market revenue run rate is not the same thing as audited GAAP revenue, gross margin, free cash flow, or public float. OpenAI has internally argued that Anthropic’s $30 billion figure is overstated by roughly $8 billion, pointing to questions about whether revenues from AWS and Google Cloud should be reported at gross value or net of the partner’s cut. The accounting question will ultimately be resolved when both companies file IPO prospectuses — but even on a net basis, Anthropic’s growth rate is unlike anything in enterprise software history.
Dario Amodei’s vision for AI extends far beyond coding — and he’s given himself a deadline
The financial story — 80x growth, a near-trillion-dollar valuation, a scramble to secure enough GPUs to meet demand — is dramatic on its own terms. But Amodei used his time on stage to place it inside a larger thesis about where AI is headed.
He described a progression from single agents to multiple agents to what he called whole organizational intelligence — from “a team of smart people in a room” to “a country of geniuses in the data center.” The framing is deliberately expansive. What Anthropic is selling today is a coding tool. What Amodei is describing is a future in which entire categories of knowledge work are performed by fleets of AI agents operating in parallel, supervised by humans who define objectives and review outputs.
He reiterated a prediction he made roughly a year ago: that 2026 would see the first billion-dollar company run entirely by a single person. “Hasn’t quite happened yet,” he said. “But we’ve got seven more months.”
The company has also been navigating political headwinds. The Pentagon declared Anthropic a supply chain risk in March, blacklisting it from work with the military. The company has warned the designation could result in billions in lost revenue, with over one hundred enterprise customers reportedly expressing doubts about continuing their relationships.
And yet, as that dispute makes its way through the legal system, Anthropic is only getting more popular. Amodei said this week he’s eventually hoping for “more normal” expansion.
There is a temptation, when covering a company growing at this rate, to let the numbers speak for themselves. They shouldn’t. Growth at 80x annualized is not a business plan — it’s an emergency. It means demand has outrun infrastructure, that customers want something the company cannot yet reliably deliver at scale, and that every week of constrained capacity is a week during which competitors can close the gap.
The investors funding Anthropic — including SoftBank, Amazon, Nvidia, Google, a16z, Lightspeed, and ICONIQ — are making a specific bet: that compute costs continue to fall per unit of intelligence, that revenue keeps compounding faster than burn, and that whoever owns the AI infrastructure layer in 2029 will generate returns that make the interim losses irrelevant.
Amodei’s candor at Code with Claude was not a victory lap. It was a diagnostic — an admission that his company is running faster than it can steer. He planned for a world of 10x growth and got 80x instead. Now he has seven months to prove that the infrastructure, the organization, and the vision can catch up to the demand. The country of geniuses in the data center is getting crowded. The question is whether anyone remembered to build enough rooms.
Makers are always chasing a dream or a wild concept, and few of those ideas come as close to producing results as this one. The YapStopper 3000 is a device that can detect what someone is saying from across the room, add a tiny bit of delay, then fire that precise audio back at them, to the point that their brain can’t seem to put two meaningful sentences together.
Delayed auditory feedback creates a slight mismatch between what a person says and what they hear a split second later. The brain expects instant feedback from its own speech, so when there is a delay, the timing is thrown off completely. People stumble over their words, producing protracted pauses and fractured thoughts. Typically, getting that effect to work requires a special setup, such as wearing headphones or sitting in a carefully controlled environment, but this small build skips all of that and beams the delayed voice directly at the individual who refuses to stop talking.
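The core of delayed auditory feedback is trivial to reproduce in software: buffer the incoming samples and play each one back a fixed interval later. A minimal sketch of that delay line (pure Python, no real audio I/O; the parameters are illustrative):

```python
from collections import deque

# Sketch of the core delayed-auditory-feedback operation: a fixed
# delay line over a stream of audio samples (no real audio I/O here).

def make_delay_line(delay_ms: float, sample_rate: int = 16_000):
    """Return a function mapping each input sample to the sample
    heard delay_ms earlier (silence until the buffer fills)."""
    n = int(sample_rate * delay_ms / 1000)
    buf = deque([0.0] * n, maxlen=n)

    def process(sample: float) -> float:
        out = buf[0]          # oldest sample: the delayed voice
        buf.append(sample)    # newest sample enters the line
        return out

    return process

# Delays in the 200 ms range are often cited as the most disruptive
# to fluent speech.
daf = make_delay_line(delay_ms=200, sample_rate=1000)  # 200-sample delay
stream = [float(i) for i in range(400)]
delayed = [daf(s) for s in stream]
print(delayed[250])  # 50.0: input sample 50 emerges 200 samples later
```

In a real build this loop would run per audio frame between the microphone input and the transmitter output; everything else in the device is delivery.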
You can thank the high-frequency sound for the accurate targeting. The YapStopper emits sound waves at 40 kilohertz, far beyond what human ears can detect, but those waves carry the delayed speech as a kind of ‘hidden message’, similar to how a radio station wraps music inside a carrier frequency. Because the wavelength is so short, the sound beam stays focused on its target rather than spreading out in all directions, an effect made feasible by an array of ultrasonic transducers working together to reinforce the beam, producing an acoustic spotlight. You simply aim with the built-in laser, flip the switch, and the delayed audio lands precisely on target while everyone else hears nothing.
To collect the original audio on the other end, a shotgun mic works well: it listens in from a distance, picks up the speaker’s words clearly, and sends them to the delay circuit. Once delayed, the signal is sent out through the transducer array on the ultrasonic carrier. Power comes from a cordless drill battery, boosted to 24 volts with a boost converter. All of the driver chips, MOSFETs, oscillators, and tuning bits and bobs sit on a single custom circuit board that fits snugly inside a handy little box. You can adjust the delay and volume on the fly with simple knobs, or flip a switch to turn it on and off.
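The ‘hidden message’ trick is amplitude modulation: the audible signal rides on the ultrasonic carrier, and nonlinear propagation in air (the parametric-array effect) demodulates it back into audible sound along the beam. A toy sketch of the modulation step, with illustrative parameters rather than the maker’s actual circuit values:

```python
import math

# Toy sketch of amplitude-modulating a (delayed) speech signal onto a
# 40 kHz ultrasonic carrier, the basic idea behind a parametric speaker.
# Illustrative parameters; not the maker's actual circuit values.

FS = 192_000        # sample rate high enough to represent a 40 kHz carrier
CARRIER_HZ = 40_000
DEPTH = 0.8         # modulation depth

def modulate(audio):
    """AM: the carrier's amplitude follows the audio envelope."""
    return [
        (1.0 + DEPTH * a) * math.sin(2 * math.pi * CARRIER_HZ * t / FS)
        for t, a in enumerate(audio)
    ]

# A 1 kHz test tone standing in for the delayed voice signal.
tone = [0.5 * math.sin(2 * math.pi * 1000 * t / FS) for t in range(FS // 100)]
tx = modulate(tone)

# The transmitted signal swings between roughly (1 - 0.4) and (1 + 0.4)
# times the carrier amplitude, i.e. peaks a little under 1.4.
print(round(max(tx), 2))
```

In hardware this step is handled by the oscillator and driver stage feeding the transducer array; the demodulation back to audible sound happens in the air itself, which is why only the person in the beam hears anything.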
Months of careful assembly went into every detail. The maker spent five full months debugging, soldering, and replacing parts, with short circuits sometimes emerging out of nowhere at the worst possible time. Despite all of these obstacles, the underlying principle kept working. A simple test with a phone app and standard headphones demonstrated that the delay alone can throw speech off track. The hardware version goes further: it works without putting any headphones on the target, which is the real-world feat. More distance testing is still needed, but the prototype shows that directed delivery works in practice once all of the electronics are in sync. [Source]
An anonymous reader quotes a report from 404 Media: On Reddit, Hacker News and other places where people in software development talk to each other, more and more people are becoming disillusioned with the promise of code generated by large language models. Developers talk not just about how the AI output is often flawed, but that using AI to get the job done is often a more time consuming, harder, and more frustrating experience because they have to go through the output and fix its mistakes. More concerning, developers who use AI at work report that they feel like they are de-skilling themselves and losing their ability to do their jobs as well as they used to.
“We’re being told to use [AI] agents for broad changes across our codebase. There’s no way to evaluate whether that much code is well-written or secure — especially when hundreds of other programmers in the company are doing the same,” a UX designer at a midsized tech company told me. 404 Media granted all the developers we talked to for this story anonymity because they signed non-disclosure agreements or because they fear retribution from their employers. “We’re building a rat’s nest of tech debt that will be impossible to untangle when these models become prohibitively expensive (any minute now…).” “I had some issues where I forgot how to implement a Laravel API and it scared the shit out of me. I went to university for this, I’ve been a software engineer for many years now and it feels like I am back before I ever wrote a single line of code,” a software developer at a small web design firm told 404 Media. “It’s making me dumber for sure,” a fintech software developer added.
“It’s like when we got cellphones and stopped remembering phone numbers, but it’s grown to me mentally outsourcing ‘thinking’ in general. I feel my critical thinking and ability to sit and reason about a problem or a design has degraded because the all-knowing-dalai-llama is just a question away from giving me his take. And supposedly I tell myself ill just use it for inspiration but it ends up being my only thought. It gives you the illusion of productivity and expertise but at the end of the day you are more divorced from the output you submit than before.”
A software engineer at a FAANG company said: “When I was using it for code generation, I found myself having a lot of trouble building and maintaining a mental model of the code I was working with. Another aspect is that I joined late last year and [the company’s] codebase is massive. As a new hire, part of my job is to learn how to navigate the codebase and use the established conventions, but I think the AI push really hampered my ability to do that.”
B&H is blowing out M4 MacBook Air inventory with prices as low as $829. But supply is limited for this flash deal.
The $829 blowout special applies to the closeout M4 13-inch MacBook Air with an 8-core GPU, 16GB of unified memory, and 256GB of storage when ordered in the Sky Blue finish.
According to B&H, the deal is scheduled to end on May 15 at 5:05 p.m. Pacific Time, but supply is limited at the reduced price, so the deal may sell out before then.
With other retailers sold out, B&H’s flash deal delivers the lowest price available on the closeout model. And if you’re looking for the lowest price on the M5 Air that was released in March 2026, it can be found for as low as $998, although you do get 512GB of storage with the entry model.
B&H is also running a sale on MacBook Pros and upgraded M5 MacBook Airs, which you can jump to via our deal coverage below.
Elon Musk’s xAI is running nearly 50 natural gas turbines at its Mississippi data center, power plants that the state is currently not regulating thanks to a loophole.
The power plants are considered “mobile” by the state of Mississippi because they sit on flatbed trailers, allowing them to dodge air pollution regulations for one year. The NAACP, which has filed a lawsuit on behalf of residents in the area, says the unchecked emissions from the turbines are worsening air quality in an already polluted region. This week, it asked the court for an injunction against xAI.
At issue is the “mobile” nature of the turbines. The Southern Environmental Law Center, which filed the lawsuit on behalf of the NAACP, says the turbines are being operated in violation of federal law, which says that power plants mounted on a trailer can still be considered stationary and subject to air pollution regulations.
xAI has been granted permits for 15 of its turbines. A Greater Memphis Chamber of Commerce press release previously said that “about half” of the 35 turbines in operation in May 2025 would remain on site. However, xAI has continued to install more. Currently, it’s operating 46, according to a local news report.
Instagram has never been shy about borrowing ideas, and its latest move makes that clearer than ever. The platform just globally launched Instants, a new feature that lets you share disappearing, unedited photos with your Close Friends or mutual followers.
The standalone Instants app is now available on iOS and Android, which opens directly to the camera when you log in with your Instagram account.
Introducing Instants: the newest way to share photos in real time with your Close Friends (or mutual followers) that disappear after 24 hours and can’t be edited, so you’re sharing your most authentic moments. You can access Instants through @instagram or the new Instants app.…
You can also access this tool directly from the Instagram inbox. Just tap the mini photo stack in the bottom right corner of your DM inbox to open the Instants camera.
Either way, you snap something in real time and send it instantly. No uploads from your photo gallery are allowed, and you cannot edit the image before sending. Recipients can react with emoji, reply, or fire back their own Instants.
No one can take screenshots of Instants, photos vanish after being viewed once, and anything unopened disappears after 24 hours.
If you accidentally send something, there is an undo button to take it back before anyone sees it. Your sent photos are saved in a private archive that only you can access for up to a year. You can also compile them into a recap to post to Stories later.
So which app did Instagram copy this time?
Honestly, take your pick. The disappearing photos and one-time viewing are straight out of Snapchat‘s playbook, which has offered ephemeral photo sharing since 2011. The no-edit, share-as-it-happens format is pure BeReal, an app that briefly took the world by storm by pushing users to post unfiltered photos at random times of the day.
Instants also draws comparisons to Locket, a widget-based app focused on sharing candid photos directly with close friends. But this isn’t new for Instagram because Stories was a direct lift from Snapchat, and Reels borrowed heavily from TikTok. Instants continues that tradition without much apology.
But here’s the thing – it might actually be useful
For all the eye-rolling the clone label deserves, Instants taps into something real. Instagram has spent years drifting toward influencer content, brand deals, and algorithmically pushed posts from strangers.
Instants pulls the app back toward what it was originally built for: sharing genuine moments with people you actually know. In a feed full of perfectly lit brand content, a little unfiltered reality is hard to argue with. Whether anyone actually needs it is another question, especially when BeReal never quite held on and Instagram Stories already does the job for most people.
Mary M Hausfeld of the University of Limerick explores how the process by which researchers receive credit for their work can be more complicated for women.
Scientific discoveries rarely happen alone. Modern research often involves teams spanning institutions and even countries. Yet when research is published in academic journals, credit is reduced to a list of names – a list that can shape careers.
Authorship is a key signal of expertise. It influences hiring, promotion and funding decisions. Despite this importance, the process for determining authorship is often far from transparent.
In principle, authorship should reflect intellectual contributions. In practice, decisions about who becomes an author and whose name appears in the most prized position – often first or last – are negotiated within research teams. My research with colleagues has found that women report more negative experiences around authorship decisions.
Norms vary widely across disciplines, and unclear standards combined with power dynamics can create problems, especially for women researchers.
One of these is ghost authorship: when researchers who meaningfully contribute do not receive authorship. Another is gift authorship: when individuals who do not meaningfully contribute are included as authors.
Deciding who gets credit for a research project is complicated, even when everyone has positive intentions. These collaborations can span years, and individual roles often shift over time. Students graduate, researchers move institutions and projects evolve. As a result, authorship decisions are often shaped not just by contributions, but by a set of informal or ‘hidden’ rules that are rarely made explicit.
These hidden rules can include power dynamics between senior and junior researchers. Junior researchers, such as PhD students and postdocs, often depend on supervisors for funding and future opportunities. This can make it difficult to raise concerns about authorship.
The standards for determining contributions may be ambiguous. While there’s recently been more discussion about the different ways someone can contribute to a project, authors may disagree about which contributions matter most. For example, how should writing the paper be weighed against collecting or analysing the data?
Fear of reputational harm could also discourage open discussion about credit. Because researchers are concerned about being labelled ‘difficult to work with’, they may avoid raising concerns about authorship, even when the stakes are high.
Gifts and ghosts
To see how these decisions play out in practice, my collaborators and I surveyed more than 3,500 researchers across 12 countries – one of the largest studies of its kind. We asked researchers about their experiences with disagreement about authorship, comfort discussing authorship in their teams and experiences with problematic authorship practices.
We found that questionable authorship practices are remarkably common. In our study, 68pc of researchers observed gift authorship, and 55pc of researchers observed ghost authorship.
While experiences of authorship were similar across researchers in the natural sciences and social sciences, another pattern emerged. Women researchers reported experiencing more problematic authorship practices in collaborations. They encountered more disagreements over authorship decisions and felt less comfortable raising authorship concerns.
This is especially concerning given what researchers call the “leaky pipeline” in academia – where women are more likely to leave the field or are less likely to progress to senior positions over time. These patterns suggest that the hidden rules of authorship affect women and men differently.
Why it matters
These numbers aren’t just statistics. They represent missed opportunities, strained collaborations and careers quietly knocked off course. Authorship plays a central role in research careers, and even small differences in recognition can accumulate over time. When credit is uneven, opportunities become uneven. This shapes who stays in academia and whose ideas define a field. Over time, this may also push talented researchers away from academic careers or worsen existing inequalities like the leaky pipeline.
Universities rely on collaborative environments that are not only productive, but also fair. Addressing issues with authorship and its hidden rules is essential to continue moving toward better science.
In a separate study of US PhD-granting universities, my colleagues and I found that fewer than 25pc had publicly available authorship policies. Even when policies did exist, they rarely offered guidance on how to handle concerns or resolve conflicts. Clearer institutional guidance and accessible dispute resolution procedures would provide researchers with a framework to more effectively navigate authorship.
In addition, authorship training can encourage earlier and more open conversations about authorship within research teams, particularly for junior researchers who may feel less comfortable raising these issues. Promoting more transparent documentation of individual contributions can help ensure that authorship reflects the work that was actually done, even as roles evolve over the course of a project. Training would clearly benefit early-career scholars, but would also be important for more senior academics who supervise doctoral students and help shape research norms.
When authorship is transparent and openly discussed, it can empower stronger research teams, more equitable career progression and greater trust in the scientific process. Science is a team effort, and our systems for giving credit should reflect that reality.
Mary M Hausfeld is an assistant professor in management, at the University of Limerick. Her research focuses on leadership, diversity at work and research methods. Hausfeld is especially interested in the conceptual and methodological gap between what leaders do and how they are evaluated. Her work has been published in outlets including Journal of Management and others. Before joining UL, Hausfeld served as a post-doctoral research associate and head of education at the Center for Leadership in the Future of Work at the University of Zurich. Hausfeld earned her PhD in organisational science from the University of North Carolina at Charlotte.
Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.
Xbox’s next-gen console might be going fully digital. And if the latest leaks are accurate, Microsoft could finally be preparing the move it almost made more than a decade ago… before the internet collectively lost its mind.
Could Xbox Project Helix completely ditch physical discs?
According to a new report from Windows Central, Xbox is reportedly working on something called “Project Saluki,” which appears to be a new Game Pass initiative designed specifically for the Chinese market. While details remain limited, the report suggests it could involve multiple regional Game Pass tiers and reward systems tailored around China’s unique gaming regulations, spending habits, and player preferences. Considering how important cloud gaming and subscription-based access have become in China, this could be part of a much bigger push for Xbox in the region.
That said, the more interesting part of the report revolves around references discovered inside the Xbox PC app pointing toward a mysterious “Positron” initiative tied to a possible Disc-to-Digital system. Naturally, this has sparked speculation that Microsoft’s upcoming next-gen console, currently known as Project Helix, could launch without a built-in disc drive altogether.
The leaked references suggest Microsoft may be exploring a way for physical game discs to be converted into digital licenses tied to a user’s Xbox account. If true, the idea seems aimed at easing players into an all-digital future without completely abandoning existing physical libraries overnight. Interestingly, Microsoft explored similar concepts during the Xbox One era, but backlash around digital ownership and always-online systems forced the company to back away at the time. The difference now is that the market has changed dramatically, with digital purchases and subscription gaming becoming the norm for a huge portion of console players.
And honestly, Microsoft has been building toward this for years anyway. The Xbox Series S launched as a fully digital console back in 2020, followed by the all-digital white Xbox Series X refresh in 2024. At this point, a disc-less Project Helix would feel less like a surprise and more like the next logical step in Xbox’s long-term Game Pass-focused strategy.
Project Helix may finally push Xbox into its all-digital era
Reports around Project Helix already suggest Microsoft is positioning the next Xbox more like a hybrid gaming platform, blending console simplicity with PC-style flexibility through support for Xbox libraries, Windows features, Steam, and cloud gaming. In that kind of ecosystem, physical discs start feeling increasingly outdated. Even PlayStation reportedly now sees most game sales happening digitally, while Xbox has spent years pushing Game Pass, Cloud Gaming, and Play Anywhere.
Ironically, Microsoft almost tried this exact shift during the Xbox One era, only to retreat after the backlash over digital licenses and always-online requirements. With most players now buying their games digitally, it would not be surprising if both Xbox and Sony eventually ship fully digital next-gen consoles, potentially with optional external disc drives similar to the PS5 setup. The difference is that Sony benefits from its ownership of Blu-ray, while Xbox would still have to deal with licensing costs.
Of course, players are not exactly going to celebrate the death of physical games overnight. Going digital is easy for Microsoft. Convincing gamers that they are not losing ownership, flexibility, or preservation in the process is the harder part, especially at a time when Xbox is already trying to rebuild momentum against Sony. That said, these leaks are still very early, and even the original report suggests details are still being pieced together, so for now, this entire situation should be taken with a healthy amount of caution.
Von der Leyen stated that the EC – one of the European Union’s highest governing bodies – is taking action against TikTok and Meta’s social media platforms, including Facebook and Instagram. The video-sharing platform and Meta’s services are said to rely on engagement-driven features such as endless scrolling, auto-play, and…
John Roberts has a point: the Supreme Court—even this Supreme Court—sometimes gets things right. Maybe one could even fairly say it often gets things right. After all, just recently it produced good decisions in Case v. Montana, Cox v. Sony, and First Women’s Choice Centers v. Davenport, and arguably even Chiles v. Salazar, along with plenty more that have quietly taken their place in the annals of American jurisprudence with little fanfare but with the staying power we expect of the Court’s opinions: the capacity to speak well into the future about the contours of our law. These were decisions where there was significant accord among all the justices because the legal questions before them were just not that hard to resolve. Either statutory language, constitutional text, or previous precedent required certain results, and Roberts is correct: this Court is fully capable of producing them.
The issue, however, is that it doesn’t always. And when it doesn’t, it is not because it’s getting tripped up by close calls where the precedent or guiding text isn’t clear, or where the facts are so unfortunate that they obscure what the law requires. The issue is that the law is equally clear in the cases where the Court produces deviant results as in the cases where the Court gets things right; it just doesn’t care to follow it consistently. If it wants a different result than the one the law directs, then that is the result it will find the votes for.
Roberts is of course also right that non-lawyers often can’t tell what the law indeed requires; the general public is much more likely to judge a decision based on how it affects the interests they favor. Which is why Roberts has a fair point in thinking the Court may be unfairly criticized in decisions like Chiles, First Women’s Choice Centers, or even 303 Creative, cases where interests many understand to be harmful to others nevertheless apparently prevailed. It is difficult, for instance, for non-lawyers to see how a win for those who discriminate is nevertheless a win for those who are discriminated against: while a win for the former may seem like a loss for the latter in the short term, it’s the rationale being upheld by the decision that will ultimately amount to a more important gain for the vulnerable in the long term.
But one reason people are struggling to see these controversial but correct decisions as fortifications of their own future freedom is that they don’t believe that when their interests are at stake the Supreme Court will still apply the same principles in their favor. They fear that the Court will instead find a way to advance the interests it prefers, and that fear is eminently reasonable. The hypocrisy the justices regularly display in their jurisprudence when one of their favored interests is at stake makes it impossible for any rational person to have faith in them as neutral jurists ably applying the law, even if it’s true that sometimes they are.
Roberts only has himself and his Court to blame for so many having that view. They have made it impossible for anyone to believe the Court will uphold principle and precedent because of how often it has not. It is happy to change the rules that we must all play by whenever it suits it, redrawing the rights we depend on as well as the ability to use the courts to shape them. And it’s not just laypeople who’ve noticed the problem but legal professionals. It’s lawyers, including members of the Supreme Court Bar who practice before it. It’s law professors, including those who have been teaching new generations of law students what were supposed to be timeless principles of American jurisprudence, which the Court so regularly and casually upends. It’s legal commentators, including those who specialize in watching this Court. It is people who are experienced, if not expert—and at least as expert as anyone on the Court—in the American legal tradition who are calling foul. They are noticing how the Court keeps inventing arbitrary and imaginary rules, if not also facts, in order to arrive not where the law points but where the conservative justices steering the Court’s majority instead prefer to go.
It might be one thing if it were the rare case here and there in its busy docket where the Court has simply been sloppy in its jurisprudence. But the cases where the conservative majority has refused to produce jurisprudentially conservative results, instead elevating preferred outcomes over precedential reasoning, are hardly the exception; at this point it has become the apparently deliberate rule that when certain issues are on the table—partisan politics, reproductive freedom, LGBTQ+ rights, race relations, to name just a few areas where the conservative justices have particularly strong views—the Roberts Court will eagerly jump in to advance them, regardless of whether either substance or procedure—or consistency—even invites such an intervention, let alone their favored result. In fact it is fairly shocking to encounter the rare occasion where the Court has instead restrained itself—although it is certainly glad to when other interests the conservative majority is less dogmatically interested in advancing are instead on the table.
Furthermore, that its docket is so busy is entirely because the Court has abdicated any pretense of restraint, greedily helping itself to matters that historically would have been regarded as unripe for its consideration. In fact, it is a bit rich for Roberts to complain that the Supreme Court is being unfairly disrespected given the extent to which its new practice of aggressively insinuating itself into the substantive adjudication of matters before there is even a lower court ruling or record ready for review has itself undercut the respect due the lower courts. What the Court has been doing, particularly with its shadow docket, goes far beyond the appellate review it is normally entitled to do. Not only does the Supreme Court’s incessant, premature snatching of matters away from the lower courts arbitrarily diminish their power to render considered opinions on the questions before them, but it has also had the practical effect of undermining their ability to speak with any authority on the law at all, let alone enforce it. Would that Roberts shed the same tears for the insult the lower courts have actually suffered as he does for himself as the cause of it.
Instead, and apparently without any capacity for introspection or self-reflection, he protests that the criticism increasingly directed at the Court is not also increasingly deserved. We should, he insists, judge his Court based on what it gets right. But we do not celebrate a reckless driver for all the people he didn’t run over, or a careless chef for all the diners he didn’t poison, or a distracted doctor for all the patients he didn’t kill. In the American legal tradition we judge harshly those who cause injury to the public well-being, especially with behavior beyond the bounds of what law allows.
And with the Roberts Court there is so much to judge.
Morgan Wandell, who has been with Apple TV since before its launch, is now departing the streaming service in favor of launching his own production company.
In 2017, Apple poached Wandell from Amazon Studios to join its team at Apple Worldwide Video. When Apple TV launched in 2019, his title became Head of International Content Development.
While at Apple, Wandell developed and oversaw production of “Monarch: Legacy of Monsters,” “Tehran,” “Disclaimer,” “Masters of the Air,” and “The New Look.”
Now, it seems as though he’s got other plans. Wandell plans on leaving Apple TV to found his own production company, Kismet.
Kismet will develop and produce premium scripted series for the global marketplace, focusing on high-end, culturally rooted storytelling.
While he is technically leaving his executive role behind, it seems that he may not be leaving Apple TV entirely. He’s currently in talks with Apple to stay on as a producer on some of his existing projects.
“Helping to build Apple TV’s international slate has been the privilege of my career,” Wandell told Deadline.
“I’m deeply grateful to Jamie [Erlicht], Zack [Van Amburg], and all my colleagues at Apple, and to the extraordinary creators we’ve partnered with around the world. It was a hard personal decision to make this leap from a company as terrific as Apple, but I have always wanted to build a company of my own.”
Matt Cherniss, Apple TV’s Head of Programming and Domestic Development, will take over the Monarch franchise and other series that were under Wandell’s purview. Cherniss currently oversees other hit series, such as “Ted Lasso,” “Severance,” “The Studio,” and “Pluribus.”
Jay Hunt, Apple TV’s creative director for Europe, will see her role expand to oversee international and local-language originals. She is in charge of British staples “Slow Horses” and “Hijack,” among others.
Before his tenure at Apple, Wandell worked as Head of International Series and Head of Drama Series at Amazon Studios for four years. Before that, he served as Senior Vice President of Drama at ABC Studios, overseeing series including “Lost,” “Grey’s Anatomy,” “Brothers and Sisters,” “Ugly Betty,” and “Criminal Minds.”