Tech

After Outages, Amazon To Make Senior Engineers Sign Off On AI-Assisted Changes

An anonymous reader quotes a report from the Financial Times: Amazon’s ecommerce business has summoned a large group of engineers to a meeting on Tuesday for a “deep dive” into a spate of outages, including incidents tied to the use of AI coding tools. The online retail giant said there had been a “trend of incidents” in recent months, characterized by a “high blast radius” and “Gen-AI assisted changes” among other factors, according to a briefing note for the meeting seen by the FT. Under “contributing factors” the note included “novel GenAI usage for which best practices and safeguards are not yet fully established.”

“Folks, as you likely know, the availability of the site and related infrastructure has not been good recently,” Dave Treadwell, a senior vice-president at the group, told employees in an email, also seen by the FT. The note ahead of Tuesday’s meeting did not specify which particular incidents the group planned to discuss. […] Treadwell, a former Microsoft engineering executive, told employees that Amazon would focus its weekly “This Week in Stores Tech” (TWiST) meeting on a “deep dive into some of the issues that got us here as well as some short immediate term initiatives” the group hopes will limit future outages.

He asked staff to attend the meeting, which is normally optional. Junior and mid-level engineers will now need a more senior engineer to sign off on any AI-assisted changes, Treadwell added. Amazon said the review of website availability was “part of normal business” and that it aims for continual improvement. “TWiST is our regular weekly operations meeting with a specific group of retail technology leaders and teams where we review operational performance across our store,” the company said.


The Best iPad to Buy (and Some to Avoid) in 2026: Compare the Air, Pro, Mini

Great iPad Accessories

iPad accessories are endless. Below, we’ve highlighted some of our favorites to round out your tablet experience, and you can find more in our Best iPad Accessories guide.

Zugu Case for $50+: This is our favorite folio case for the iPad for multiple reasons. It’s not only durable (complete with a rigid bumper), but it also has a magnetized cover that stays shut and a flap that allows you to position the screen at eight different angles. The case is magnetic, allowing you to stick it on the fridge securely. It’s also reasonably priced, comes in an array of colors, and has a spot for your Apple Pencil.

Satechi M1 Wireless Mouse for $25: We’re already big fans of Satechi’s accessories at WIRED, and this mouse didn’t disappoint. It has a comfortable ergonomic design, a sleek aluminum finish, and smooth scrolling. It has great battery life too—with a built-in lithium-ion battery, I’ve been using it for the past four months and have yet to charge it.

Mageasy CoverBuddy Case (iPad Pro) for $70: This case allows you to magnetically connect it to Apple’s Magic Keyboard case without having to take off the case each time. It feels durable and doesn’t add too much bulk to the iPad. There’s also a slot for the Apple Pencil Pro or the USB-C version. The company also offers the CoverBuddy Lite for the iPad Air (M2).

Logitech Combo Touch: a black tablet propped up on a kickstand while attached to a detachable keyboard. Photograph: Brenda Stolyar

Logitech Combo Touch (10th-Gen) for $260: The Combo Touch (8/10, WIRED Recommends) comes with a built-in keyboard, trackpad, and kickstand, making it ideal for getting work done on your iPad. It’s also detachable, so you can easily remove the keyboard when you don’t need it. It connects via Apple’s Smart Connector, meaning you never need to tinker with Bluetooth or bother charging it. It’s also available for the iPad Pro (M4) and M5 (although it does add a bit of weight to such a thin tablet) and the iPad Air (M2).

Casetify Impact Screen Protector for $56: If you’re worried about damaging your iPad screen, I recommend this protector from Casetify. It’s super thin, has excellent touch sensitivity, and is mostly fingerprint-resistant (I’ve wiped some smudges here and there). It’s painless to apply—the company supplies a microfiber cloth, a de-dusting sticker, and wet and dry wipes.

Paperlike Charcoal Folio Case for $70: Paperlike is known for its screen protector, but the company also offers a great case. It’s designed to feel like a sketchbook, complete with a polyester fabric cover that feels lightweight and high-quality. You can also prop your iPad up at two different levels. It doesn’t come with an Apple Pencil slot, but there is a large flap closure that keeps it from falling out. I tested it with the iPad Air, but it’s also available for the iPad Pro (both sizes).

Twelve South StayGo Mini USB-C Hub. Photograph courtesy of Twelve South

Twelve South StayGo Mini USB-C Hub for $60: Ports are limited regardless of the iPad model. This hub from Twelve South has an 85-watt USB-C port with passthrough charging, a USB-A port, an HDMI port, and a headphone jack. If a thick case keeps the hub from plugging in flush, the included female-to-male USB-C extension cable solves the problem.

Apple Magic Trackpad (USB-C) for $140: For a spacious trackpad, the Magic Trackpad is a great choice. Instead of physical buttons, it uses Force Touch sensors that register different levels of pressure anywhere on the pad. With support for various iPadOS gestures, you won’t have to touch the screen as much. It pairs automatically with your iPad via Bluetooth and recharges via its USB-C port.

Twelve South HoverBar Duo 2.0 for $80: The HoverBar serves two purposes. You can mount it to the side of your bed, kitchen counter, or shelf (to view content comfortably and hands-free), or you can use the included stand at your desk. With the 2nd-gen version, you can now remove the arm from the clamp and attach it directly to the stand, making it easier to swap between both modes.


NYT Connections hints and answers for Saturday, April 4 (game #1028)

Looking for a different day?

A new NYT Connections puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Friday’s puzzle instead then click here: NYT Connections hints and answers for Friday, April 3 (game #1027).

Good morning! Let’s play Connections, the NYT’s clever word game that challenges you to group answers in various categories. It can be tough, so read on if you need Connections hints.


PS6 might be closer than you think, and it’s not coming alone

Just when things had started to feel quiet on the PlayStation front, a fresh wave of leaks has stirred the pot again. There’s chatter around the PlayStation 6, a next-gen handheld, and even some behind-the-scenes changes that hint at how Sony is preparing for what’s next.

None of this is official, of course, but if even part of it holds up, Sony isn’t just building new hardware; it’s also laying the groundwork for how that hardware will actually work.

What do the latest PlayStation leaks actually say?

According to trusted leaker Moore’s Law is Dead, the biggest headline is around the PlayStation 6, which may not be as far away as expected. Early details suggest that Sony is already deep into development, with timelines hinting at a launch window that’s closer than the typical console cycle would suggest.

But that’s only part of the story. Alongside the PS6 chatter, there’s renewed talk of a dedicated PlayStation handheld. Unlike the PlayStation Portal, which is more of a remote-play device, this new handheld is rumored to be a standalone system capable of running games natively. Think of it as the new PSP or PS Vita.

Another interesting detail is around “PlayGo,” which has reportedly been introduced in the latest PS5 SDK. Think of it as Sony’s version of Xbox’s Smart Delivery. It allows developers to break games into smaller chunks, so each device only downloads the assets it actually needs. That means a standard PS5 wouldn’t need to download higher-resolution textures meant for a PS5 Pro, and potentially, future devices could follow the same logic.
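The general idea is easy to sketch. Nothing about PlayGo's internals is public, so every name below is hypothetical; this is just the chunk-selection logic the rumor describes: tag each asset chunk with the device profiles that need it, then download only the chunks matching the player's hardware.

```python
# Illustrative sketch only: PlayGo's real design is unpublished, and all
# names here are invented. Each chunk lists which device profiles need it;
# a console downloads only the chunks tagged for its own profile.
ASSET_CHUNKS = [
    {"name": "game_core",        "profiles": {"ps5", "ps5_pro"}},
    {"name": "textures_1080p",   "profiles": {"ps5"}},
    {"name": "textures_4k_high", "profiles": {"ps5_pro"}},
]

def chunks_for(profile: str) -> list[str]:
    """Return the chunk names a given device profile actually needs."""
    return [c["name"] for c in ASSET_CHUNKS if profile in c["profiles"]]

# A base PS5 skips the Pro-only high-resolution textures, and vice versa.
print(chunks_for("ps5"))
print(chunks_for("ps5_pro"))
```

Under this scheme, adding a future device (a handheld, say) would just mean tagging chunks with one more profile rather than shipping a separate build.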

PS6 pricing leaks sound surprisingly… reasonable

According to MLID, now might not be the best time to drop $900 on a PS5 Pro. The claim is pretty bold, but they suggest skipping the current-gen upgrade and waiting, because the base PlayStation 6 could actually end up being cheaper than the PS5 Pro. The reasoning? Sony is reportedly designing the PS6 from the ground up to be more cost-efficient, with cheaper cooling, power delivery, and overall manufacturing.

In fact, some estimates even suggest a bill of materials of around $750, which could keep the final price comfortably below $1,000. That would be considerably cheaper than Microsoft’s upcoming Project Helix, which could reportedly cost up to $1,200. Then again, these are still early leaks and far from official, so it’s worth taking all of this with a pinch of salt for now.


Microsoft To Invest $10 Billion In Japan For AI, Cyber Defense Expansion

Microsoft plans to invest $10 billion in Japan from 2026 to 2029 to expand AI infrastructure, boost local cloud capacity, train 1 million engineers and developers, and deepen cybersecurity cooperation with the Japanese government. Reuters reports: The investment includes the training of 1 million engineers and developers by 2030, Microsoft said, which was unveiled during a visit to Tokyo by Vice Chair and President Brad Smith. In a statement, the company said the plan aligns with Prime Minister Sanae Takaichi’s goal to boost growth through advanced, strategic technologies while safeguarding national security.

Microsoft will work with domestic firms including SoftBank and Sakura Internet to expand Japan-based AI computing capacity, allowing companies and government agencies to keep sensitive data within the country while accessing Microsoft Azure services, it said. It will also deepen cooperation with Japanese authorities on sharing intelligence related to cyber threats and crime prevention.


Can Agentic AI Coding Tools Finally End Copyright For Software While Re-Inventing Open Source?

from the reinventing-software dept

Most of the discussions about the impact of the latest generative AI systems on copyright have centered on text, images and video. That’s no surprise, since writers, artists and film-makers feel very strongly about their creations, and members of the public can relate easily to the issues that AI raises for this kind of creativity. But there’s another creative domain that has been massively affected by genAI: software engineering. More and more professional coders are using generative AI to write major elements of their projects for them. Some top engineers even claim that they have stopped coding completely, and now act more as a manager for the AI generation of code, because the available tools are now so powerful. This applies in the world of open source software too. But a recent incident shows that it raises some interesting copyright issues there that are likely to affect the entire software world.

It concerns a project called chardet, “a universal character encoding detector for Python. It analyzes byte strings and returns the detected encoding, confidence score, and language.” A long and detailed post on Ars Technica explains what has happened recently:

The [chardet] repository was originally written by coder Mark Pilgrim in 2006 and released under an LGPL license that placed strict limits on how it could be reused and redistributed.

Dan Blanchard took over maintenance of the repository in 2012 but waded into some controversy with the release of version 7.0 of chardet last week. Blanchard described that overhaul as “a ground-up, MIT-licensed rewrite” of the entire library built with the help of Claude Code to be “much faster and more accurate” than what came before.
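For context on the library at the center of this dispute, chardet's public API is tiny, and the description above captures it fully: pass in raw bytes, get back an encoding guess, a confidence score, and a language. A minimal sketch (assuming the third-party `chardet` package from PyPI is installed):

```python
# Minimal sketch of chardet's documented behavior: feed it a byte string,
# get back the detected encoding, a confidence score, and a language.
# Assumes the third-party `chardet` package is installed (pip install chardet).
import chardet

result = chardet.detect(b"The quick brown fox jumps over the lazy dog.")
print(result["encoding"], result["confidence"], result["language"])
```

The MIT-licensed 7.0 rewrite discussed below keeps this same interface, which is precisely why a clean-room reimplementation is viable: callers depend on the behavior, not the code behind it.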

Licensing lies at the heart of open source. When Richard Stallman invented the concept of free software, he did so using a new kind of software license, the GPL. This allows anyone to use and modify software released under the GPL, provided they release their own code under the same license. As the above description makes clear, chardet was originally released under the LGPL – one of the GPL variants – but version 7.0 is licensed under the much more permissive MIT license. According to Ars Technica:

Blanchard says he was able to accomplish this “AI clean room” process by first specifying an architecture in a design document and writing out some requirements to Claude Code. After that, Blanchard “started in an empty repository with no access to the old source tree and explicitly instructed Claude not to base anything on LGPL/GPL-licensed code.”

That is, generative AI would appear to allow open source licenses like the GPL to be circumvented by rewriting the code without copying anything directly from the original. That’s possible because AI is now so good at coding that the results can be better than the original, as Blanchard proved with version 7.0 of chardet. And because it is new code, it can be released under any license. In fact, it is quite possible that code produced by genAI is not covered by copyright at all, for the same reason that artistic output created solely by AI can’t be copyrighted. If the license can be changed or simply cancelled in this way, then there is no way to force people to release their own variants only under the GPL, as Stallman intended. Similarly, the incentive for people to contribute their own improvements to the main version is diminished.

The ramifications extend even further. These kinds of “AI clean room” implementations could be used to make new versions of any proprietary software. That’s been possible for decades – Stallman’s 1983 GNU project is itself a clean-room version of Unix – but it generally requires many skilled coders working for long periods. The arrival of highly capable genAI coding tools has brought the cost down by many orders of magnitude, making it relatively inexpensive and quick to produce new versions of any software.

In effect, generative AI coding systems make copyright irrelevant for software, both open source and proprietary. That’s because what is important about computer code is not the details of how it is written, but what it does. AI systems can be guided to create drop-in replacements for other software that are functionally identical, but with completely different code underneath.

Companies that license their proprietary software will probably still be able to do so by offering support packages plus the promise that they take legal responsibility for their code in a way that AI-generated alternatives don’t: businesses would pay for a promise of reliability plus the ability to sue someone when things go wrong. But for the open source world these are not relevant. As a result, the latest progress in AI coding seems a serious threat to the underlying development model that has worked well for the last 40 years, and which underpins most software in use today. But a wise post by Salvatore “antirez” Sanfilippo sees opportunities too:

AI can unlock a lot of good things in the field of open source software. Many passionate individuals write open source because they hate their day job, and want to make something they love, or they write open source because they want to be part of something bigger than economic interests. A lot of open source software is either written in the free time, or with severe constraints on the amount of people that are allocated for the project, or – even worse – with limiting conditions imposed by the companies paying for the developments. Now that code is every day less important than ideas, open source can be strongly accelerated by AI. The four hours allocated over the weekend will bring 10x the fruits, in the right hands (AI coding is not for everybody, as good coding and design is not for everybody).

Perhaps a new kind of open source will emerge – Open Source 2.0 – one in which people do not contribute their software patches to a project, as they do today, but instead send their prompts that produce better versions. People might start working directly on the prompts, collaborating on ways to fine tune them. It’s open source hacking but functioning at a level above the code itself.

One possibility is that such an approach could go some way to solving the so-called “Nebraska problem”: the fact that key parts of modern digital infrastructure are underpinned by “a project some random person in Nebraska has been thanklessly maintaining since 2003”. That person may not receive many more thanks than they have in the past, but with AI assistants constantly checking, rewriting and improving the code, at least the selfless dedication to their project becomes a little less onerous, and thus a little less likely to lead to programmer burnout.

Follow me @glynmoody on Mastodon and on Bluesky. Originally published to Walled Culture.

Filed Under: chardet, copyright, licensing, open source, relicensing


LinkedIn secretly scans for 6,000+ Chrome extensions, collects data

A new report dubbed “BrowserGate” warns that Microsoft’s LinkedIn is using hidden JavaScript scripts on its website to scan visitors’ browsers for installed extensions and collect device data.

According to a report by Fairlinked e.V., which claims to be an association of commercial LinkedIn users, Microsoft’s platform injects JavaScript into user sessions that checks for thousands of browser extensions and links the results to identifiable user profiles.

The author claims that this behavior is used to collect sensitive personal and corporate information, as LinkedIn accounts are tied to real identities, employers, and job roles.

“LinkedIn scans for over 200 products that directly compete with its own sales tools, including Apollo, Lusha, and ZoomInfo. Because LinkedIn knows each user’s employer, it can map which companies use which competitor products. It is extracting the customer lists of thousands of software companies from their users’ browsers without anyone’s knowledge,” the report says.

“Then it uses what it finds. LinkedIn has already sent enforcement threats to users of third-party tools, using data obtained through this covert scanning to identify its targets.”

BleepingComputer has independently confirmed part of these claims through our own testing, during which we observed a JavaScript file with a randomized filename being loaded by LinkedIn’s website.

This script checked for 6,236 browser extensions by attempting to access file resources associated with a specific extension ID, a known technique for detecting whether extensions are installed.
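That probing technique is straightforward to sketch. The actual script is JavaScript running in the browser; the Python below only mirrors its logic, and the extension IDs and resource paths are invented for illustration. The trick relies on extensions exposing "web accessible resources" at predictable `chrome-extension://` URLs: if fetching one succeeds, the extension is installed.

```python
# Sketch of the extension-detection technique described above. The IDs and
# resource paths are made up; the probe itself is injected as a callable so
# the logic can be shown without a browser.
KNOWN_EXTENSIONS = {
    "aaaabbbbccccddddeeeeffffgggghhhh": "icons/icon128.png",
    "bbbbccccddddeeeeffffgggghhhhiiii": "inject.js",
}

def probe_url(ext_id: str, resource: str) -> str:
    """Well-known URL at which an extension's public resource would live."""
    return f"chrome-extension://{ext_id}/{resource}"

def detect_installed(resource_exists) -> list[str]:
    """resource_exists(url) -> bool stands in for the browser's fetch/Image probe."""
    return [
        ext_id
        for ext_id, resource in KNOWN_EXTENSIONS.items()
        if resource_exists(probe_url(ext_id, resource))
    ]

# Fake probe that "finds" only the first extension:
found = detect_installed(lambda url: "aaaabbbb" in url)
print(found)
```

Scaling this loop to 6,236 IDs is trivial, which is why the list of detected extensions can keep growing without changing the technique at all.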

This fingerprinting script was previously reported in 2025, but it was only detecting approximately 2,000 extensions at that time. A different GitHub repository from two months ago shows 3,000 extensions being detected, demonstrating that the number of detected extensions continues to grow.

Snippet of the list of extensions scanned for by LinkedIn’s script (Source: BleepingComputer)

While many of the extensions scanned for are related to LinkedIn, the script also, oddly, checks for language and grammar extensions, tools for tax professionals, and other seemingly unrelated software.

The script also collects a wide range of browser and device data, including CPU core count, available memory, screen resolution, timezone, language settings, battery status, audio information, and storage features.
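To illustrate how a grab-bag of signals like these becomes a tracking identifier (this is a generic fingerprinting sketch, not LinkedIn's actual code, and the field names are invented), the collected values can simply be canonicalized and hashed:

```python
import hashlib
import json

# Generic browser-fingerprinting sketch: in a real script these values would
# come from browser APIs (e.g. navigator.hardwareConcurrency, screen size,
# timezone). Here they are a plain dict so the hashing step is visible.
def fingerprint(device: dict) -> str:
    """Canonicalize the signals and hash them into a short stable ID."""
    canonical = json.dumps(device, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device_a = {
    "cpu_cores": 8,
    "memory_gb": 8,
    "screen": "2560x1440",
    "timezone": "Europe/Berlin",
    "language": "de-DE",
}
fp = fingerprint(device_a)
print(fp)
```

Because the same signals always hash to the same ID, a fingerprint like this can recognize a device across sessions and sites even without cookies, which is what makes the data collection described above a privacy concern.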

Gathering information about visitors’ devices (Source: BleepingComputer)

BleepingComputer could not verify the claims in the BrowserGate report about the use of the data or whether it is shared with third-party companies.

However, similar fingerprinting techniques have been used in the past to build unique browser profiles, which can enable tracking users across websites.

LinkedIn denies data use allegations

LinkedIn does not dispute that it detects specific browser extensions, telling BleepingComputer that the info is used to protect the platform and its users.

However, the company claims the report is from someone whose account was banned for scraping LinkedIn content and violating the site’s terms of use.

“The claims made on the website linked here are plain wrong. The person behind them is subject to an account restriction for scraping and other violations of LinkedIn’s Terms of Service.

To protect the privacy of our members, their data, and to ensure site stability, we do look for extensions that scrape data without members’ consent or otherwise violate LinkedIn’s Terms of Service.

Here’s why: some extensions have static resources (images, javascript) available to inject into our webpages. We can detect the presence of these extensions by checking if that static resource URL exists. This detection is visible inside the Chrome developer console. We use this data to determine which extensions violate our terms, to inform and improve our technical defenses, and to understand why a member account might be fetching an inordinate amount of other members’ data, which at scale, impacts site stability. We do not use this data to infer sensitive information about members.

For additional context, in retaliation for this website owner’s account restriction, they attempted to obtain an injunction in Germany, alleging LinkedIn had violated various laws. The court ruled against them and found their claims against LinkedIn had no merit, and in fact, this individual’s own data practices ran afoul of the law.

Unfortunately, this is a case of an individual who lost in the court of law, but is seeking to re-litigate in the court of public opinion without regard for accuracy.”

– LinkedIn

LinkedIn claims the BrowserGate report stems from a dispute involving the developer of a LinkedIn-related browser extension called “Teamfluence,” which LinkedIn says it restricted for violating the platform’s terms.

In documents shared with BleepingComputer, a German court denied the developer’s request for a preliminary injunction, finding that LinkedIn’s actions did not constitute unlawful obstruction or discrimination.

The court also found that automated data collection alone could infringe upon LinkedIn’s terms of use and that it was entitled to block the accounts to protect its platform.

LinkedIn argues the BrowserGate report is an attempt to re-litigate that dispute publicly.

Regardless of the reasons for the report, one point is undisputed.

LinkedIn’s site uses a fingerprinting script that detects over 6,000 extensions running in a Chromium browser, along with other data about a visitor’s system.

This is not the first time that companies have used aggressive fingerprinting scripts to detect programs running on a visitor’s device.

In 2021, eBay was found to use JavaScript to perform automated port scans on visitors’ devices to determine whether they were running various remote support software.

While eBay never confirmed why they were using these scripts, it was widely believed that they were used to block fraud on compromised devices.

It was later discovered that numerous other companies were using the same fingerprinting script, including Citibank, TD Bank, Ameriprise, Chick-fil-A, Lendup, BeachBody, Equifax IQ connect, TIAA-CREF, Sky, GumTree, and WePay.



Claude just shut the door on OpenClaw (unless you pay more)

Anthropic just pulled a move that’s… let’s just say, not going to win it many fans among power users. One of the most popular ways to supercharge Claude, using OpenClaw, is effectively being paywalled out of existence. And yeah, it’s as messy as it sounds.

Anthropic just made OpenClaw way more expensive

According to Anthropic Claude Code exec Boris Cherny, Anthropic has changed how Claude subscriptions work. Starting now, your regular Claude subscription no longer covers usage through third-party tools. Instead, that usage gets kicked into a separate pay-as-you-go billing system.

Starting tomorrow at 12pm PT, Claude subscriptions will no longer cover usage on third-party tools like OpenClaw.

You can still use these tools with your Claude login via extra usage bundles (now available at a discount), or with a Claude API key.

— Boris Cherny (@bcherny) April 3, 2026

So what does that mean? Essentially, the hack that many users relied on, which was using Claude credits inside OpenClaw to run more advanced workflows, is basically dead. If someone still wants that setup, they’ll now have to pay extra on top of their subscription, either through usage bundles or API access.

And it’s not like OpenClaw was some niche experiment either. It blew up because it could handle real-world tasks like emails, calendars, and even flight check-ins, turning Claude into something closer to an actual assistant. But that popularity seems to have backfired, reportedly putting pressure on Anthropic’s infrastructure and forcing this clampdown.

This feels less like a tweak… and more like a crackdown

Let’s be real, this isn’t just a pricing change, it’s a pretty clear signal. Anthropic seems to be drawing a hard line: if you’re using Claude in ways they didn’t design (or monetize properly), expect that door to close quickly.

There’s also a bit of strategy baked in here. By making third-party usage more expensive, Anthropic nudges users toward its own ecosystem, like Claude Cowork. That’s great for control, but not so great if you liked mixing and matching tools to build your own workflows. To soften the blow, the company is offering a one-time credit equal to a month’s subscription and discounted bundles. But let’s be honest, that feels more like a transition cushion than a real solution.


In Chiles V. Salazar The Supreme Court Issues A Bad Good First Amendment Decision

from the good-bad-decision dept

The Supreme Court’s decision last year in U.S. v. Skrmetti, upholding a law depriving young trans people of the healthcare they need, is insupportable, rendering people unequal in a way the Constitution cannot possibly countenance. But its new decision in Chiles v. Salazar, concerning the First Amendment standard to apply to Colorado’s conversion therapy law, is different. Despite subject matter touching on sexual orientation and gender identity that sounds similar to Skrmetti, it’s actually another 303 Creative: another case that endorsed bigoted views unacceptably hostile to LGBTQ+ people. But for much the same reason that 303 Creative was an important articulation of the First Amendment’s expansive protection, despite the apparent prejudice the plaintiff (and the Court) advanced, so is this decision.

That’s what’s good about this decision, that it recognizes that the First Amendment operates in the professional licensing space and requires heightened scrutiny before states can be permitted to constrain licensing when those constraints are predicated on viewpoints expressed by the licensee, including as part of the provision of services. Heightened scrutiny is what makes the First Amendment’s protections meaningful, and the Court has not always been consistent or coherent in requiring it, particularly with respect to licensure. But when heightened scrutiny isn’t required, it becomes much harder to fight censorial actions taken by the government, including those driven by animus, and including those driven by anti-LGBTQ+ animus—which would also include those actions targeted at therapists supporting LGBTQ+ patients, such as those recently announced by Ken Paxton in Texas. This Supreme Court decision now makes it much, much harder for him to get away with silencing those therapists whose therapy affirmed their patients’ identity by putting their license at risk if they do.

The main problem with this decision, however, is that the Court picked a law prohibiting conversion therapy as the moment to finally articulate that heightened scrutiny applies to licensing, including medical licensing. Conversion therapy, as Justice Jackson described in her dissenting opinion, is a scientifically discredited approach “designed to ‘convert’ a person’s sexual orientation or gender identity, so that the person will become heterosexual or cisgender.” [Dissent p.3]. Historically it has been delivered via “aversive modalities” that many have likened to torture, ranging from “inducing nausea, vomiting, or paralysis in patients or subjecting them to severe electric shocks” to “telling patients to snap an elastic band on their wrists in response to nonconforming thoughts.” [Dissent p.3]

Importantly, however, to the extent that any law prohibits these practices, those laws remain in force – this decision does not affect them. (“The question before us is a narrow one. Ms. Chiles does not question that Colorado’s law banning conversion therapy has some constitutionally sound applications. She does not take issue with the State’s effort to prohibit what she herself calls ‘long-abandoned, aversive’ physical interventions.” [Majority p.7]). But the decision does reach conversion therapy delivered via talk therapy, where therapists “seek to encourage patients to change their behavior in an attempt to ‘change’ their identity.” [Dissent p.3]. As Jackson explained, this approach also causes real harm. [Dissent p.4-5]. And it’s a kind of harm that states like Colorado, which passed the law challenged here, have an interest in stopping. [Dissent p.5-7].

Making it hard for states to do so raises a number of concerns, such as that the decision will give a veneer of legitimacy to conversion therapy and stoke the hostile anti-LGBTQ+ attitudes driving it, as well as create the risk that conversion therapy, at least insofar as it includes talk therapy, might be something that minors could be legally subjected to in Colorado and elsewhere. There is also the fear that even if the Court has now articulated a good rule about heightened scrutiny it will only remember to apply it in cases like these where it will lead to results consistent with the Court majority’s biases—in other words, while the Court may be happy to subject Colorado’s anti-conversion therapy rule to strict scrutiny, there is the fear that it will conveniently forget to apply it to, say, Texas’s law trying to punish those who refuse to engage in it.

It also raises a collateral concern even on the speech-protection front, that subjecting licensure requirements to strict scrutiny could have the practical effect of diluting the standard. As Jackson also noted, we have long allowed states to regulate medical professionals, [Dissent p.8], as well as other licensed professionals like lawyers, and much of the regulation is directed to how licensed practitioners speak in some way as they provide their services. Perhaps all these efforts could actually pass strict scrutiny. In fact, it’s even still possible that Colorado’s law might yet survive it; although Justice Gorsuch’s majority opinion casts some doubt, the case is not over.

Rather than deciding the question themselves, the Court remanded the case to the lower courts to apply the more exacting strict scrutiny standard rather than the less demanding rational basis review they originally applied. Presumably there will be further opportunity for briefing and argument to show that the particular harm of conversion therapy creates the compelling state interest Colorado needed to act, and that barring licensed therapists from providing it via talk therapy is a sufficiently narrowly tailored remedy.

But the problem with applying strict scrutiny to so much ordinary licensing regulation is that the standard might become too easy to satisfy whenever there are strong policy reasons favoring the government action. If strict scrutiny essentially allows everything, it is no longer useful as a meaningful filter. After all, there are always compelling reasons for the government to care about the quality of the services licensees deliver through their professional expression, but just because the government has a valid reason to regulate does not mean that everything it does to regulate is constitutional.

Strict scrutiny also requires that the state action be narrowly tailored, in addition to being motivated by a compelling reason, and it is too easy for courts to skip that part of the analysis, as we saw when the DC Circuit somehow blessed the TikTok ban. The fear is that the more strict scrutiny is applied to fairly ordinary state regulation of licensed practitioners, the more likely it is to generate precedent that dilutes the standard, so that it is no longer so strict when we need it to be, especially for state action that is more exceptional. (The Supreme Court greenlighted the TikTok ban using a lesser standard, which was itself extremely problematic, as the ban should have been found unconstitutional. But at least the tool that should have applied to it remained sharp for future use, rather than being dulled by that bad decision.)

On the other hand, a decision upholding the lower courts’ use of rational basis review would have done no one any favors. As Justice Kagan wrote in her concurrence, joined by Justice Sotomayor, it is easy to imagine a law that mirrors Colorado’s but prohibits talk therapy that accepts LGBTQ+ identity instead of challenging it, and now advocates have a much more powerful tool to challenge such a law.

Of course, it does not matter what the State’s preferred side is. Consider a hypothetical law that is the mirror image of Colorado’s. Instead of barring talk therapy designed to change a minor’s sexual orientation or gender identity, this law bars therapy affirming those things. As Ms. Chiles readily acknowledges, the First Amendment would apply in the identical way. [Concurrence p.3]

As Texas shows, such a situation is not hypothetical. But now, with this decision, people challenging such censorial government efforts can turn to long-established First Amendment doctrine in their fight. And the doctrine remains stable, rather than being swiss-cheesed with bespoke exceptions tied to certain policy preferences. No matter how valid those preferences, if they can be given special constitutional treatment then so can bad ones. This decision helps buttress the guardrails preventing speech from being protected or not based on whether the government likes it, which is the whole reason we have the First Amendment: to make sure government preferences cannot dictate what views people can express.

Which is especially important when the courts cannot be trusted to overcome their biases and exercise good sense about which policy preferences are good and bad. The Supreme Court of course has only itself to blame that the public is so primed to believe its decisions are driven by bias rather than neutral, sustainable doctrine. Nevertheless, this decision stands as an important declaration of law consistent with existing First Amendment jurisprudence, and one that will ultimately leave everyone, including those challenging government attacks on LGBTQ+ interests, far better off than if the Court had let stand the lower courts’ decisions upholding the law under a less speech-protective rule. In fact it will be an important decision for anyone fighting censorship in any context, including those we generally talk about here, because with it, the rule that has long been the rule remains the rule: when a government action non-incidentally touches on speech, is content-based, and is not viewpoint neutral, strict scrutiny applies.

Per this decision, a law targeting what therapists can say inherently involves speech, and not merely incidentally. And it targets that speech in a way that is not viewpoint-neutral; the law embodies a specific preference, that conversion therapy is bad. Because the law targets the content of speech in a non-viewpoint-neutral way, it must satisfy strict scrutiny, a more exacting standard than the rational basis review the lower courts had used.

Turning to the merits, both the district court and the Tenth Circuit denied Ms. Chiles’s request for a preliminary injunction. The courts recognized that Ms. Chiles provides only “talk therapy.” And they acknowledged that Colorado’s law regulates the “verbal language” she may use. But, the courts held, the main thrust of the State’s law is to delineate which “treatments” and “therapeutic modalit[ies]” are permissible. Accordingly, the courts reasoned that Colorado’s law is best understood as regulating “professional conduct.” At most, they continued, Colorado’s law regulates speech only “incidentally” to professional conduct. As a result, the courts concluded, Colorado’s law triggers no more than “rational basis review” under the First Amendment, requiring the State to show merely that its law is rationally related to a legitimate governmental interest. Because the State satisfied that standard, the courts held that Ms. Chiles was not entitled to the relief she sought. [Majority p.6]

[…]

Consistent with the First Amendment’s jealous protections for the individual’s right to think and speak freely, this Court has long held that laws regulating speech based on its subject matter or “communicative content” are “presumptively unconstitutional.” Reed v. Town of Gilbert, 576 U. S. 155, 163 (2015). As a general rule, such “content-based” restrictions trigger “strict scrutiny,” a demanding standard that requires the government to prove its restriction on speech is “narrowly tailored to serve compelling state interests.” Ibid. Under that test, it is “‘rare that a regulation . . . will ever be permissible.’” Brown v. Entertainment Merchants Assn., 564 U. S. 786, 799 (2011) (quoting United States v. Playboy Entertainment Group, Inc., 529 U. S. 803, 818 (2000)).

We have recognized, as well, the even greater dangers associated with regulations that discriminate based on the speaker’s point of view. When the government seeks not just to restrict speech based on its subject matter, but also seeks to dictate what particular “opinion or perspective” individuals may express on that subject, “the violation of the First Amendment is all the more blatant.” Rosenberger v. Rector and Visitors of Univ. of Va., 515 U. S. 819, 829 (1995). “Viewpoint discrimination,” as we have put it, represents “an egregious form” of content regulation, and governments in this country must nearly always “abstain” from it. Ibid.; see also Iancu v. Brunetti, 588 U. S. 388, 393 (2019) (describing “the bedrock First Amendment principle that the government cannot discriminate” based on viewpoint (internal quotation marks omitted)); Good News Club v. Milford Central School, 533 U. S. 98, 112–113 (2001); Barnette, 319 U. S., at 642. [Majority p.8-9]

[…]

As applied here, Colorado’s law does not just regulate the content of Ms. Chiles’s speech. It goes a step further, prescribing what views she may and may not express. For a gay client, Ms. Chiles may express “[a]cceptance, support, and understanding for the facilitation of . . . identity exploration.” For a client “undergoing gender transition,” Ms. Chiles may likewise offer words of “[a]ssistance.” But if a gay or transgender client seeks her counsel in the hope of changing his sexual orientation or gender identity, Ms. Chiles cannot provide it. The law forbids her from saying anything that “attempts . . . to change” a client’s “sexual orientation or gender identity,” including anything that might represent an “effor[t] to change [her client’s] behaviors or gender expressions or . . . romantic attraction[s].” [Majority p.13]

But even if the law as it stands can’t survive strict scrutiny, Justice Kagan, in her concurrence joined by Justice Sotomayor, suggested ways the law might be amended so that it could be upheld.

It would, however, be less [likely to be unconstitutional] if the law under review was content based but viewpoint neutral. Such content-based laws, as the Court explains, trigger strict scrutiny “[a]s a general rule.” But our precedents respecting those laws recognize complexity and nuance. We apply our most demanding standard when there is any “realistic possibility that official suppression of ideas is afoot”—when, that is, a (merely) content-based law may reasonably be thought to pose the dangers that viewpoint-based laws always do. Davenport v. Washington Ed. Assn., 551 U. S. 177, 189 (2007). But when that is not the case—when a law, though based on content, raises no real concern that the government is censoring disfavored ideas—then we have not infrequently “relax[ed] our guard.” Reed, 576 U. S., at 183 (opinion of KAGAN, J.); see Davenport, 551 U. S., at 188 (noting the “numerous situations in which [the] risk” of a content-based law “driv[ing] certain ideas or viewpoints from the marketplace” is “attenuated” or “inconsequential, so that strict scrutiny is unwarranted”). Just two Terms ago, for example, the Court declined to apply strict scrutiny to a content-based but viewpoint-neutral trademark restriction. See Vidal v. Elster, 602 U. S. 286, 295 (2024); id., at 312 (BARRETT, J., concurring in part); id., at 329–330 (SOTOMAYOR, J., concurring in judgment). In the trademark context, as in some others, experience and reason alike showed “no significant danger of idea or viewpoint” bias. R. A. V., 505 U. S., at 388.

The same may well be true of content-based but viewpoint-neutral laws regulating speech in doctors’ and counselors’ offices.* Medical care typically involves speech, so the regulation of medical care (which is, of course, pervasive) may involve speech restrictions. And those restrictions will generally refer to the speech’s content. Cf. Reed, 576 U. S., at 177 (Breyer, J., concurring in judgment) (noting that “[r]egulatory programs” addressing speech “inevitably involve content discrimination”). But laws of that kind may not pose the risk of censorship—of “official suppression of ideas”—that appropriately triggers our most rigorous review. R. A. V., 505 U. S., at 390. And that means the “difference between viewpoint-based and viewpoint-neutral content discrimination” in the health-care context could prove “decisive.” Vidal, 602 U. S., at 330 (opinion of SOTOMAYOR, J.). Fuller consideration of that question, though, can wait for another day. We need not here decide how to assess viewpoint-neutral laws regulating health providers’ expression because, as the Court holds, Colorado’s is not one. [Concurrence p.3-4]

Ultimately, despite all of these concerns, the decision is still a good one that will leave everyone better off, and not just in cases that reach the Supreme Court but in every state and federal court hearing every challenge to laws trying to penalize certain views, including views accepting of LGBTQ+ identities. A decision to the contrary, one allowing rational basis to be the test of the law’s constitutionality, could have been used to defend laws that, instead of fighting LGBTQ+ prejudice as this one tried to do, advanced it. As Texas illustrates, there are already examples of government actors attempting to impose their biased viewpoints via licensing requirements for therapists. This decision, even if it may itself reflect this Supreme Court’s animus toward LGBTQ+ people, still makes further state action motivated by that animus that much harder for any government actor to impose.

Filed Under: 1st amendment, chiles v. salazar, colorado, conversion therapy, free speech, strict scrutiny, supreme court

As a Tool of Productivity, AI Can Make the Effort to Learn More Meaningful

I want to share a story of struggle. Actually, two kinds of struggle.

My father completed his doctorate at the University of Utah in the early 1970s. For his dissertation, he ran a statistical analysis on genealogical records to determine the impact of certain economic conditions on family size.

He accomplished this on one of the most advanced computers of the time. His method? Literally punching out little rectangles in dozens of stiff paper cards, and feeding the stack into the computer.

My father was a lowly graduate student, and because the demand for computing time at the university was sky high, he had to run his analysis in the middle of the night. He spent many nights punching cards and running them through the machine. Even a single mispunch would cause the entire program to stop running and require painstaking troubleshooting, re-punching, and another night at the computer lab.

Unproductive vs. Productive Struggle

The soul-sapping sleep deprivation and endless paper punching that stood between my father and his goals represents the first kind of struggle in my story: unproductive struggle — the challenging, unavoidable tasks we must perform toward a learning goal, but which add no value to the intellectual outcome.

The real intellectual challenge in my father’s work was in deciding which variables belonged in the model, determining how to represent economic conditions over time, and interpreting the data. This is the second kind of struggle: productive struggle. That is, the effort a learner expends to make sense of concepts, to figure something out that is not immediately apparent. This struggle leads to growth and insight. It builds judgment, expertise and understanding.

What is frustrating about my father’s story in hindsight is that so much of his time and cognitive energy were consumed by the unproductive struggle of punching cards and managing the computer. Without those barriers, he would have had more capacity for the productive struggle that leads to meaningful learning.

Thinking About What Matters

When it comes to AI in schools, some educators fear that it will make learning too easy, a worry often framed as “cognitive laziness”: the assumption that we will offload our thinking to AI and eventually lose our ability to think critically. This is a real risk with any technology that makes our mental work more efficient, and AI is uniquely adept at taking on cognitively demanding tasks. But ceding our reasoning power to AI isn’t a foregone conclusion, and simply keeping AI out of learning settings doesn’t have to be our solution for preserving our mental capacities.

Just as better computing tools would have freed my father from punching cards without removing the intellectual rigor of his work, today’s tools, including AI, have the potential to offload unproductive struggle, while preserving, and even amplifying, the productive struggle that is central to learning.

Here’s an example: When reading comprehension is not the goal of a lesson but a necessary prerequisite — a student having to read an article to understand the causes of the French Revolution, for example — AI tools can adjust reading levels on the fly to assist learners who are below grade level or for whom English is not their first language. This allows them to focus on the history rather than on decoding the text.

Refining Rigor

So what does this mean for educators who are grappling with how to help students use AI effectively?

First, we need to remind ourselves and help our students understand that the goal of learning has never been to make learning easy. It is to make it meaningful. We must ensure that learners are spending their time wrestling with big ideas, not battling logistics or bogged down by rote tasks.

Second, educators need to face a hard truth about the assignments we give students. Many assignments contain a mix of productive and unproductive struggle, and we are not always very intentional about which is which. Under crushing time and resource pressure, we can become unreflective about the distinction between productive and unproductive work. We inherit assignments, reuse problem sets, and value rigor without always asking where the rigor actually lies.

For instance, requiring students to write citations according to a set format may feel rigorous, but the cognitive work of formatting has little to do with the intellectual work of evaluating sources and integrating evidence into an argument. This shift requires us to redesign tasks, rethink assessments and, if necessary, let go of practices that feel rigorous but don’t meaningfully deepen understanding.

If AI forces us to confront that, it may be one of the most useful disruptions education has experienced in decades.

Sharpening Learning

If we do this well, AI won’t hollow out learning; it will sharpen it. It will give students more space to wrestle with ideas instead of mechanics, more time to interpret instead of transcribe, and more opportunity to make active sense of the world. It will give us a chance to be far more intentional about the kind of struggle we ask students to engage in.

In the end, AI won’t decide whether our students experience cognitive laziness or cognitive growth. We will decide that by how we design assignments and assessments, and by the choices we make about which AI tools to adopt and how we choose to use them.

This is our chance to weed out the punch cards and open up more time for students to struggle over things that truly matter.

Microsoft launches three in-house AI models in direct challenge to OpenAI

Six months after renegotiating the contract that once barred it from independently pursuing frontier AI, Microsoft has released three in-house models that directly challenge the partner it spent $13 billion cultivating. MAI-Transcribe-1, MAI-Voice-1, and MAI-Image-2 are now available in Microsoft Foundry, and they do not carry OpenAI’s name anywhere on the label.

The models are the first publicly released output of the MAI Superintelligence team that Mustafa Suleyman, CEO of Microsoft AI, formed in November 2025 with a stated mission of pursuing what the company calls “humanist superintelligence.” In a March internal memo first reported by Business Insider, Suleyman wrote that he intended to focus all of his energy on superintelligence and deliver world-class models for Microsoft over the next five years. That ambition now has its first tangible evidence.

MAI-Transcribe-1 is, on paper, the most immediately disruptive of the three. The speech-to-text model claims the lowest word error rate across 25 languages on the FLEURS benchmark, averaging 3.8 per cent, and Microsoft says it outperforms OpenAI’s Whisper-large-v3 on all 25 languages, Google’s Gemini 3.1 Flash on 22 of 25, and ElevenLabs’ Scribe v2 on 15 of 25. It runs 2.5 times faster than Microsoft’s previous Azure Fast transcription service and is priced at $0.36 per hour of audio. Perhaps most revealing is the team that built it: just 10 people.
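For reference, the word error rate behind that 3.8 per cent figure is the standard speech-to-text metric: the word-level edit distance (substitutions, insertions, and deletions) between a model’s transcript and a human reference, divided by the number of words in the reference. A minimal sketch of the computation (illustrative only; the function and example sentences are mine, not Microsoft’s or FLEURS’s evaluation code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[-1][-1] / len(ref)

# One substituted word in a five-word reference gives a WER of 0.2 (20%).
print(wer("please call me back now", "please call me back soon"))  # → 0.2
```

On a benchmark like FLEURS, a score like this is computed per utterance and then aggregated across each language’s test set, which is what allows the per-language comparisons against Whisper, Gemini, and Scribe reported above.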

MAI-Voice-1 completes the audio loop. The text-to-speech model generates 60 seconds of natural-sounding audio in under one second on a single GPU and supports custom voice creation from a few seconds of sample audio. Combined with MAI-Transcribe-1 and a large language model of the customer’s choosing, it forms a complete voice pipeline that runs entirely on Microsoft infrastructure without any dependency on OpenAI’s technology.

MAI-Image-2, the oldest of the three, had already debuted at number three on the Arena.ai text-to-image leaderboard in March, placing it behind only Google’s Gemini 3.1 Flash and OpenAI’s GPT Image 1.5. The model was developed in collaboration with photographers, designers, and visual storytellers, and WPP, one of the world’s largest marketing groups, is among the first enterprise partners building with it at scale.

The strategic context matters more than the benchmarks. Until the September 2025 renegotiation, Microsoft’s original partnership agreement with OpenAI contractually prevented the company from independently pursuing general AI development. The revised memorandum of understanding changed that calculus fundamentally. Microsoft retained licensing rights to everything OpenAI builds through 2032, gained $250 billion in new Azure cloud business commitments, and crucially won the freedom to build competing models. Suleyman acknowledged the pivot directly: the contract renegotiation, he said, enabled Microsoft to independently pursue its own superintelligence.

The timing is deliberate. Jacob Andreou, formerly a senior vice-president at Snap, took over as executive vice-president of Copilot on 17 March, freeing Suleyman from day-to-day product responsibilities. The MAI models landed barely two weeks later. Microsoft also hired Ali Farhadi, the former chief executive of the Allen Institute for AI, for Suleyman’s superintelligence team in March, a recruitment signal that the ambitions extend well beyond transcription and image generation.

For OpenAI, the development creates an awkward dynamic. Microsoft remains its single largest investor and its primary cloud infrastructure provider, and the two companies continue to share a platform in Foundry, which hosts both OpenAI and Microsoft models. But OpenAI’s own push into commercial monetisation is accelerating in parallel, and the relationship is beginning to resemble two companies orbiting the same market with overlapping products rather than a partnership with a clear division of labour. OpenAI’s $110 billion raise in February, backed by SoftBank, Nvidia, and Amazon, valued the company independently of Microsoft at a level that makes the original partnership framing increasingly anachronistic.

The broader AI model market is fragmenting along similar lines. Anthropic’s $30 billion raise at a $380 billion valuation established it as a credible third force in enterprise AI, with run-rate revenue of $14 billion. Google continues to iterate rapidly on Gemini. The era in which OpenAI was the only game in town for frontier AI capabilities, and Microsoft was content to be its exclusive distribution channel, is definitively over.

Microsoft Foundry, the platform formerly known as Azure AI Foundry and before that Azure AI Studio (the second rebrand in twelve months), now serves developers at more than 80,000 enterprises including 80 per cent of Fortune 500 companies. That distribution advantage is what makes the MAI model family strategically significant: Microsoft does not need to beat OpenAI on every benchmark to shift enterprise spending toward in-house models. It needs to be competitive enough that customers choose the integrated option over the third-party alternative, a dynamic that the past year of AI industry consolidation has made increasingly plausible.

Suleyman has said it will take another year or two before the superintelligence team produces frontier-class language models. What landed this week is the foundation: a multimodal toolkit that gives Microsoft its own voice, ears, and eyes independent of OpenAI. The $13 billion partnership is not ending. But the premise on which it was built, that Microsoft needed OpenAI to compete in AI, is being quietly dismantled one model release at a time.
