Revel is heading into AXPONA 2026 (April 10-12) with a clear focus: the debut of its new Performa4 speaker series, expected to fall between $2,000 and $7,000 per pair. Under the HARMAN Luxury Audio Group umbrella, which also includes Arcam, JBL Synthesis, Lexicon, and Mark Levinson, Revel isn’t going after statement pricing here. Performa4 is aimed at the part of the market where most serious systems are actually built, and where the competition is crowded, well established, and not particularly forgiving.
Revel Performa4 Speaker Series
Revel’s Performa4 Series is a new loudspeaker line built on the company’s established approach to acoustic design and measurement. It reflects three decades of engineering focused on controlled performance and consistent results in real-world listening environments.
Revel’s Performa4 Series consists of two floorstanding models (F346 and F345), two bookshelf speakers (M146 and M145), a center channel (C245), and a powered subwoofer (B140). With multiple configurations available, the Performa4 lineup can be used in both two-channel music systems and multichannel home theater setups.
“At Revel, science is at the heart of everything we do. The Performa4 series represents the culmination of thousands of hours of research, development, and real-world testing. With our new 7th-generation Acoustic Lens waveguide and advanced DCC and MCC transducers, we’ve raised the bar for what’s possible in this class,” said Jim Garrett, Senior Director of Product Strategy and Planning at HARMAN Luxury Audio.
Acoustic Lens Waveguide, DCC and MCC Transducers Explained
The Performa4 series uses Revel’s Deep Ceramic Composite (DCC) and Micro Ceramic Composite (MCC) drivers, developed to improve stiffness while keeping mass low and reducing unwanted coloration.
Woofer and midrange drivers are built on cast aluminum frames designed with Finite Element Analysis to optimize airflow, control resonance, and maintain structural stability. Each driver also uses an inverted surround and integrated trim ring, which simplifies the front baffle and keeps the layout clean.
Revel’s 7th-generation Acoustic Lens waveguide is paired with a 1-inch DCC dome tweeter to improve integration with the midrange driver. The waveguide is designed to control dispersion more consistently across the listening area, while also supporting higher efficiency and lower distortion, including at off-axis positions.
Revel Performa4 Industrial Design
The Performa4 series adopts a clean, modern design that builds on Revel’s established cabinet approach without adding unnecessary complexity.
All models use magnetically attached grilles for a flush, hardware-free front panel, along with black accent detailing that keeps the visual profile consistent across the range. Cabinets are internally cross-braced to improve rigidity and reduce unwanted vibration.
Curved side panels are finished in real wood veneers, available in Natural Walnut and Black Walnut, offering a straightforward aesthetic that fits into both traditional and contemporary spaces.
Revel Performa4 Models
F346
The F346 is a 3-way floorstanding loudspeaker and the top model in the Performa4 series.
It uses three 6.5-inch (165mm) MCC woofers, a 6.5-inch (165mm) DCC midrange driver, and a 1-inch (25mm) DCC dome tweeter paired with Revel’s 7th-generation Acoustic Lens waveguide.
Dual rear-firing ports provide low-frequency extension, while dual 5-way binding posts support bi-amping or bi-wiring. The cabinet is fitted with solid aluminum feet and includes optional floor spikes for added stability.
The F346 is rated for a frequency response of 30Hz to 40kHz (±6dB), with 88dB sensitivity and a nominal impedance of 6 ohms. Recommended amplifier power ranges from 20 to 250 watts.
F345
The F345 is a 3-way floorstanding loudspeaker that shares the same core design as the F346, using smaller drivers in a more compact cabinet.
It features three 5.25-inch (130mm) MCC woofers, a 5.25-inch (130mm) DCC midrange driver, and a 1-inch (25mm) DCC dome tweeter paired with Revel’s 7th-generation Acoustic Lens waveguide.
Dual rear-firing ports support low-frequency output, while dual 5-way binding posts allow for bi-amping or bi-wiring. The cabinet includes solid aluminum feet with optional spikes for placement flexibility.
The F345 is rated for a frequency response of 36Hz to 40kHz (±6dB), with 87dB sensitivity and a nominal impedance of 6 ohms. Recommended amplifier power ranges from 30 to 225 watts.
M146
The M146 is a 2-way bookshelf or standmount speaker positioned in the middle of the Performa4 lineup.
It uses a 6.5-inch (165mm) MCC woofer paired with a 1-inch (25mm) DCC dome tweeter and Revel’s 7th-generation Acoustic Lens waveguide.
The crossover network incorporates air-core inductors, and dual 5-way binding posts support bi-amping or bi-wiring. Optional MFS4 floor stands are available for proper placement and listening height.
The M146 is rated for a frequency response of 43Hz to 40kHz (±6dB), with 86dB sensitivity and a nominal impedance of 6 ohms. Recommended amplifier power ranges from 15 to 200 watts.
M145
The M145 is a compact 2-way bookshelf speaker and the smaller option in the Performa4 lineup.
It features a 5.25-inch (130mm) MCC woofer paired with a 1-inch (25mm) DCC dome tweeter and Revel’s 7th-generation Acoustic Lens waveguide.
Like the M146, it includes 5-way binding posts for flexible connectivity and is compatible with the optional MFS4 floor stands for proper positioning.
The M145 is rated for a frequency response of 54Hz to 40kHz (±6dB), with 85dB sensitivity and a nominal impedance of 6 ohms. Recommended amplifier power ranges from 15 to 150 watts.
C245
The C245 is a dedicated center channel speaker designed for use in multichannel systems with other Performa4 models.
It features dual 5.25-inch (130mm) MCC woofers flanking a 1-inch (25mm) DCC dome tweeter, paired with Revel’s 7th-generation Acoustic Lens waveguide for consistent dispersion across the front soundstage.
The C245 is rated for a frequency response of 55Hz to 40kHz (±6dB), with 86dB sensitivity and a nominal impedance of 6 ohms. Recommended amplifier power ranges from 15 to 200 watts.
B140
The B140 is a powered subwoofer designed to integrate with the Performa4 series in both two-channel and home theater systems.
It uses a 10-inch (250mm) fiber-composite woofer driven by a 750-watt RMS Class D amplifier, with up to 1,500 watts peak output. The design targets low-frequency extension down to 26Hz.
Rear-panel controls include a variable low-pass filter (50–150Hz), LFE input, phase adjustment, volume control, and auto on/off functionality. A rear-ported enclosure using Revel’s Constant Pressure Gradient design is intended to reduce turbulence and maintain cleaner low-frequency output.
MFS4 Floorstands
Revel also offers the optional MFS4 floor stands for the M146 and M145 bookshelf speakers.
Constructed from extruded aluminum and steel, the MFS4 stands are designed to position each speaker at an appropriate listening height. They include built-in cable management and optional spikes for use on carpeted surfaces. The stands are sold in pairs.
Revel Performa4 Comparisons

| | F346 | F345 | M146 | M145 | C245 |
|---|---|---|---|---|---|
| Speaker Type | Floorstanding | Floorstanding | Bookshelf | Bookshelf | Center |
| Price | $6,999/pair | $4,999/pair | $2,999/pair | $1,999/pair | $1,499/each |
| Configuration | 3-way | 3-way | 2-way | 2-way | 2-way |
| Tweeter | 1-inch (25mm) DCC dome, 7th-gen Acoustic Lens waveguide | 1-inch (25mm) DCC dome, 7th-gen Acoustic Lens waveguide | 1-inch (25mm) DCC dome, 7th-gen Acoustic Lens waveguide | 1-inch (25mm) DCC dome, 7th-gen Acoustic Lens waveguide | 1-inch (25mm) DCC dome, 7th-gen Acoustic Lens waveguide |
| Midrange | 1 x 6.5-inch (165mm) DCC cone | 1 x 5.25-inch (130mm) DCC cone | N/A | N/A | N/A |
| Woofer(s) | 3 x 6.5-inch (165mm) MCC cone | 3 x 5.25-inch (130mm) MCC cone | 1 x 6.5-inch (165mm) MCC cone | 1 x 5.25-inch (130mm) MCC cone | 2 x 5.25-inch (130mm) MCC cone |

B140 Subwoofer

- Woofer: 10-inch (250mm) coated fiber-composite cone in a cast-aluminum frame
- Amplifier: Class D, 750W RMS (1,500W peak)
- Enclosure tuning: bass-reflex via rear-mounted port
- Controls: auto power, crossover, level, phase
- Inputs: RCA LFE/line level, 3.5mm, 12V trigger
- Frequency response (±6dB): 26Hz – 150Hz
- Crossover frequency (variable): 50Hz – 150Hz
- Dimensions: 14.8 x 16.9 x 17.2 inches (376.2 x 429.3 x 436.4 mm)
- Weight: 61.3 lbs (27.8 kg)
The Bottom Line
Revel’s Performa4 series is a calculated move into one of the most competitive segments in loudspeakers. The combination of DCC and MCC driver materials, the 7th-generation Acoustic Lens waveguide, and consistent cabinet engineering across the range points to a focus on controlled dispersion, tonal consistency, and predictable in-room performance, areas where Revel has historically been very disciplined.
The lineup is clearly built for flexibility. It can anchor a straightforward two-channel system or scale into a full home theater without mixing and matching across different voicings. That matters for buyers who want system coherence without overthinking every component swap.
What’s less clear is how much separation there is between models beyond size and output, and whether the subwoofer’s pricing will make sense for buyers building out a full system. There’s also no indication of built-in room correction or system-level integration features, which are becoming more common even in passive speaker ecosystems when paired with modern electronics.
This is for listeners who want a complete, measurement-driven speaker system in the $2,000 to $7,000 range without stepping into five-figure territory. Not entry-level, not cost-no-object—right in the middle where most serious systems live.
At AXPONA 2026, the F346 will be demonstrated with an Arcam SA45 integrated streaming amplifier and CD5 CD player, which should give a clear sense of how the top model performs with both streaming and physical sources.
A new NYT Connections puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Friday’s puzzle instead then click here: NYT Connections hints and answers for Friday, April 3 (game #1027).
Good morning! Let’s play Connections, the NYT’s clever word game that challenges you to group answers in various categories. It can be tough, so read on if you need Connections hints.
What should you do once you’ve finished? Why, play some more word games of course. I’ve also got daily Strands hints and answers and Quordle hints and answers articles if you need help for those too, while Marc’s Wordle today page covers the original viral word game.
SPOILER WARNING: Information about NYT Connections today is below, so don’t read on if you don’t want to know the answers.
NYT Connections today (game #1028) – today’s words
Today’s NYT Connections words are…
BAND
MASK
CAPE
LIE
BLUFF
BOOT
SPIT
SHIELD
LET
POINT
BASE
COVER
SLEEPING
SCREEN
SUMMER
DOGS
NYT Connections today (game #1028) – hint #1 – group hints
What are some clues for today’s NYT Connections groups?
YELLOW: Say what you see
GREEN: Slightly hidden
BLUE: Seaside geography
PURPLE: Add a word that rhymes with “lamp”
Need more clues?
We’re firmly in spoiler territory now, but read on if you want to know what the four theme answers are for today’s NYT Connections puzzles…
NYT Connections today (game #1028) – hint #2 – group answers
What are the answers for today’s NYT Connections groups?
YELLOW: “LET SLEEPING DOGS LIE”
GREEN: OBSCURE
BLUE: COASTAL LANDFORMS
PURPLE: ____ CAMP
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON’T WANT TO SEE THEM.
NYT Connections today (game #1028) – the answers
The answers to today’s Connections, game #1028, are…
YELLOW: “LET SLEEPING DOGS LIE” DOGS, LET, LIE, SLEEPING
GREEN: OBSCURE COVER, MASK, SCREEN, SHIELD
BLUE: COASTAL LANDFORMS BLUFF, CAPE, POINT, SPIT
PURPLE: ____ CAMP BAND, BASE, BOOT, SUMMER
My rating: Easy
My score: Perfect
Am I the only person who really dislikes it when four tiles spell out the name of their category, as today’s yellow group “LET SLEEPING DOGS LIE” does with LET, SLEEPING, DOGS, and LIE? My reason is that Connections often teases us with groups that look this obvious, only for them to turn out to be a trick designed to lure us into a mistake.
Apologies, I shall wind my neck in now.
Irritations aside, this was a relatively straightforward game — probably down to that one big red herring that wasn’t actually a red herring, making the deployment of all the other red herrings moot.
Yesterday’s NYT Connections answers (Friday, April 3, game #1027)
PURPLE: ____ CONTROL CRUISE, DAMAGE, GROUND, MISSION
What is NYT Connections?
NYT Connections is one of several increasingly popular word games made by the New York Times. It challenges you to find groups of four items that share something in common, and each group has a different difficulty level: yellow is easy, green a little harder, blue often quite tough and purple usually very difficult.
On the plus side, you don’t technically need to solve the final one, as you’ll be able to answer that one by a process of elimination. What’s more, you can make up to four mistakes, which gives you a little bit of breathing room.
It’s a little more involved than something like Wordle, however, and there are plenty of opportunities for the game to trip you up with tricks. For instance, watch out for homophones and other wordplay that can disguise the answers.
It’s playable for free via the NYT Games site on desktop or mobile.
Just when things had started to feel quiet on the PlayStation front, a fresh wave of leaks has stirred the pot again. There’s chatter around the PlayStation 6, a next-gen handheld, and even some behind-the-scenes changes that hint at how Sony is preparing for what’s next.
None of this is official, of course, but even if part of it holds up, Sony isn’t just building new hardware, but also laying the groundwork for how that hardware will actually work.
What do the latest PlayStation leaks actually say?
According to trusted leaker Moore’s Law is Dead, the biggest headline is around the PlayStation 6, which may not be as far away as expected. Early details suggest that Sony is already deep into development, with timelines hinting at a launch window that’s closer than the typical console cycle would suggest.
But that’s only part of the story. Alongside the PS6 chatter, there’s renewed talk of a dedicated PlayStation handheld. Unlike the PlayStation Portal, which is more of a remote-play device, this new handheld is rumored to be a standalone system capable of running games natively. Think of it as the new PSP or PS Vita.
Another interesting detail is around “PlayGo,” which has reportedly been introduced in the latest PS5 SDK. Think of it as Sony’s version of Xbox’s Smart Delivery. It allows developers to break games into smaller chunks, so each device only downloads the assets it actually needs. That means a standard PS5 wouldn’t need to download higher-resolution textures meant for a PS5 Pro, and potentially, future devices could follow the same logic.
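Conceptually, that per-device delivery logic is just filtering an asset manifest against a device profile. PlayGo’s actual manifest format is not public, so the `AssetChunk` shape, the size figures, and the device tags in this sketch are all invented for illustration:

```typescript
// Hypothetical sketch of per-device asset selection, in the spirit of the
// rumored PlayGo system. Nothing here reflects Sony's actual SDK.

type DeviceProfile = "ps5" | "ps5-pro";

interface AssetChunk {
  name: string;
  sizeMB: number;
  // Which devices need this chunk; an empty list means "everyone".
  targets: DeviceProfile[];
}

// A game's assets split into chunks: shared data plus device-specific packs.
const manifest: AssetChunk[] = [
  { name: "core-game-data", sizeMB: 60_000, targets: [] },
  { name: "textures-standard", sizeMB: 15_000, targets: ["ps5"] },
  { name: "textures-4k", sizeMB: 30_000, targets: ["ps5-pro"] },
];

// Pick only the chunks a given device actually needs.
function chunksFor(device: DeviceProfile, chunks: AssetChunk[]): AssetChunk[] {
  return chunks.filter(
    (c) => c.targets.length === 0 || c.targets.includes(device)
  );
}

function downloadSizeMB(device: DeviceProfile, chunks: AssetChunk[]): number {
  return chunksFor(device, chunks).reduce((sum, c) => sum + c.sizeMB, 0);
}

// A base PS5 skips the 4K texture pack entirely:
console.log(downloadSizeMB("ps5", manifest));     // 75000
console.log(downloadSizeMB("ps5-pro", manifest)); // 90000
```

In this toy version, a base PS5 downloads 75GB instead of the full 105GB library, which is the kind of saving the rumored system would aim for.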
PS6 pricing leaks sound surprisingly… reasonable
According to MLID, now might not be the best time to drop $900 on a PS5 Pro. The claim is pretty bold, but they suggest skipping the current-gen upgrade and waiting, because the base PlayStation 6 could actually end up being cheaper than the PS5 Pro. The reasoning? Sony is reportedly designing the PS6 from the ground up to be more cost-efficient, with cheaper cooling, power delivery, and overall manufacturing.
In fact, some estimates even suggest a bill of materials around $750, which could keep the final price comfortably below $1,000. That would be considerably cheaper than Microsoft’s upcoming Project Helix, which could go up to $1,200. Then again, these are still early leaks and far from official, so it’s worth taking all of this with a pinch of salt for now.
Microsoft plans to invest $10 billion in Japan from 2026 to 2029 to expand AI infrastructure, boost local cloud capacity, train 1 million engineers and developers, and deepen cybersecurity cooperation with the Japanese government. Reuters reports: The investment includes the training of 1 million engineers and developers by 2030, Microsoft said, which was unveiled during a visit to Tokyo by Vice Chair and President Brad Smith. In a statement, the company said the plan aligns with Prime Minister Sanae Takaichi’s goal to boost growth through advanced, strategic technologies while safeguarding national security.
Microsoft will work with domestic firms including SoftBank and Sakura Internet to expand Japan-based AI computing capacity, allowing companies and government agencies to keep sensitive data within the country while accessing Microsoft Azure services, it said. It will also deepen cooperation with Japanese authorities on sharing intelligence related to cyber threats and crime prevention.
Most of the discussions about the impact of the latest generative AI systems on copyright have centered on text, images and video. That’s no surprise, since writers, artists and film-makers feel very strongly about their creations, and members of the public can relate easily to the issues that AI raises for this kind of creativity. But there’s another creative domain that has been massively affected by genAI: software engineering. More and more professional coders are using generative AI to write major elements of their projects for them. Some top engineers even claim that they have stopped coding completely, and now act more as a manager for the AI generation of code, because the available tools are now so powerful. This applies in the world of open source software too. But a recent incident shows that it raises some interesting copyright issues there that are likely to affect the entire software world.
It concerns a project called chardet, “a universal character encoding detector for Python. It analyzes byte strings and returns the detected encoding, confidence score, and language.” A long and detailed post on Ars Technica explains what has happened recently:
The [chardet] repository was originally written by coder Mark Pilgrim in 2006 and released under an LGPL license that placed strict limits on how it could be reused and redistributed.
Dan Blanchard took over maintenance of the repository in 2012 but waded into some controversy with the release of version 7.0 of chardet last week. Blanchard described that overhaul as “a ground-up, MIT-licensed rewrite” of the entire library built with the help of Claude Code to be “much faster and more accurate” than what came before.
Licensing lies at the heart of open source. When Richard Stallman invented the concept of free software, he did so using a new kind of software license, the GPL. This allows anyone to use and modify software released under the GPL, provided they release their own code under the same license. As the above description makes clear, chardet was originally released under the LGPL – one of the GPL variants – but version 7.0 is licensed under the much more permissive MIT license. According to Ars Technica:
Blanchard says he was able to accomplish this “AI clean room” process by first specifying an architecture in a design document and writing out some requirements to Claude Code. After that, Blanchard “started in an empty repository with no access to the old source tree and explicitly instructed Claude not to base anything on LGPL/GPL-licensed code.”
That is, generative AI would appear to allow open source licenses like the GPL to be circumvented by rewriting the code without copying anything directly from the original. That’s possible because AI is now so good at coding that the results can be better than the original, as Blanchard proved with version 7.0 of chardet. And because it is new code, it can be released under any license. In fact, it is quite possible that code produced by genAI is not covered by copyright at all, for the same reason that artistic output created solely by AI can’t be copyrighted. If the license can be changed or simply cancelled in this way, then there is no way to force people to release their own variants only under the GPL, as Stallman intended. Similarly, the incentive for people to contribute their own improvements to the main version is diminished.
The ramifications extend even further. These kinds of “AI clean room” implementations could be used to make new versions of any proprietary software. That’s been possible for decades – Stallman’s 1983 GNU project is itself a clean-room version of Unix – but generally requires many skilled coders working for long periods to achieve. The arrival of highly capable genAI coding tools has brought down the cost by many orders of magnitude, which means it is relatively inexpensive and quick to produce new versions of any software.
In effect, generative AI coding systems make copyright irrelevant for software, both open source and proprietary. That’s because what is important about computer code is not the details of how it is written, but what it does. AI systems can be guided to create drop-in replacements for other software that are functionally identical, but with completely different code underneath.
Companies that license their proprietary software will probably still be able to do so by offering support packages plus the promise that they take legal responsibility for their code in a way that AI-generated alternatives don’t: businesses would pay for a promise of reliability plus the ability to sue someone when things go wrong. But for the open source world these are not relevant. As a result, the latest progress in AI coding seems a serious threat to the underlying development model that has worked well for the last 40 years, and which underpins most software in use today. But a wise post by Salvatore “antirez” Sanfilippo sees opportunities too:
AI can unlock a lot of good things in the field of open source software. Many passionate individuals write open source because they hate their day job, and want to make something they love, or they write open source because they want to be part of something bigger than economic interests. A lot of open source software is either written in the free time, or with severe constraints on the amount of people that are allocated for the project, or – even worse – with limiting conditions imposed by the companies paying for the developments. Now that code is every day less important than ideas, open source can be strongly accelerated by AI. The four hours allocated over the weekend will bring 10x the fruits, in the right hands (AI coding is not for everybody, as good coding and design is not for everybody).
Perhaps a new kind of open source will emerge – Open Source 2.0 – one in which people do not contribute their software patches to a project, as they do today, but instead send their prompts that produce better versions. People might start working directly on the prompts, collaborating on ways to fine tune them. It’s open source hacking but functioning at a level above the code itself.
One possibility is that such an approach could go some way to solving the so-called “Nebraska problem”: the fact that key parts of modern digital infrastructure are underpinned by “a project some random person in Nebraska has been thanklessly maintaining since 2003”. That person may not receive many more thanks than they have in the past, but with AI assistants constantly checking, rewriting and improving the code, at least the selfless dedication to their project becomes a little less onerous, and thus a little less likely to lead to programmer burnout.
A new report dubbed “BrowserGate” warns that Microsoft’s LinkedIn is using hidden JavaScript scripts on its website to scan visitors’ browsers for installed extensions and collect device data.
According to a report by Fairlinked e.V., which claims to be an association of commercial LinkedIn users, Microsoft’s platform injects JavaScript into user sessions that checks for thousands of browser extensions and links the results to identifiable user profiles.
The author claims that this behavior is used to collect sensitive personal and corporate information, as LinkedIn accounts are tied to real identities, employers, and job roles.
“LinkedIn scans for over 200 products that directly compete with its own sales tools, including Apollo, Lusha, and ZoomInfo. Because LinkedIn knows each user’s employer, it can map which companies use which competitor products. It is extracting the customer lists of thousands of software companies from their users’ browsers without anyone’s knowledge,” the report says.
“Then it uses what it finds. LinkedIn has already sent enforcement threats to users of third-party tools, using data obtained through this covert scanning to identify its targets.”
BleepingComputer has independently confirmed part of these claims through our own testing, during which we observed a JavaScript file with a randomized filename being loaded by LinkedIn’s website.
This script checked for 6,236 browser extensions by attempting to access file resources associated with a specific extension ID, a known technique for detecting whether extensions are installed.
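This technique relies on the fact that a Chromium extension can expose files to webpages via `web_accessible_resources`: if a page can successfully fetch `chrome-extension://<id>/<resource>`, the extension must be installed. Here is a minimal sketch of the idea; the ID and resource path are placeholders, not entries from LinkedIn’s actual list:

```typescript
// Sketch of extension detection via web-accessible resources.
// The probe entries below are placeholders for illustration only.

interface ExtensionProbe {
  id: string;       // 32-character Chromium extension ID
  resource: string; // a file the extension exposes via web_accessible_resources
}

// Build the chrome-extension:// URL a page would try to fetch.
function probeUrl(p: ExtensionProbe): string {
  return `chrome-extension://${p.id}/${p.resource}`;
}

// In a browser, a successful fetch means the extension is installed.
async function isInstalled(p: ExtensionProbe): Promise<boolean> {
  try {
    const res = await fetch(probeUrl(p));
    return res.ok;
  } catch {
    return false; // the fetch rejects when the extension is absent
  }
}

// Run the full probe list and return the IDs that were detected.
async function detectExtensions(probes: ExtensionProbe[]): Promise<string[]> {
  const results = await Promise.all(probes.map(isInstalled));
  return probes.filter((_, i) => results[i]).map((p) => p.id);
}
```

A script like LinkedIn’s would simply run this loop over its list of thousands of known IDs and report the hits back to the server.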
This fingerprinting script was previously reported in 2025, but it was only detecting approximately 2,000 extensions at that time. A different GitHub repository from two months ago shows 3,000 extensions being detected, demonstrating that the number of detected extensions continues to grow.
[Image: Snippet of the list of extensions scanned for by LinkedIn’s script. Source: BleepingComputer]
While many of the extensions that are scanned for are related to LinkedIn, the script also, strangely, checks for language and grammar extensions, tools for tax professionals, and other seemingly unrelated software.
The script also collects a wide range of browser and device data, including CPU core count, available memory, screen resolution, timezone, language settings, battery status, audio information, and storage features.
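Most of these signals are exposed through standard web APIs such as `navigator.hardwareConcurrency` and `screen`. The sketch below shows what that collection looks like for a few of the listed signals; the structure and field names are illustrative, and `navigator.deviceMemory` is Chromium-only. The collector takes its sources as parameters so the logic can be exercised outside a browser:

```typescript
// Simplified sketch of browser/device fingerprint collection.
// Field names are illustrative; only a few signals are shown.

interface DeviceFingerprint {
  cpuCores: number;
  deviceMemoryGB?: number; // navigator.deviceMemory (Chromium-only)
  screenResolution: string;
  timezone: string;
  language: string;
}

function collectFingerprint(
  nav: { hardwareConcurrency: number; deviceMemory?: number; language: string },
  scr: { width: number; height: number },
  tz: string
): DeviceFingerprint {
  return {
    cpuCores: nav.hardwareConcurrency,
    deviceMemoryGB: nav.deviceMemory,
    screenResolution: `${scr.width}x${scr.height}`,
    timezone: tz,
    language: nav.language,
  };
}

// In a real page this would be invoked as:
//   collectFingerprint(
//     navigator,
//     screen,
//     Intl.DateTimeFormat().resolvedOptions().timeZone
//   );
```

Combined with the extension-probe results, even this handful of values narrows a visitor down considerably, which is why such scripts are described as fingerprinting.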
[Image: Gathering information about visitors’ devices. Source: BleepingComputer]
BleepingComputer could not verify the claims in the BrowserGate report about the use of the data or whether it is shared with third-party companies.
However, similar fingerprinting techniques have been used in the past to build unique browser profiles, which can enable tracking users across websites.
LinkedIn denies data use allegations
LinkedIn does not dispute that it detects specific browser extensions, telling BleepingComputer that the info is used to protect the platform and its users.
However, the company claims the report is from someone whose account was banned for scraping LinkedIn content and violating the site’s terms of use.
“The claims made on the website linked here are plain wrong. The person behind them is subject to an account restriction for scraping and other violations of LinkedIn’s Terms of Service.
To protect the privacy of our members, their data, and to ensure site stability, we do look for extensions that scrape data without members’ consent or otherwise violate LinkedIn’s Terms of Service.
Here’s why: some extensions have static resources (images, javascript) available to inject into our webpages. We can detect the presence of these extensions by checking if that static resource URL exists. This detection is visible inside the Chrome developer console. We use this data to determine which extensions violate our terms, to inform and improve our technical defenses, and to understand why a member account might be fetching an inordinate amount of other members’ data, which at scale, impacts site stability. We do not use this data to infer sensitive information about members.
For additional context, in retaliation for this website owner’s account restriction, they attempted to obtain an injunction in Germany, alleging LinkedIn had violated various laws. The court ruled against them and found their claims against LinkedIn had no merit, and in fact, this individual’s own data practices ran afoul of the law.
Unfortunately, this is a case of an individual who lost in the court of law, but is seeking to re-litigate in the court of public opinion without regard for accuracy.”
– LinkedIn
LinkedIn claims the BrowserGate report stems from a dispute involving the developer of a LinkedIn-related browser extension called “Teamfluence,” which LinkedIn says it restricted for violating the platform’s terms.
In documents shared with BleepingComputer, a German court denied the developer’s request for a preliminary injunction, finding that LinkedIn’s actions did not constitute unlawful obstruction or discrimination.
The court also found that automated data collection alone could infringe LinkedIn’s terms of use, and that the company was entitled to block the accounts to protect its platform.
LinkedIn argues the BrowserGate report is an attempt to re-litigate that dispute publicly.
Regardless of the reasons for the report, one point is undisputed.
LinkedIn’s site uses a fingerprinting script that detects over 6,000 extensions running in a Chromium browser, along with other data about a visitor’s system.
This is not the first time that companies have used aggressive fingerprinting scripts to detect programs running on a visitor’s device. In 2020, eBay was found running a script that scanned visitors’ computers for remote-access software; while eBay never confirmed why, it was widely believed the scans were used to block fraud from compromised devices.
It was later discovered that numerous other companies were using the same fingerprinting script, including Citibank, TD Bank, Ameriprise, Chick-fil-A, Lendup, BeachBody, Equifax IQ connect, TIAA-CREF, Sky, GumTree, and WePay.
Anthropic just pulled a move that’s… let’s just say, not going to win it many fans among power users. One of the most popular ways to supercharge Claude, using OpenClaw, is effectively being paywalled out of existence. And yeah, it’s as messy as it sounds.
Anthropic just made OpenClaw way more expensive
According to Claude Code lead Boris Cherny, Anthropic has changed how Claude subscriptions work. Starting now, a regular Claude subscription no longer covers usage through third-party tools; that usage is instead billed separately on a pay-as-you-go basis.
Starting tomorrow at 12pm PT, Claude subscriptions will no longer cover usage on third-party tools like OpenClaw.
You can still use these tools with your Claude login via extra usage bundles (now available at a discount), or with a Claude API key.
So what does that mean? Essentially, the hack many users relied on, using Claude subscription credits inside OpenClaw to run more advanced workflows, is dead. Anyone who still wants that setup will now have to pay extra on top of their subscription, either through usage bundles or API access.
And it’s not like OpenClaw was some niche experiment either. It blew up because it could handle real-world tasks like emails, calendars, and even flight check-ins, turning Claude into something closer to an actual assistant. But that popularity seems to have backfired, reportedly putting pressure on Anthropic’s infrastructure and forcing this clampdown.
This feels less like a tweak… and more like a crackdown
Let’s be real, this isn’t just a pricing change, it’s a pretty clear signal. Anthropic seems to be drawing a hard line: if you’re using Claude in ways they didn’t design (or monetize properly), expect that door to close quickly.
There’s also a bit of strategy baked in here. By making third-party usage more expensive, Anthropic nudges users toward its own ecosystem, like Claude Cowork. That’s great for control, but not so great if you liked mixing and matching tools to build your own workflows. To soften the blow, the company is offering a one-time credit equal to a month’s subscription and discounted bundles. But let’s be honest, that feels more like a transition cushion than a real solution.
The Supreme Court’s decision last year in U.S. v. Skrmetti, upholding a law depriving young trans people of the healthcare they need, is insupportable, rendering people unequal in a way the Constitution cannot possibly countenance. But its new decision in Chiles v. Salazar, addressing the First Amendment standard that applies to Colorado’s conversion therapy law, is different. Although its subject matter, touching on sexual orientation and gender identity, sounds similar to Skrmetti, it’s actually another 303 Creative: another case that protected the expression of bigoted views unacceptably hostile to LGBTQ+ people. But for much the same reason that 303 Creative was an important articulation of the First Amendment’s expansive protection, despite the apparent prejudice the plaintiff (and the Court) advanced, so is this decision.
That’s what’s good about this decision: it recognizes that the First Amendment operates in the professional licensing space and requires heightened scrutiny before states can constrain licensing based on viewpoints expressed by the licensee, including as part of the provision of services. Heightened scrutiny is what makes the First Amendment’s protections meaningful, and the Court has not always been consistent or coherent in requiring it, particularly with respect to licensure. When heightened scrutiny isn’t required, it becomes much harder to fight censorial actions taken by the government, including those driven by animus, and including those driven by anti-LGBTQ+ animus, which would also cover actions targeted at therapists supporting LGBTQ+ patients, such as those recently announced by Ken Paxton in Texas. This Supreme Court decision now makes it much, much harder for him to get away with silencing therapists whose therapy affirms their patients’ identities by putting their licenses at risk.
The main problem with this decision, however, is that the Court picked a law prohibiting conversion therapy as the moment to finally articulate that heightened scrutiny applies to licensing, including medical licensing. Conversion therapy, as Justice Jackson described in her dissenting opinion, is a scientifically discredited approach “designed to ‘convert’ a person’s sexual orientation or gender identity, so that the person will become heterosexual or cisgender.” [Dissent p.3]. Historically it has been provided via “aversive modalities” that many have likened to torture, ranging from “inducing nausea, vomiting, or paralysis in patients or subjecting them to severe electric shocks” to “telling patients to snap an elastic band on their wrists in response to nonconforming thoughts.” [Dissent p.3]
Importantly, however, to the extent that any law prohibits these practices, those laws remain in force; this decision does not affect them. (“The question before us is a narrow one. Ms. Chiles does not question that Colorado’s law banning conversion therapy has some constitutionally sound applications. She does not take issue with the State’s effort to prohibit what she herself calls ‘long-abandoned, aversive’ physical interventions.” [Majority p.7]). But the decision does reach conversion therapy delivered via talk therapy, where therapists “seek to encourage patients to change their behavior in an attempt to ‘change’ their identity.” [Dissent p.3]. As Jackson explained, this approach also causes real harm. [Dissent p.4-5]. And it’s a kind of harm that states like Colorado, which passed the law challenged here, have an interest in stopping. [Dissent p.5-7].
Making it hard for states to do so raises a number of concerns. The decision will give a veneer of legitimacy to conversion therapy and stoke the hostile anti-LGBTQ+ attitudes driving it, and it creates the risk that conversion therapy, at least insofar as it is delivered as talk therapy, might be something minors could legally be subjected to in Colorado and elsewhere. There is also the fear that, even though the Court has now articulated a good rule about heightened scrutiny, it will only remember to apply it in cases like this one, where doing so leads to results consistent with the Court majority’s biases. In other words, while the Court may be happy to subject Colorado’s anti-conversion therapy rule to strict scrutiny, it may conveniently forget to apply the same standard to, say, Texas’s law trying to punish those who refuse to engage in conversion therapy.
It also raises a collateral concern even on the speech-protection front, that subjecting licensure requirements to strict scrutiny could have the practical effect of diluting the standard. As Jackson also noted, we have long allowed states to regulate medical professionals, [Dissent p.8], as well as other licensed professionals like lawyers, and much of the regulation is directed to how licensed practitioners speak in some way as they provide their services. Perhaps all these efforts could actually pass strict scrutiny. In fact, it’s even still possible that Colorado’s law might yet survive it; although Justice Gorsuch’s majority opinion casts some doubt, the case is not over.
Rather than deciding the question itself, the Court remanded the case to the lower courts to apply the more exacting strict scrutiny standard rather than the less demanding rational basis review they originally applied. Presumably there will be more opportunity for briefing and argument to show how the particular harm of conversion therapy creates the compelling state interest Colorado needed to act, and that its prohibition on licensed therapists providing it via talk therapy is sufficiently narrowly tailored.
But the problem with applying strict scrutiny to so much regulation of licensing is that the standard might become too easy to satisfy when there are strong policy reasons favoring the government action. If strict scrutiny essentially allows everything, it stops being a meaningful filter. There are, after all, always compelling reasons for the government to care about the quality of the services licensees deliver via their professional expression, but just because the government has a valid reason to regulate does not mean that everything it does to regulate is constitutional.
Strict scrutiny also requires that the state action be narrowly tailored, in addition to being motivated by a compelling reason, and it’s too easy for courts to skip that part of the analysis, as we saw with the TikTok ban when it was somehow blessed by the DC Circuit. And the fear is that the more strict scrutiny is applied to what is fairly ordinary state regulation—of licensed practitioners—the more likely it will have the practical effect of creating precedent that dilutes the standard so that it is no longer so strict when we need it to be, especially for state action that is more exceptional. (On the TikTok ban the Supreme Court had greenlighted it using a lesser standard, which was itself extremely problematic as the ban should have been found unconstitutional, but at least the tool that should have applied to it remained sharp for future use, rather than dulled by this bad decision.)
On the other hand, a decision upholding the lower courts’ use of rational basis review would have done no one any favors. As Justice Kagan wrote in her concurrence, joined by Justice Sotomayor, it is easy to imagine a law that mirrors what the Colorado one does, prohibiting talk therapy that accepts LGBTQ+ identity instead of challenges it, and now advocates are left with a much more powerful tool to challenge it.
Of course, it does not matter what the State’s preferred side is. Consider a hypothetical law that is the mirror image of Colorado’s. Instead of barring talk therapy designed to change a minor’s sexual orientation or gender identity, this law bars therapy affirming those things. As Ms. Chiles readily acknowledges, the First Amendment would apply in the identical way. [Concurrence p.3]
As Texas shows, such a situation is not hypothetical. But now with this decision people challenging such censorial government efforts can turn to long-established First Amendment doctrine in their fight. And the doctrine remains stable, rather than something now swiss-cheesed with bespoke exceptions tied to certain policy preferences. No matter how valid those preferences, if they can be given special constitutional treatment then so can the bad ones. This decision helps buttress the guardrails preventing speech from being protected or not based on whether the government likes it, which is the whole reason we have the First Amendment, to make sure government preferences cannot dictate what views people can express.
Which is especially important when the courts cannot be trusted to overcome their biases to have good sense about which policy preferences are good and bad. The Supreme Court of course only has itself to blame that the public is so primed to believe that its decisions are driven by its biases and not neutral, sustainable doctrine. But nevertheless this decision still stands as an important declaration of law that is consistent with existing First Amendment jurisprudence and one that will ultimately leave everyone, including those challenging government actions attacking LGBTQ+ interests, far better off than if the Court had let the lower courts’ decisions invalidating the law stand after using a less speech-protective rule. In fact it will be an important one for anyone fighting censorship in any context, including those we generally talk about here, to use, because with this decision, the rule that has long been the rule remains the rule: when a government action non-incidentally touches on speech, is content-based, and is not viewpoint neutral, strict scrutiny applies.
Per this decision, a law targeting what therapists can say inherently involves speech, and not in an incidental way. And it targets it in a way that is not viewpoint-neutral; it has a specific preference, that conversion therapy is bad. As a result, as a law that targets the content of speech in a way that is not viewpoint-neutral, strict scrutiny, a more exacting standard than the rational basis review the lower courts had used, is required.
Turning to the merits, both the district court and the Tenth Circuit denied Ms. Chiles’s request for a preliminary injunction. The courts recognized that Ms. Chiles provides only “talk therapy.” And they acknowledged that Colorado’s law regulates the “verbal language” she may use. But, the courts held, the main thrust of the State’s law is to delineate which “treatments” and “therapeutic modalit[ies]” are permissible. Accordingly, the courts reasoned that Colorado’s law is best understood as regulating “professional conduct.” At most, they continued, Colorado’s law regulates speech only “incidentally” to professional conduct. As a result, the courts concluded, Colorado’s law triggers no more than “rational basis review” under the First Amendment, requiring the State to show merely that its law is rationally related to a legitimate governmental interest. Because the State satisfied that standard, the courts held that Ms. Chiles was not entitled to the relief she sought. [Majority p.6]
[…]
Consistent with the First Amendment’s jealous protections for the individual’s right to think and speak freely, this Court has long held that laws regulating speech based on its subject matter or “communicative content” are “presumptively unconstitutional.” Reed v. Town of Gilbert, 576 U. S. 155, 163 (2015). As a general rule, such “content-based” restrictions trigger “strict scrutiny,” a demanding standard that requires the government to prove its restriction on speech is “narrowly tailored to serve compelling state interests.” Ibid. Under that test, it is ” ‘rare that a regulation . . . will ever be permissible.’ ” Brown v. Entertainment Merchants Assn., 564 U. S. 786, 799 (2011) (quoting United States v. Playboy Entertainment Group, Inc., 529 U. S. 803, 818 (2000)).
We have recognized, as well, the even greater dangers associated with regulations that discriminate based on the speaker’s point of view. When the government seeks not just to restrict speech based on its subject matter, but also seeks to dictate what particular “opinion or perspective” individuals may express on that subject, “the violation of the First Amendment is all the more blatant.” Rosenberger v. Rector and Visitors of Univ. of Va., 515 U. S. 819, 829 (1995). “Viewpoint discrimination,” as we have put it, represents “an egregious form” of content regulation, and governments in this country must nearly always “abstain” from it. Ibid.; see also Iancu v. Brunetti, 588 U. S. 388, 393 (2019) (describing “the bedrock First Amendment principle that the government cannot discriminate” based on view-point (internal quotation marks omitted)); Good News Club v. Milford Central School, 533 U. S. 98, 112–113 (2001); Barnette, 319 U. S., at 642. [Majority p.8-9]
[…]
As applied here, Colorado’s law does not just regulate the content of Ms. Chiles’s speech. It goes a step further, prescribing what views she may and may not express. For a gay client, Ms. Chiles may express “[a]cceptance, support, and understanding for the facilitation of . . . identity exploration.” For a client “undergoing gender transition,” Ms. Chiles may likewise offer words of “[a]ssistance.” But if a gay or transgender client seeks her counsel in the hope of changing his sexual orientation or gender identity, Ms. Chiles cannot provide it. The law forbids her from saying anything that “attempts . . . to change” a client’s “sexual orientation or gender identity,” including anything that might represent an “effor[t] to change [her client’s] behaviors or gender expressions or . . . romantic attraction[s].” [Majority p.13]
But even if the law as it stands can’t survive strict scrutiny, in her concurrence, joined by Justice Sotomayor, Justice Kagan suggested ways the law might be amended so that it could be upheld.
It would, however, be less [likely to be unconstitutional] if the law under review was content based but viewpoint neutral. Such content-based laws, as the Court explains, trigger strict scrutiny “[a]s a general rule.” But our precedents respecting those laws recognize complexity and nuance. We apply our most demanding standard when there is any “realistic possibility that official suppression of ideas is afoot”—when, that is, a (merely) content-based law may reasonably be thought to pose the dangers that viewpoint-based laws always do. Davenport v. Washington Ed. Assn., 551 U. S. 177, 189 (2007). But when that is not the case—when a law, though based on content, raises no real concern that the government is censoring disfavored ideas—then we have not infrequently “relax[ed] our guard.” Reed, 576 U. S., at 183 (opinion of KAGAN, J.); see Davenport, 551 U. S., at 188 (noting the “numerous situations in which [the] risk” of a content-based law “driv[ing] certain ideas or viewpoints from the marketplace” is “attenuated” or “inconsequential, so that strict scrutiny is unwarranted”). Just two Terms ago, for example, the Court declined to apply strict scrutiny to a content-based but viewpoint-neutral trademark restriction. See Vidal v. Elster, 602 U. S. 286, 295 (2024); id., at 312 (BARRETT, J., concurring in part); id., at 329–330 (SOTOMAYOR, J., concurring in judgment). In the trademark context, as in some others, experience and reason alike showed “no significant danger of idea or viewpoint” bias. R. A. V., 505 U. S., at 388.
The same may well be true of content-based but viewpoint-neutral laws regulating speech in doctors’ and counselors’ offices.* Medical care typically involves speech, so the regulation of medical care (which is, of course, pervasive) may involve speech restrictions. And those restrictions will generally refer to the speech’s content. Cf. Reed, 576 U. S., at 177 (Breyer, J., concurring in judgment) (noting that “[r]egulatory programs” addressing speech “inevitably involve content discrimination”). But laws of that kind may not pose the risk of censorship—of “official suppression of ideas”—that appropriately triggers our most rigorous review. R. A. V., 505 U. S., at 390. And that means the “difference between viewpoint-based and viewpoint-neutral content discrimination” in the health-care context could prove “decisive.” Vidal, 602 U. S., at 330 (opinion of SOTOMAYOR, J.). Fuller consideration of that question, though, can wait for another day. We need not here decide how to assess viewpoint-neutral laws regulating health providers’ expression because, as the Court holds, Colorado’s is not one. [Concurrence p.3-4]
Ultimately, despite all of these concerns, the decision is still a good one that will leave everyone better off, and not just in cases that reach the Supreme Court but in every state and federal court hearing every challenge to laws that try to penalize certain views, including those accepting of LGBTQ+ identities. A contrary decision, one that allowed a rational basis standard to be the test of such a law’s constitutionality, could have been used to defend laws that, instead of fighting LGBTQ+ prejudice as this one tried to do, advanced it. As Texas illustrates, there are already examples of government actors attempting to impose their biased viewpoints via licensing requirements for therapists. This decision, even if it may stand as an individual reflection of LGBTQ+ animus by this Supreme Court, still makes further state action motivated by that animus much harder for any government actor to impose.
I want to share a story of struggle. Actually, two kinds of struggle.
My father completed his doctorate at the University of Utah in the early 1970s. For his dissertation, he ran a statistical analysis on genealogical records to determine the impact of certain economic conditions on family size.
He accomplished this on one of the most advanced computers of the time. His method? Literally punching out little rectangles in dozens of stiff paper cards, and feeding the stack into the computer.
My father was a lowly graduate student, and because the demand for computing time at the university was sky high, he had to run his analysis in the middle of the night. He spent many nights punching cards and running them through the machine. Even a single mispunch would cause the entire program to stop running and require painstaking troubleshooting, re-punching, and another night at the computer lab.
Unproductive vs. Productive Struggle
The soul-sapping sleep deprivation and endless paper punching that stood between my father and his goals represent the first kind of struggle in my story: unproductive struggle, the challenging, unavoidable tasks we must perform toward a learning goal, but which add no value to the intellectual outcome.
The real intellectual challenge in my father’s work was in deciding which variables belonged in the model, determining how to represent economic conditions over time, and interpreting the data. This is the second kind of struggle: productive struggle. That is, the effort a learner expends to make sense of concepts, to figure something out that is not immediately apparent. This struggle leads to growth and insight. It builds judgment, expertise and understanding.
What is frustrating about my father’s story in hindsight is that so much of his time and cognitive energy were consumed by the unproductive struggle of punching cards and managing the computer. Without those barriers, he would have had more capacity for the productive struggle that leads to meaningful learning.
Thinking About What Matters
When it comes to AI in schools, some educators fear that it will lead to learning becoming too easy. This is referred to as “cognitive laziness.” The assumption is that we will offload our thinking to AI and eventually lose our ability to think critically. This is a risk with any technology that makes our mental work more efficient, and AI is uniquely adept at taking on cognitively demanding tasks. But ceding our reasoning power to AI isn’t a foregone conclusion. And simply not using AI in learning settings doesn’t have to be our solution for preserving our mental capacities.
Just as better computing tools would have freed my father from punching cards without removing the intellectual rigor of his work, today’s tools, including AI, have the potential to offload unproductive struggle, while preserving, and even amplifying, the productive struggle that is central to learning.
Here’s an example: When reading comprehension is not the goal of a lesson but a necessary prerequisite — a student having to read an article to understand the causes of the French Revolution, for example — AI tools can adjust reading levels on the fly to assist learners who are below grade level or for whom English is not their first language. This allows them to focus on the history rather than on decoding the text.
Refining Rigor
So what does this mean for educators who are grappling with how to help students use AI effectively?
First, we need to remind ourselves and help our students understand that the goal of learning has never been to make learning easy. It is to make it meaningful. We must ensure that learners are spending their time wrestling with big ideas, not battling logistics or bogged down by rote tasks.
Second, educators need to face a hard truth about the assignments we give students. Many assignments contain a mix of productive and unproductive struggle, and we are not always very intentional about which is which. Under crushing time and resource pressure, we can become unreflective about the distinction between productive and unproductive work. We inherit assignments, reuse problem sets, and value rigor without always asking where the rigor actually lies.
If AI forces us to confront that, it may be one of the most useful disruptions education has experienced in decades.
For instance, requiring students to write citations according to a set format may feel rigorous, but the mechanical work of formatting has little to do with the intellectual work of evaluating sources and integrating evidence into an argument. Confronting that distinction requires us to redesign tasks, rethink assessments and, if necessary, let go of practices that feel rigorous but don’t meaningfully deepen understanding.
Sharpening Learning
If we do this well, AI won’t hollow out learning; it will sharpen it. It will give students more space to wrestle with ideas instead of mechanics, more time to interpret instead of transcribe, and more opportunity to make active sense of the world. It will give us a chance to be far more intentional about the kind of struggle we ask students to engage in.
In the end, AI won’t decide whether our students experience cognitive laziness or cognitive growth. We will decide that by how we design assignments and assessments, and by the choices we make about which AI tools to adopt and how we choose to use them.
This is our chance to weed out the punch cards and open up more time for students to struggle over things that truly matter.
Six months after renegotiating the contract that once barred it from independently pursuing frontier AI, Microsoft has released three in-house models that directly challenge the partner it spent $13 billion cultivating. MAI-Transcribe-1, MAI-Voice-1, and MAI-Image-2 are now available in Microsoft Foundry, and they do not carry OpenAI’s name anywhere on the label.
The models are the first publicly released output of the MAI Superintelligence team that Mustafa Suleyman, CEO of Microsoft AI, formed in November 2025 with a stated mission of pursuing what the company calls “humanist superintelligence.” In a March internal memo first reported by Business Insider, Suleyman wrote that he intended to focus all of his energy on superintelligence and deliver world-class models for Microsoft over the next five years. That ambition now has its first tangible evidence.
MAI-Transcribe-1 is, on paper, the most immediately disruptive of the three. The speech-to-text model claims the lowest word error rate across 25 languages on the FLEURS benchmark, averaging 3.8 per cent, and Microsoft says it outperforms OpenAI’s Whisper-large-v3 on all 25 languages, Google’s Gemini 3.1 Flash on 22 of 25, and ElevenLabs’ Scribe v2 on 15 of 25. It runs 2.5 times faster than Microsoft’s previous Azure Fast transcription service and is priced at $0.36 per hour of audio. Perhaps most revealing is the team that built it: just 10 people.
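The per-hour pricing translates directly into a usage-cost estimate. A minimal sketch for sanity-checking the economics (the $0.36/hour rate comes from the article; the 5,000-hour monthly volume is a purely hypothetical example figure):

```python
# Back-of-the-envelope cost estimate for MAI-Transcribe-1 usage.
# The rate below is the article's stated price; the example volume
# is hypothetical, not a figure from the article.

RATE_PER_AUDIO_HOUR = 0.36  # USD per hour of audio transcribed


def transcription_cost(audio_hours: float) -> float:
    """Estimated cost in USD to transcribe the given hours of audio."""
    return audio_hours * RATE_PER_AUDIO_HOUR


# Hypothetical example: a 5,000-hour monthly call-center archive.
print(f"${transcription_cost(5000):,.2f}")
```

At that rate, even large archives stay in four-figure territory per month, which is presumably part of the model's pitch against per-seat transcription services.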
MAI-Voice-1 completes the audio loop. The text-to-speech model generates 60 seconds of natural-sounding audio in under one second on a single GPU and supports custom voice creation from a few seconds of sample audio. Combined with MAI-Transcribe-1 and a large language model of the customer’s choosing, it forms a complete voice pipeline that runs entirely on Microsoft infrastructure without any dependency on OpenAI’s technology.
MAI-Image-2, the oldest of the three, had already debuted at number three on the Arena.ai text-to-image leaderboard in March, placing it behind only Google’s Gemini 3.1 Flash and OpenAI’s GPT Image 1.5. The model was developed in collaboration with photographers, designers, and visual storytellers, and WPP, one of the world’s largest marketing groups, is among the first enterprise partners building with it at scale.
The strategic context matters more than the benchmarks. Until the September 2025 renegotiation, Microsoft’s original partnership agreement with OpenAI contractually prevented the company from independently pursuing general AI development. The revised memorandum of understanding changed that calculus fundamentally. Microsoft retained licensing rights to everything OpenAI builds through 2032, gained $250 billion in new Azure cloud business commitments, and crucially won the freedom to build competing models. Suleyman acknowledged the pivot directly: the contract renegotiation, he said, enabled Microsoft to independently pursue its own superintelligence.
The timing is deliberate. Jacob Andreou, formerly a senior vice-president at Snap, took over as executive vice-president of Copilot on 17 March, freeing Suleyman from day-to-day product responsibilities. The MAI models landed barely two weeks later. Microsoft also hired Ali Farhadi, the former chief executive of the Allen Institute for AI, for Suleyman’s superintelligence team in March, a recruitment signal that the ambitions extend well beyond transcription and image generation.
For OpenAI, the development creates an awkward dynamic. Microsoft remains its single largest investor and its primary cloud infrastructure provider, and the two companies continue to share a platform in Foundry, which hosts both OpenAI and Microsoft models. But OpenAI’s own push into commercial monetisation is accelerating in parallel, and the relationship is beginning to resemble two companies orbiting the same market with overlapping products rather than a partnership with a clear division of labour. OpenAI’s $110 billion raise in February, backed by SoftBank, Nvidia, and Amazon, valued the company independently of Microsoft at a level that makes the original partnership framing increasingly anachronistic.
The broader AI model market is fragmenting along similar lines. Anthropic’s $30 billion raise at a $380 billion valuation established it as a credible third force in enterprise AI, with run-rate revenue of $14 billion. Google continues to iterate rapidly on Gemini. The era in which OpenAI was the only game in town for frontier AI capabilities, and Microsoft was content to be its exclusive distribution channel, is definitively over.
Microsoft Foundry, the platform formerly known as Azure AI Foundry and before that Azure AI Studio (the second rebrand in twelve months), now serves developers at more than 80,000 enterprises including 80 per cent of Fortune 500 companies. That distribution advantage is what makes the MAI model family strategically significant: Microsoft does not need to beat OpenAI on every benchmark to shift enterprise spending toward in-house models. It needs to be competitive enough that customers choose the integrated option over the third-party alternative, a dynamic that the past year of AI industry consolidation has made increasingly plausible.
Suleyman has said it will take another year or two before the superintelligence team produces frontier-class language models. What landed this week is the foundation: a multimodal toolkit that gives Microsoft its own voice, ears, and eyes independent of OpenAI. The $13 billion partnership is not ending. But the premise on which it was built, that Microsoft needed OpenAI to compete in AI, is being quietly dismantled one model release at a time.
A new NYT Strands puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Thursday’s puzzle instead then click here: NYT Strands hints and answers for Thursday, April 3 (game #761).
Strands is the NYT’s latest word game after the likes of Wordle, Spelling Bee and Connections – and it’s great fun. It can be difficult, though, so read on for my Strands hints.
Want more word-based fun? Then check out my NYT Connections today and Quordle today pages for hints and answers for those games, and Marc’s Wordle today page for the original viral word game.
SPOILER WARNING: Information about NYT Strands today is below, so don’t read on if you don’t want to know the answers.
NYT Strands today (game #762) – hint #1 – today’s theme
What is the theme of today’s NYT Strands?
• Today’s NYT Strands theme is… Early risers
NYT Strands today (game #762) – hint #2 – clue words
Play any of these words to unlock the in-game hints system.
SOLID
COIN
PROUD
BINGO
TWINS
SOIL
NYT Strands today (game #762) – hint #3 – spangram letters
How many letters are in today’s spangram?
• Spangram has 13 letters
NYT Strands today (game #762) – hint #4 – spangram position
What are two sides of the board that today’s spangram touches?
First side: left, 4th column
Last side: top, 5th column
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON’T WANT TO SEE THEM.
NYT Strands today (game #762) – the answers
The answers to today’s Strands, game #762, are…
DAFFODIL
TULIP
HYACINTH
CROCUS
SNOWDROP
SPANGRAM: SPRINGBLOSSOM
My rating: Hard
My score: 2 hints
I quickly abandoned my search for jobs that required an early morning start (mainly because I couldn’t think of any).
Fortunately, in my hunt for non-game words, I found “blossom” and then “spring” and stuck them together for today’s spangram.
However, after finding SPRINGBLOSSOM and the most obvious answer, DAFFODIL, I reached the limits of my floral knowledge and needed two hints to see me to a full bouquet.
Yesterday’s NYT Strands answers (Thursday, April 3, game #761)
GUAVA
ACAI
PINEAPPLE
LYCHEE
MANGO
PAPAYA
SPANGRAM: TROPICALFRUIT
What is NYT Strands?
Strands is the NYT’s not-so-new-any-more word game, following Wordle and Connections. Now a fully fledged member of the NYT’s games stable, it has been running for a year and can be played on the NYT Games site on desktop or mobile.
I’ve got a full guide to how to play NYT Strands, complete with tips for solving it, so check that out if you’re struggling to beat it each day.