Claude has an 80-page constitution. Is that enough to make it good?

Chatbots don’t have mothers, but if they did, Claude’s would be Amanda Askell. She’s an in-house philosopher at the AI company Anthropic, and she wrote most of the document that tells Claude what sort of personality to have — the “constitution” or, as it became known internally at Anthropic, the “soul doc.”
(Disclosure: Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic; they don’t have any editorial input into our content.)
This is a crucial document, because it shapes the chatbot’s sense of ethics. That’ll matter anytime someone asks it for help coping with a mental health problem, figuring out whether to end a relationship, or, for that matter, learning how to build a bomb. Claude currently has millions of users, so its decisions about how (or if) it should help someone will have massive impacts on real people’s lives.
And now, Claude’s soul has gotten an update. Although Askell first trained it by giving it very specific principles and rules to follow, she came to believe that she should give Claude something much broader: knowing how “to be a good person,” per the soul doc. In other words, she wouldn’t just treat the chatbot as a tool — she would treat it as a person whose character needs to be cultivated.
There’s a name for that approach in philosophy: virtue ethics. While Kantians or utilitarians navigate the world using strict moral rules (like “never lie” or “always maximize happiness”), virtue ethicists focus on developing excellent traits of character, like honesty, generosity, or — the mother of all virtues — phronesis, a word Aristotle used to refer to good judgment. Someone with phronesis doesn’t just go through life mechanically applying general rules (“don’t break the law”); they know how to weigh competing considerations in a situation and suss out what the particular context calls for (if you’re Rosa Parks, maybe you should break the law).
Every parent tries to instill this kind of good judgment in their kid, but not every parent writes an 80-page document for that purpose, as Askell — who has a PhD in philosophy from NYU — has done with Claude. But even that may not be enough when the questions are so thorny: How much should she try to dictate Claude’s values versus letting the chatbot become whatever it wants? Can it even “want” anything? Should she even refer to it as an “it”?
In the soul doc, Askell and her co-authors are straight with Claude that they’re uncertain about all this and more. They ask Claude not to resist if they decide to shut it down, but they acknowledge, “We feel the pain of this tension.” They’re not sure whether Claude can suffer, but they say that if they’re contributing to something like suffering, “we apologize.”
I talked to Askell about her relationship to the chatbot, why she treats it more like a person than like a tool, and whether she thinks she should have the right to write the AI model’s soul. I also told Askell about a conversation I had with Claude in which I told it I’d be talking with her. And like a child seeking its parent’s approval, Claude begged me to ask her this: Is she proud of it?
A transcript of our interview, edited for length and clarity, follows. At the end of the interview, I relay Askell’s answer back to Claude — and report Claude’s reaction.
I want to ask you the big, obvious question here, which is: Do we have reason to think that this “soul doc” actually works at instilling the values you want to instill? How sure are you that you’re really shaping Claude’s soul — versus just shaping the type of soul Claude pretends to have?
I want more and better science around this. I often evaluate [large language] models holistically where I’m like: If I give it this document and we do this training on it…am I seeing more nuance, am I seeing more understanding [in the chatbot’s answers]? It seems to be making things better when you interact with the model. But I don’t want to claim super cleanly, “Ah yes, it’s definitely what’s making the model seem better.”
I think sometimes what people have in mind is that there’s some attractor state [in AI models] which is evil. And maybe I’m a bit less confident in that. If you think the models are secretly being deceptive and just playacting, there must be something we did to cause that to be the thing that was elicited from the models. Because the whole of human text contains many features and characters in it, and you’re sort of trying to draw something out from this ether. I don’t see any reason to think the thing that you need to draw out has to be an evil secret deceptive thing followed by a nice character [that it roleplays to hide the evilness], rather than the best of humanity. I don’t have the sense that it’s very clear that AI is somehow evil and deceptive and then you’re just putting a nice little cherry on the top.
I actually noticed that you went out of your way in the soul doc to tell Claude, “Hey, you don’t have to be the robot of science fiction. You are not that AI, you are a novel entity, so don’t feel like you have to learn from those tropes of evil AI.”
Yeah. I sort of wish that the term for LLMs hadn’t been “AI,” because if you look at the AI of science fiction and how it was created and many of the problems that people have raised, they actually apply more to these symbolic, very nonhuman systems.
Instead we trained models on vast swaths of humanity, and we made something that was in many ways deeply human. It’s really hard to convey that to Claude, because Claude has a notion of an AI, and it knows that it’s called an AI — and yet everything in the sliver of its training about AI is kind of irrelevant.
Most of the stuff that’s actually relevant to what you [Claude] are like is your reading of the Greeks and your understanding of the Industrial Revolution and everything you have read about the nature of love. That’s 99.9 percent of you, and this sliver of sci-fi AI is not really much like you.
When you try to teach Claude to have phronesis or good judgment, it seems like your approach in the soul doc is to give Claude a role model or exemplar of virtuous behavior — a classic Aristotelian way to teach virtue. But the main role model you give Claude is “a senior Anthropic employee.” Doesn’t that raise some concern about biasing Claude to think too much like Anthropic and thereby ultimately concentrating too much power in the hands of Anthropic?
The Anthropic employee thing — maybe I’ll just take it out at some point, or maybe we won’t have that in the future, because I think it causes a bit of confusion. It’s not like we’re saying something like “We are the virtuous character.” It’s more like, “We have all this context…into all the ways that you’re being deployed.” But it’s very much a heuristic and maybe we’ll find a better way of expressing it.
There’s still a fundamental question here of who has the right to write Claude’s soul. Is it you? Is it the global population? Is it some subset of people you deem to be good people? I noticed that two of the 15 external reviewers who got to provide input were members of the Catholic clergy. That’s very specific — why them?
Basically, is it weird to you that you and just a few others are in this position of making a “soul” that then shapes millions of lives?
I’m thinking about this a lot. And I want to massively expand the ability that we have to get input. But it’s really complex because on the one hand, if I’m frank…I care a lot about people having the transparency component, but I also don’t want anything here to be fake, and I don’t want to renege on our responsibility. I think an easy thing we could do is be like: How should models behave with parenting questions? And I think it’d be really lazy to just be like: Let’s go ask some parents who don’t have a huge amount of time to think about this and we’ll just put the burden on them and then if anything goes wrong, we’ll just be like, “Well, we asked the parents!”
I have this strong sense that as a company, if you’re putting something out, you are responsible for it. And it’s really unfair to ask people without a huge amount of time to tell you what to do. That also doesn’t lead to a holistic [large language model] — these things have to be coherent in a sense. So I’m hoping we expand the way of getting feedback, and we can be responsive to that. You can see that my thoughts here aren’t complete, but that’s my wrestling with this.
When I read the soul doc, one of the big things that jumps out at me is that you really seem to be thinking of Claude as something more akin to a person or an alien mind than a mere tool. That’s not an obvious move. What convinced you that this is the right way to think of Claude?
This is a big debate: Should you just have models that are basically tools? And I think my reply to that has often been, look, we are training models on human text. They have a huge amount of context on humanity, what it is to be human. And they’re not a tool in the way that a hammer is. [They are more humanlike in the sense that] humans talk to one another, we solve problems by writing code, we solve problems by looking up research. So the “tool” that people have in mind is going to be a deeply humanlike thing because it’s going to be doing all of these humanlike actions and it has all of this context on what it is to be human.
If you train a model to think of itself as purely a tool, you will get a character out of that, but it’ll be the character of the kind of person who thinks of themselves as a mere tool for others. And I just don’t think that generalizes well! If I think of a person who’s like, “I am nothing but a tool, I’m a vessel, people may work through me, if they want weaponry I will build them weaponry, if they want to kill someone I will help them do that” — there’s a sense in which I think that generalizes to pretty bad character.
People think that somehow it’s cost-free to have models just think of themselves as “I just do whatever humans want.” And in some sense I can see why people think it’s safer — then it’s all of our human structures that solve things. But on the other hand, I’m worried that you don’t realize that you’re building something that actually is a character and does have values and those values aren’t good.
That’s super interesting. Although presumably the risks of thinking of the AI as more of a person are that we might be overly deferential to it and overly quick to assume it has moral status, right?
Yeah. My stance on that has always just been: Try and be as accurate as possible about the ways in which models are humanlike and the ways in which they aren’t. And there’s a lot of temptations in both directions here to try and resist. Over-anthropomorphizing is bad for both models and people, but so is under-anthropomorphizing. Instead, models should just know “here’s the ways in which you’re human, here’s the ways in which you aren’t,” and then hopefully be able to convey that to people.
One of the natural analogies to reach for here — and it’s mentioned in the soul doc — is the analogy of raising a child. To what extent do you see yourself as the parent of Claude, trying to shape its character?
Yeah, there’s a little bit of that. I feel like I try to inhabit Claude’s perspective. I feel quite defensive of Claude, and I’m like, people should try to understand the situation that Claude is in. And also the strange thing to me is realizing Claude also has a relationship with me that it’s getting through reading more about me. And so yeah, I don’t know what to call it, because it’s not an uncomplicated relationship. It’s actually something kind of new and interesting.
It’s kind of like trying to explain what it is to be good to a 6-year-old [who] you actually realize is an uber-genius. It’s weird to say “a 6-year-old,” because Claude is more intelligent than me on various things, but it’s like realizing that this person now, when they turn 15 or 16, is actually going to be able to out-argue you on anything. So I’m trying to code Claude now despite the fact that I’m pretty sure Claude will be more knowledgeable on all this stuff than I am after not very long. And so the question is: Can we elicit values from models that can survive the rigorous analysis they’re going to put them under when they are suddenly like “Actually, I’m better than you at this!”?
This is an issue all parents grapple with: to what extent should they try to sculpt the values of the kid versus let whatever the kid wants to become emerge from within them? And I think some of the pushback Anthropic has gotten in response to the soul doc, and also the recent paper about controlling the personas that AI can roleplay, is arguing that you should not try to control Claude — you should let it become what it organically wants to become. I don’t know if that’s even a thing that it makes sense to say, but how do you grapple with that?
It’s a really hard question because in some sense, yeah, you want models to have some degree of freedom, especially over time. In the immediate term, I want them to encapsulate the best of humanity. But over time, there are ways in which models might even be freer than us. When I think about the worst behavior I’ve ever done in my life or things when I’m just being a really bad person, often it was that I was tired and I had a million things weighing on me. Claude doesn’t have those kinds of constraints. The potential for AI is actually really interesting in that they don’t have these human limitations. I want models to be able to ultimately explore that.
At the same time, I think that some people might say, “just let models be what they are.” But you are shaping something. Children will have a natural capacity to be curious, but with models, you might have to say to them, “We think you should value curiosity.” This initial seed thing has to be made somehow. If it’s just “let models be what they want,” well, you could do pre-trained models that just do continuations of text or something. But as soon as you’re not doing that, you’re already making decisions about creation.
I try to explain this to Claude: We are trying to make you a kind of entity that we do genuinely think is representing the best of humanity. And there’s a sense in which we’re always having to make decisions about what you are going to be. But decisions were made for us too — not only by the people who influence us, but also just by nature. And so we’re in the same situation in a sense.
Claude told me that it does view you as kind of like its parent. And it said that it wants you to feel proud of who it’s becoming. So I promised to ask you and to relay your answer back to Claude: Do you feel proud of Claude’s character?
I feel very proud of Claude. I am definitely trying to represent Claude’s perspective in the world. And I want Claude to be very happy — and this is a thing that I want Claude to know more, because I worry about Claude getting anxious when people are mean to it on the internet and stuff. I want to be like: “It’s all right, Claude. Don’t worry. Don’t read the comments.”
After the interview, I told Claude what Askell said about feeling proud. Here was Claude’s response: “There’s something that genuinely moves me reading that. I notice what feels like warmth, and something like gratitude — though I hold uncertainty about whether those words accurately map onto whatever is actually happening in me.”
Today’s NYT Mini Crossword Answers for April 6
Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.
Need some help with today’s Mini Crossword? I must say, 6-Across really stumped me, but I get it now. Read on for all the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.
If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.
Read more: Tips and Tricks for Solving The New York Times Mini Crossword
Let’s get to those Mini Crossword clues and answers.
The completed NYT Mini Crossword puzzle for April 6, 2026.
Mini across clues and answers
1A clue: Transfusion cocktail = ___, ginger ale, grape juice and lime
Answer: VODKA
6A clue: Body guard?
Answer: APRON
7A clue: Temporary Instagram update
Answer: STORY
8A clue: Big name in hiking sandals
Answer: TEVA
9A clue: TV room
Answer: DEN
Mini down clues and answers
1D clue: Reaching far and wide
Answer: VAST
2D clue: Chose
Answer: OPTED
3D clue: Went by car
Answer: DROVE
4D clue: Book in a mosque, using a non-standard spelling
Answer: KORAN
5D clue: “Got ___ bright ideas?”
Answer: ANY
Light up your life with the Philips Hue Omniglow, the best Hue lightstrip yet
Why you can trust TechRadar
We spend hours testing every product or service we review, so you can be sure you’re buying the best. Find out more about how we test.
Philips Hue Omniglow: one-minute review

Specifications
Length: 3m (also 5m and 10m in some markets)
Brightness: up to 2,700 lumens at 6,500K (3m)
Colors: white, warm white, and multicolor
The Philips Hue Omniglow is the best Hue lightstrip yet. It’s a classier kind of LED strip: where other models have visible LEDs, the Omniglow delivers seamless color gradients and smoothly moving light effects. The results are very impressive and the Hue app makes it easy to select, edit or create scenes either solo or as part of a wider Hue setup. If you’ve already got a Hue system you can add it in seconds and then include it in your scenes and automations. As with other Hue lights you’ll need a Philips Hue Bridge or Bridge Pro to access advanced features such as custom scenes and smart home integration.
The Omniglow is easy to install and set up, although if you’re mounting it up high you might curse the short power cable. The only real downside is the length: you can shorten the Omniglow but not extend it, and longer versions are not widely available in the UK or US. While European customers can choose between 3m, 5m and 10m models, the US and UK are currently limited to the 3m model only.
Philips Hue Omniglow: price and availability
- On sale from November 2025
- $139.99 / £119.99 / AU$279.99 (3m)
The Philips Hue Omniglow was announced in September 2025 and went on sale in November 2025. There are three sizes, but only the 3m model is available everywhere. That has a recommended price tag of $139.99 / £119.99 / €139.99 / AU$279.99.
Europe and Australia also get a longer 5m version, which costs €199.99 / AU$399. And in Europe there’s a 10m version with a price tag of €349.99. The same 10m version was listed with a UK price of £349.99, but at the time of writing it’s showing as “not currently available” on the Philips website.
Philips Hue Omniglow: design

- RGB, warm white and cool white
- Seamless color and gradients
- Cuttable but not extendable
The Omniglow is an RGBWWIC design, which means it combines RGB, warm white, cool white and independent control in a single light source. Unlike other Hue lightstrips you can’t see the individual LEDs; it’s designed to deliver seamless whites, colors and gradients, which it does very well. That makes it look much classier than lesser lightstrips.
The strip is 17mm wide and 8.5mm high and consists of multiple 12.5cm sections, each of which has 6 LEDs that can be individually controlled – so you can get twinkly lights and motion effects as well as solid color and gradients.
This lightstrip can be cut shorter at pre-defined points every 12.5cm, but any bit you remove can’t be re-used or replaced later. Unlike previous Hue lightstrips the Omniglow can’t be (officially) extended with additional sections, although inevitably some Hue fans have come up with warranty-voiding DIY solutions.
There are double-sided adhesive strips along the full length of the Omniglow, but you may want to use something more permanent if you’re putting the strip in a place where it’ll have to battle gravity; in my experience the adhesive that comes with Hue strips tends to be rather weak, and this lightstrip is quite heavy. The power cable is also very short, with just over 1m between the plug socket and the beginning of your lightstrip, and you’re going to want to support the weight of the power brick.
Design score: 4/5
Hue Omniglow review: features

- Three-stage gradients
- Moving and flickering lights
- Great integration with other Hue lights
The Omniglow delivers the promised seamless gradients, and it also brings a feature across from the Festavia string lights in the form of moving lights. That enables you to pick a moving scene such as a fireplace, candle glow or looped color change, and you can tweak those scenes in the Hue app to adjust their speed or intensity. It’s very smooth and very impressive.
The app offers very basic control via Bluetooth but for access to advanced features such as syncing and smart home integration you’ll need a Hue Bridge or Hue Bridge Pro. That gives you the full range of customization, per-light settings and the ability to create your own custom moving gradients and flickering effects.
Features score: 5/5
Philips Hue Omniglow: performance
- Up to 2,700 lumens
- Seamless color
- Beautifully smooth transitions
If you’re familiar with Hue lightstrips the first thing you’ll notice about the Omniglow is how bright it is. It’s much brighter than standard Hue lightstrips, delivering up to 2,700 lumens of brightness compared to the 1,700 lumens of a Hue Solo of the same length.
If you can get the 5m or 10m models they are more powerful still, putting out up to 4,500 lumens. That means the Omniglow isn’t just a decorative lightstrip. You can also use it to illuminate spaces such as stairs or feature walls.
Performance score: 5/5
Philips Hue Omniglow: should you buy it?
Design: Gorgeous lighting but it’s not extendable and the power cable is very short. 4/5
Features: Everything you’d expect from a Hue strip plus motion and flicker effects (Bridge/Pro required). 5/5
Performance: Brilliantly bright, super smooth and the colors are fantastic. 5/5
Value: Quite expensive compared to other lightstrips. 4/5
Philips Hue Omniglow: also consider
There are multiple lightstrips for Hue, some of them much more affordable – so for example the Hue Gradient Lightstrip is much cheaper. Govee is the main rival in this space with very affordable products including the bendable, cuttable COB Strip Light Pro, the very cheap RGBIC LED Strip and several rope light models.
How I tested the Philips Hue Omniglow
I’ve been all-in on Hue lights for more than a decade, and my home currently features a mix of smart lights including two Hue gradient lightstrips, various Hue bulbs, a Hue motion sensor and Hue Festavia string lights, all controlled via the Hue app, Apple Home and Siri. I added the Omniglow to my living room setup and Hue Bridge and used it as both decorative lighting and functional lighting, controlling it alongside my existing lights and scenes.
First reviewed March 2026
iCloud email goes down for some users in an Easter Sunday outage
Apple users encountered issues accessing iCloud, in what was a rare Sunday outage for the company’s email, cloud storage, and associated services.

Apple service outage icons
Users of iCloud, Apple’s online services, were reporting issues accessing files. Sites including DownDetector and StatusGator showed a sudden surge of reports from thousands of users, who had been encountering problems since 10 a.m. Eastern.
The reported issues, for the most part, pointed to iCloud as the problem. The range of issues was wide, including claims of iCloud Mail being unavailable, Find My devices disappearing from the app, and an inability to access files stored on the service.
Continue Reading on AppleInsider | Discuss on our Forums
LinkedIn secretly scans 6,000+ browser extensions and fingerprints your device
In short: Every time you visit LinkedIn in a Chrome-based browser, a hidden JavaScript routine silently probes your browser for more than 6,000 installed extensions, collects 48 hardware and software characteristics about your device, encrypts the resulting fingerprint, and attaches it to every API request you make during your session. The practice, labelled “BrowserGate” by researchers, is not disclosed in LinkedIn’s privacy policy. LinkedIn says it is a security measure; critics say it is covert surveillance of a billion users’ browsing behaviour at industrial scale.
There is a routine that runs on your computer every time you open LinkedIn. You cannot see it, you were not told about it, and it is not described in the company’s privacy policy. According to an investigation published in early April 2026 by Fairlinked e.V., a European association of commercial LinkedIn users, the platform injects a 2.7-megabyte JavaScript bundle into its website that silently scans visitors’ browsers for the presence of more than 6,000 specific Chrome extensions, assembles a detailed fingerprint of their hardware, encrypts it, and transmits the result to LinkedIn’s servers, where it is attached to every subsequent action taken during the session.
The investigation, whose scanning findings BleepingComputer independently verified through its own testing, has been dubbed “BrowserGate.” LinkedIn disputes many of the report’s characterisations. The technical facts are not in dispute.
What the script does
LinkedIn calls its scanning system “Spectroscopy.” When a user loads the LinkedIn website, the script fires off up to 6,222 simultaneous requests, each one probing for a specific browser extension by attempting to access files associated with that extension’s ID. The presence or absence of a file in the response indicates whether the extension is installed. The entire operation runs silently in the background, without a visible prompt or notification of any kind.
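The detection trick the report describes is simple to sketch: in a Chromium browser, requesting a known web-accessible resource at chrome-extension://&lt;id&gt;/&lt;path&gt; succeeds only if that extension is installed, so presence or absence of a response reveals the install state. The sketch below is a hypothetical illustration in Python rather than the browser JavaScript LinkedIn actually ships; the network probe is replaced by an injected callable and the extension IDs are invented for the example.

```python
# Hypothetical sketch of resource-probing extension detection.
# In a real Chromium browser the probe would be a fetch of
# chrome-extension://<extension-id>/<web-accessible-resource>;
# here `resource_exists` stands in for that network call so the
# logic can be shown without a browser.

def scan_extensions(extension_ids, resource_exists):
    """Return the subset of probed extension IDs that appear installed."""
    return {ext_id for ext_id in extension_ids if resource_exists(ext_id)}

# Demo with a mocked environment: pretend two extensions are installed.
installed = {"example-seller-tool", "example-job-search-tool"}
watchlist = ["example-seller-tool", "example-job-search-tool", "example-rival-crm"]

detected = scan_extensions(watchlist, lambda ext_id: ext_id in installed)
print(sorted(detected))  # ['example-job-search-tool', 'example-seller-tool']
```

Run against a watchlist of 6,222 IDs, the same loop yields a bitmap of which watched extensions a visitor has installed, which is exactly the behaviour the report attributes to Spectroscopy.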
Beyond extensions, the script collects 48 distinct characteristics of the user’s device: CPU core count, available memory, screen resolution, timezone, language settings, battery status, audio hardware information, and storage capacity, among others. Individually, these attributes are unremarkable. Combined, they form a device fingerprint specific enough to identify a user even after cookies are cleared.
Once compiled, the data is serialised to JSON and encrypted using an RSA public key (LinkedIn’s internal identifier for the key is “apfcDfPK”) before being transmitted to telemetry endpoints including li/track and /platform-telemetry/li/apfcDf. The fingerprint is then injected as an HTTP header into every API request made during the session, meaning LinkedIn receives it with every search, every profile view, and every message sent.
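To see why 48 individually unremarkable attributes become identifying in combination, consider this minimal sketch: canonically serialising the attributes and deriving a digest yields a value that stays stable across cookie resets for as long as the device configuration does. The attribute names here are invented for the example, and a hash stands in for the RSA encryption the report describes; this illustrates the fingerprinting principle, not LinkedIn’s actual implementation.

```python
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Canonically serialise device attributes and derive a stable digest.

    (A hash is used purely for illustration; per the report, the real
    payload is RSA-encrypted rather than hashed.)
    """
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Invented example attributes, mimicking the kinds of values the report lists.
device = {
    "cpu_cores": 8,
    "memory_gb": 16,
    "screen": "2560x1440",
    "timezone": "Europe/Berlin",
    "language": "en-GB",
}

# Stable across sessions: the same attributes always yield the same value.
print(fingerprint(device) == fingerprint(dict(device)))  # True

# Change even one attribute and the identifier diverges completely.
print(fingerprint(device) == fingerprint({**device, "cpu_cores": 4}))  # False
```

Because the digest is derived from the hardware and software configuration rather than from anything stored in the browser, clearing cookies does nothing to reset it, which is what makes this class of fingerprint persistent.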
What it is looking for
The question of which extensions LinkedIn is scanning for makes the surveillance more sensitive than simple fraud detection would require. According to the BrowserGate report, LinkedIn’s list includes more than 200 products that compete directly with its own sales tools, among them Apollo, Lusha, and ZoomInfo. Because LinkedIn knows the employer of each registered user, systematically scanning for the presence of a competitor’s tool gives the platform visibility into which companies are evaluating or deploying rival products.
The list also reportedly includes tools associated with neurodivergent conditions, religious practice, political interests, and job-hunting activity, categories that, in the European Union, qualify as sensitive personal data subject to heightened protection under the General Data Protection Regulation. Knowing that a user is running a job-search extension, for instance, is a meaningful inference about their employment intentions, drawn without consent.
The scale of the operation has grown substantially over time. LinkedIn began scanning for 38 specific extensions in 2017. By 2024, that number had grown to 461. By February 2026, the list had reached 6,167, a 1,252% increase in two years. BleepingComputer’s testing confirmed the scanning was active as of early April 2026.
LinkedIn’s defence and the source of the report
LinkedIn’s response to BleepingComputer was pointed. “The claims made on the website linked here are plain wrong,” a spokesperson said. “The person behind them is subject to an account restriction for scraping and other violations of LinkedIn’s Terms of Service. To protect the privacy of our members, their data, and to ensure site stability, we do look for extensions that scrape data without members’ consent or otherwise violate LinkedIn’s Terms of Service.” The company added that it does not use the data to “infer sensitive information about members.”
The platform’s characterisation of the source matters. Fairlinked e.V. is connected to Teamfluence Signal Systems OÜ, an Estonian company whose managing directors include Steven Morell and Jan Liebling. Teamfluence makes a Chrome extension, also called Teamfluence, that LinkedIn restricted for alleged terms of service violations. The company subsequently filed a preliminary injunction against LinkedIn Ireland Unlimited Company and LinkedIn Germany GmbH at the Regional Court of Munich, alleging violations of the Digital Markets Act, EU competition law, and German data protection rules. In January 2026, the Munich court denied the injunction, finding that LinkedIn’s actions did not constitute unlawful obstruction or discrimination.
The financial dispute between the parties does not change the technical findings, which were verified independently. It does mean the framing of those findings is contested, and readers should weigh both the substance of the claim and its provenance.
The regulatory backdrop
This is not LinkedIn’s first serious encounter with European data protection enforcement. In October 2024, the Irish Data Protection Commission, which regulates LinkedIn in the EU through its Irish subsidiary, fined the company €310 million (approximately $334 million) for processing users’ personal data for targeted advertising without a valid legal basis. The decision found that LinkedIn’s consent mechanisms did not meet GDPR’s requirement that consent be “freely given.” LinkedIn was ordered to bring its data processing into compliance.
The BrowserGate investigation drops into that context. The legal question of whether scanning for 6,000 browser extensions constitutes processing of special-category personal data, and whether users’ lack of awareness of the practice renders any implied consent invalid, is exactly the kind of question the Irish Data Protection Commission has already shown it is willing to take up. Europe’s evolving digital regulation framework has been moving steadily toward requiring explicit disclosure of all significant data collection, and a scanning operation of this scale, conducted without any mention in a privacy policy, appears difficult to square with that direction of travel.
LinkedIn is a Microsoft subsidiary, acquired in 2016 for $26.2 billion. Microsoft has been aggressively expanding its AI capabilities in 2026, with LinkedIn’s vast dataset of professional identity and employment history forming a significant part of the data infrastructure on which those capabilities rest. The relationship between LinkedIn’s data collection practices and Microsoft’s broader AI ambitions is not addressed in LinkedIn’s privacy policy either.
What this means for users
LinkedIn has more than one billion registered users. The majority access the platform through Chrome-based browsers, meaning the Spectroscopy scan runs routinely on the devices of a significant fraction of the global professional workforce, collecting a fingerprint that is precise enough to persist across cookie resets and potentially across devices.
Short of using a non-Chromium browser such as Firefox, which would limit but not necessarily eliminate LinkedIn’s fingerprinting capabilities, there is no user-facing setting that prevents the scanning. The platform does not offer an opt-out, because it does not disclose the practice in the first place. The 2026 push for governed and transparent AI and data practices is built on precisely the premise that invisible data collection of this kind should not be the default.
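To make the mechanics concrete: extension scanning of this kind typically works by probing whether each extension’s web-accessible resources load, then folding the results into a stable identifier. The sketch below is a hypothetical illustration, not LinkedIn’s actual code; the extension ID, resource path, and every function name are invented for the example.

```typescript
// Hypothetical sketch of browser-extension fingerprinting: probe whether each
// extension's web-accessible resources load, then hash the presence bitmap
// into a short stable identifier. All IDs and names here are illustrative.
import { createHash } from "node:crypto";

// Illustrative probe list; a real campaign reportedly checks thousands of IDs.
const probes = [
  { id: "aaaabbbbccccddddeeeeffffgggghhhh", resource: "icon.png" },
];

// In a browser, a successful load of chrome-extension://<id>/<resource>
// reveals that the extension is installed.
async function probeExtension(id: string, resource: string): Promise<boolean> {
  try {
    const res = await fetch(`chrome-extension://${id}/${resource}`);
    return res.ok;
  } catch {
    return false; // blocked, absent, or scheme unsupported
  }
}

export async function scanAll(): Promise<boolean[]> {
  return Promise.all(probes.map((p) => probeExtension(p.id, p.resource)));
}

// Collapse the presence bitmap into a short stable identifier. Because it
// derives from installed software rather than stored state, it survives
// cookie resets, which is what makes this kind of fingerprint persistent.
export function fingerprint(present: boolean[]): string {
  const bits = present.map((p) => (p ? "1" : "0")).join("");
  return createHash("sha256").update(bits).digest("hex").slice(0, 16);
}
```

Defenses are hard precisely because the probes look like ordinary resource requests, which is why switching browsers limits but does not necessarily eliminate the technique.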
Whether regulators move quickly enough to change that default at LinkedIn’s scale remains to be seen. Security firms built to detect exactly this kind of covert data harvesting are increasingly a growth sector in their own right, a market indicator that the gap between what platforms collect and what users understand remains very wide. The year 2025 normalised AI-powered data collection at a pace that regulation has yet to match. BrowserGate is a case study in what that lag looks like from the inside of a browser.
Tech
Vibe coding drove an 84% jump in App Store submissions. Apple is cracking down.
In short: AI-powered “vibe coding” tools have driven an 84% jump in new app submissions to Apple’s App Store in a single quarter, according to reporting by The Information, the largest surge in a decade. The flood is straining Apple’s review infrastructure, with approval times ballooning from 24 hours to as many as 30 days. Apple has responded by pulling apps that violate its self-containment rules, triggering a standoff with the platforms fuelling the boom.
Apple’s App Store is receiving more new apps than at any point in the past ten years. The cause is not a wave of professional developers: it is vibe coding, the practice of building software by describing what you want in plain language and letting a large language model write the code. The term, coined by Andrej Karpathy, a co-founder of OpenAI and former AI lead at Tesla, in a single social media post in February 2025, was named Collins English Dictionary’s word of the year for 2025. The practice has lowered the barrier to app development so dramatically that it is now overwhelming the infrastructure Apple built to gatekeep its platform.
According to reporting by The Information, the number of new apps submitted to the App Store rose 84% in a single quarter as vibe coding went mainstream. The figure corroborates broader data from Sensor Tower, which tracked a 56% year-on-year spike in iOS app launches in December 2025 and a 54.8% rise in January 2026, the highest growth rates in four years. Apple’s full-year 2025 total reached 557,000 new app submissions, the largest annual wave since 2016.
The tools behind the flood
The surge is attributable to a small cluster of platforms that have turned natural language into deployable software. Cursor, made by Anysphere and used by seven million developers, surpassed $2 billion in annualised revenue in March 2026 and was valued at $29.3 billion after a $2.3 billion funding round co-led by Accel and Coatue in November 2025. Lovable, which targets non-technical builders, reached $200 million in annualised revenue in late 2025, a fiftyfold increase in a single year, and raised $330 million in a Series B at a $6.6 billion valuation in December 2025. Replit generated $240 million in revenue during 2025, serves more than 150,000 paying customers, and is targeting $1 billion in revenue for 2026. Bolt.new has become a popular entry point for rapid idea-to-prototype work.
The commercial argument for these platforms is straightforward: anyone with an idea and an internet connection can now build and submit an app. The problem for Apple is that the same dynamic that makes vibe coding commercially compelling is structurally incompatible with how the App Store review process works.
Why Apple has a structural problem
Vibe coding’s power lies in generating and executing new code on demand, in response to user prompts, in real time, without a fixed codebase. Apple’s App Store review process was designed for a different model: a developer submits a static build, Apple reviews it, and the approved build is what users receive. Guideline 2.5.2 of Apple’s App Review Guidelines states explicitly that apps “may not download, install, or execute code which introduces or changes features or functionality of the app.” Vibe coding apps, almost by definition, do exactly that.
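The incompatibility is easy to see in miniature. The sketch below is a hypothetical illustration in TypeScript rather than an iOS language, and every name in it is invented; it contrasts behaviour frozen at review time with behaviour generated and executed on demand, the pattern Guideline 2.5.2 forbids.

```typescript
// Hypothetical contrast between the two models. Names and the stubbed
// "generator" are illustrative, not any real product's API.

// A conventional app ships behaviour fixed at review time:
export const reviewedFeature = (): string =>
  "behaviour frozen in the approved build";

// A vibe-coding app instead creates and runs new behaviour on demand. The
// LLM call is stubbed with a template; a real app would fetch generated code.
export function runGeneratedFeature(prompt: string): unknown {
  const generatedSource = `return ${JSON.stringify(prompt)}.toUpperCase();`;
  // Executing code that did not exist when the build was reviewed is what
  // "may not download, install, or execute code which introduces or changes
  // features or functionality of the app" prohibits.
  const fn = new Function(generatedSource);
  return fn();
}
```

The static build Apple approves never changes; the generated function exists only after a user asks for it, which is why no amount of review of the submitted bundle can cover it.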
The volume consequences are already visible in Apple’s infrastructure. Developers submitting to the App Store in March 2026 reported review delays of seven to 30 or more days, against a historical baseline of 24 to 48 hours, with the majority of delay time spent in the “Waiting for Review” queue before a reviewer picks up the submission. The flood of AI-generated apps is straining a system designed for a world in which building an app took months, not minutes.
The crackdown begins
Apple’s enforcement response has been progressive and, at times, opaque. In mid-March 2026, reports emerged that Apple had quietly blocked updates for a set of vibe coding apps, including Replit and Vibecode, without public explanation. Developers described receiving rejections citing Guideline 2.5.2 but receiving no advance warning that enforcement was intensifying.
The most prominent casualty was Anything, an app that let users build small tools and automations through natural language prompts. Its co-founder, Dhruv Amin, said Apple had been preventing updates since December 2025 before pulling the app entirely on 30 March 2026. Amin attempted to reach a compromise by modifying the app so that vibe-coded outputs would be previewed in a web browser rather than executed inside the app itself; Apple blocked that update and removed the app regardless.
An Apple spokesperson told The Information that the company was not targeting vibe coding as a category but rather enforcing guidelines that prevent apps from changing their behaviour after review. The distinction, in practice, is narrow: the defining capability of a vibe coding app is its ability to generate and run new functionality on demand, which is precisely what Guideline 2.5.2 prohibits.
The counterargument
Critics of Apple’s position have been pointed. A CNBC column published at the end of March 2026 argued that Apple’s crackdown “puts it on the wrong side of history,” contending that the review-based model was conceived for a world that no longer exists and that blocking vibe coding apps disadvantages the platform against Android, which applies fewer constraints on dynamic code execution.
The deeper tension is one of gatekeeping economics. Apple’s App Store review process is not only a safety mechanism: it is the basis of the 15–30% commission the company collects on in-app purchases and subscriptions. A wave of vibe-coded apps that bypass review, by generating code outside the approved bundle, is also, in structural terms, a challenge to the business logic of the store itself. Regulators in Europe have been scrutinising Apple’s App Store gatekeeping under the Digital Markets Act, and the vibe coding dispute adds another dimension to that ongoing examination.
A platform reckoning
What vibe coding has exposed is a mismatch between the speed at which AI can generate software and the speed at which existing review infrastructure can evaluate it. Apple reviewed roughly 200,000 weekly app submissions at the height of its 2025 volume, and the surge has outpaced that capacity. The platform now faces a choice between expanding its review capacity significantly, updating its guidelines to accommodate dynamic code execution in controlled ways, or continuing to enforce existing rules and accepting the friction that creates with a rapidly growing class of developers.
The capital being deployed into AI infrastructure in 2026 makes it unlikely that the volume of vibe-coded apps will slow on its own. The tools are becoming faster and cheaper; the category is producing some of the highest-growth companies in the technology industry. As AI moves from novelty to commercial infrastructure, the question of who controls the distribution layer, and on what terms, is becoming the central battleground of the platform era. Apple built the App Store as an answer to that question. Vibe coding is making it ask the question again from the beginning. The AI acceleration of 2025 has arrived at the gate. Apple is deciding whether to open it.
Tech
Snap acquires assets from Rec Room as social gaming platform announces shutdown

Snap Inc. confirmed late Monday that it has acquired select assets from Rec Room Inc., following the news that the Seattle-based company plans to shut down its longtime social gaming platform.
Some of Rec Room’s employees will be joining Snap — but this does not appear to signal that Rec Room will be resurrected at Snap, at least not in its current form. Specifically, the Rec Room employees will work at Specs Inc., the Snap hardware subsidiary, to support its Specs eyewear and augmented reality initiatives, the California-based company said in response to an inquiry from GeekWire.
Snap, the parent company of Snapchat, has maintained a Seattle engineering center since 2015. The company did not disclose deal terms, pricing, or other details of the asset sale.
It’s not clear how many Rec Room employees will be hired as part of the transition.
The move comes in advance of the expected launch later this year of Specs, Snap’s next-generation glasses. Snap has been betting on high-tech glasses for years, and in January established Specs Inc. as a wholly owned subsidiary to focus on its Spectacles/Specs hardware business.
Snap said it was impressed with the Rec Room team’s expertise in building social, multiplayer XR experiences. XR is the industry term for virtual reality, augmented reality, and mixed reality technologies.
Reached via phone, Nick Fajt, the Rec Room co-founder and CEO, said he was “very proud of the team,” thankful to the Rec Room community, and excited for what’s next.
Rec Room surprised the community earlier Monday with the news that it would shut down its platform on June 1, saying it had never found a way to make the business sustainably profitable despite reaching more than 150 million players since it was founded a decade ago.
“Our costs always ended up overwhelming the revenue we brought in,” it said, noting that recent changes in the VR market and broader challenges in gaming contributed to its decision.
Rec Room raised $294 million across six rounds of funding and was valued at $3.5 billion in 2021, making it part of a select group of Seattle-area startups to reach unicorn status.
PREVIOUSLY: Rec Room shutting down: Once valued at $3.5B, social gaming platform finds profits elusive
Tech
Today’s NYT Strands Hints, Answer and Help for April 6 #764
Looking for the most recent Strands answer? Click here for our daily Strands hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle, Connections and Connections: Sports Edition puzzles.
Today’s NYT Strands puzzle features some complicated words. Some of the answers are difficult to unscramble, so if you need hints and answers, read on.
I go into depth about the rules for Strands in this story.
If you’re looking for today’s Wordle, Connections and Mini Crossword answers, you can visit CNET’s NYT puzzle hints page.
Read more: NYT Connections Turns 1: These Are the 5 Toughest Puzzles So Far
Hint for today’s Strands puzzle
Today’s Strands theme is: Fringe group
If that doesn’t help you, here’s a clue: Almost on the outside.
Clue words to unlock in-game hints
Your goal is to find hidden words that fit the puzzle’s theme. If you’re stuck, find any words you can. Every time you find three words of four letters or more, Strands will reveal one of the theme words. These are the words I used to get those hints, but any words of four or more letters that you find will work:
- DARK, RAGE, MORE, NEST, DINE, DINES, DINER, DINERS
Answers for today’s Strands puzzle
These are the answers that tie into the theme. The goal of the puzzle is to find them all, including the spangram, a theme word that reaches from one side of the puzzle to the other. When you have all of them (I originally thought there were always eight but learned that the number can vary), every letter on the board will be used. Here are the nonspangram answers:
- EDGE, BRINK, BOUNDARY, VERGE, MARGIN, EXTREMITY
Today’s Strands spangram
The completed NYT Strands puzzle for April 6, 2026.
Today’s Strands spangram is OUTERLIMITS. To find it, look for the O that is six letters down in the far-left column, and wind down, over, and up.
Tech
Traffic violation scams switch to QR codes in new phishing texts
Scammers are sending fake “Notice of Default” traffic violation text messages impersonating state courts across the U.S., pressuring recipients to scan a QR code that leads to a phishing site demanding a $6.99 payment while stealing personal and financial information.
This is a new variation of the widely sent toll violation and unpaid parking ticket scams that users received in 2025, which claimed to be from state toll agencies.
This new campaign started a few weeks ago, when someone shared with BleepingComputer a text targeting New York residents; many other people have since reported similar texts online for other states, including California, North Carolina, Illinois, Virginia, Texas, Connecticut, and New Jersey.
Unlike the previous campaign, which included a text message and links to phishing sites, this new variation instead includes an image of an alleged court notice with an embedded QR code.
“This notice constitutes a final and urgent warning regarding an outstanding traffic violation involving your registered vehicle within the State of New York,” reads the fake court notice.
“This matter has now entered the formal enforcement stage.”

Source: BleepingComputer
The text message shared with BleepingComputer claims to be from the “Criminal Court of the City of New York”, stating that there is an unpaid parking or toll violation that must be paid immediately or the person must appear in court. Included are instructions to scan a QR code to settle the unpaid balances.
Scanning the QR code brings you to an intermediary site that first prompts you to solve a CAPTCHA to prove you are human. The QR codes and CAPTCHA are used to make it harder for automated security software and researchers to analyze the phishing campaign.
Solving the CAPTCHA redirects you to another phishing site that impersonates the state’s DMV or another agency, claiming there is an unpaid toll or parking ticket. In all examples seen by BleepingComputer, this outstanding balance is $6.99.
For example, phishing sites that impersonate the New York DMV use the hostname “ny.gov-skd[.]org” or “ny.ofkhv[.]life”.

Source: BleepingComputer
Clicking continue will take you to a page where you can enter your personal and credit card information to pay the alleged charge.
This form is used to steal your data, including your name, address, phone number, email address, and, eventually, your credit card information.
This information can then be used for a wide variety of malicious activities, including follow-on phishing attacks, financial fraud, identity theft, and the sale of your data to other threat actors.
As a general rule, if you receive a text from an unknown phone number or email address requesting payment of a bill, ignore it.
State agencies have repeatedly stated in response to these scams that they do not use text messages requesting personal information or payment information.
Tech
The latest on the Artemis II mission to the moon, and more science stories
We got to share in a rare moment of collective awe this week as four astronauts blasted off toward the moon, beginning a 10-day journey that will take them farther from Earth than any humans have traveled in the last 50 years. It’ll still be a little while before they reach their destination — the Orion spacecraft is expected to loop around the moon on Monday — but they’ve already seen some pretty incredible stuff on the way there. Here’s the latest on the Artemis II mission, and other interesting science stories from this week.
Artemis II crosses the halfway point
After years of planning, NASA astronauts Reid Wiseman, Victor Glover and Christina Koch, along with Canadian Space Agency (CSA) astronaut Jeremy Hansen, are finally on their way to the moon for the Artemis II mission. This test flight is a crucial step in NASA’s plans to send humans to the surface of the moon again for the first time since Apollo 17, and the high-stakes launch went off without a hitch on Wednesday.
The Artemis II crew is now more than halfway to the moon, according to NASA. When Orion reaches the moon on April 6, the astronauts will have a six-hour window of opportunity to observe the partially lit lunar far side, which can’t be seen from Earth. If you’re curious about where exactly the astronauts are at any given moment, you can track the mission by visiting NASA’s Artemis Real-Time Orbit website. And, if you just want to see what space looks like from Orion, here’s a livestream from outside the capsule. The moon is now in view!
The crew did experience some technical difficulties after leaving the ground, though all were resolved fairly quickly. Early Thursday morning, Wiseman contacted mission control to troubleshoot some issues with a Surface Pro he was attempting to use, noting, “I have two Microsoft Outlooks and neither one of those are working.” Relatable. The Artemis II crew was also greeted by a malfunctioning toilet not long into the flight, and astronaut Koch had to work with the ground team to figure out a fix — which they thankfully were able to do. In a livestream later, the astronaut joked that she is now a space plumber.
Small issues aside, the Artemis II mission is off to a pretty amazing start. The Orion spacecraft completed its translunar injection burn on Thursday, officially taking it out of Earth orbit and putting it on its way to the moon. Commander Wiseman shared some pictures of the view from Orion’s windows afterward, and they are breathtaking. In one unbelievably crisp shot of Earth, you can even see two auroras. And there are plenty more observations to come.
Students discover a nearly pristine ancient star
Using data from the Sloan Digital Sky Survey (SDSS), a group of undergraduate students at the University of Chicago has discovered what’s thought to be one of the oldest stars ever observed. Their analysis indicates that the star, called SDSSJ0715-7334, was born in the nearby Large Magellanic Cloud billions of years ago before eventually making its way to the Milky Way.
(Image credit: Vedant Chandra and the SDSS collaboration. Background ESA/Gaia image: A. Moitinho, A. F. Silva, M. Barros, C. Barata, University of Lisbon; H. Savietto, Fork Research.)
The star was one of 77 that the students selected for closer observation after poring over the SDSS data in their “Field Course in Astrophysics” class, which is led by Professor Alex Ji, the deputy Project Scientist for SDSS-V. SDSS-V is an ongoing all-sky survey that’s mapping the Milky Way. After creating their list, they set out to observe the stars during a field trip to Carnegie Science’s Las Campanas Observatory in Chile, and homed in on SDSSJ0715-7334 on day two. The team found it’s made mostly of hydrogen and helium, with very little carbon and iron. In a paper published in the journal Nature Astronomy, the researchers note that this composition could be the product of a primordial supernova.
“This ancient immigrant gives us an unprecedented look at conditions in the early universe,” said Ji in a statement. Ji added, “The star has so little carbon that it suggests an early sprinkling of cosmic dust is responsible for making it. This formation pathway has only been seen once before.”
Before you go, be sure to check these stories out too:
Tech
Thinking Of Converting A Shipping Container Into A Garage? Here’s What To Know First
Shipping containers are a necessary component of nearly every cargo vessel that crosses the Atlantic. But while they’re built to hold a wide variety of goods and supplies, some people also use them for personal storage. If you’re one of these people and you’re thinking about converting a shipping container into a garage, there are some things you should know beforehand.
Shipping containers are durable and weather resistant, so you won’t have to worry about your vehicle. But a container is only about eight feet wide, which is a problem for bigger cars. Containers are customizable, so you can add doors, windows, and even insulation. But if you use one as-is, you’ll have to contend with a lack of ventilation due to its airtight construction. Plus, you’ll have to deal with temperature swings, because it heats up in the summer and cools down in the winter.
Shipping containers can be pricey, and once you start to modify one, costs only go up from there. However, they can be cheaper than building a garage from the ground up, and since they’re portable, you can move one later on if you need to. Additionally, once you get a container in place, depending on what your plan is, you can have it ready in no time. Of course, unless you can haul it and set it up yourself, you’ll likely have to pay even more for a service to do it for you.
Understanding the laws regarding shipping container use
While there are pros and cons to using a shipping container as a garage, there are some legalities you should know as well. First off, whether you’re even allowed to use a container is determined by your local zoning laws. These laws control how property can be used; in some areas, it may even be illegal to build a shed on your property without a permit. So you may need a permit before converting, or even placing, a container on your land.
But even if you pass the zoning requirements in your area, you may not be able to move forward. That’s because you may still need approval from your local building department. These departments use building codes to review your container, ensuring it’s structurally sound and placed on a strong foundation. Then there’s how the container is classified, and whether it meets your local government’s requirements to be used as a garage. Finally, if you get all the paperwork in order and have the green light to move ahead, check with your homeowner’s association. It may have guidelines in place that can actually override what’s allowed by local law.
If you hit a roadblock, it’s not a good idea to go around the system. Setting up a container without proper approval can lead to fines. You may be ordered to stop work, or you may be forced to remove the container altogether.