
Half of US Adults Who Use Social Media Want Better Labels on AI Posts, CNET Finds


Anyone who’s scrolled social media lately knows that AI is everywhere. But we aren’t always great at spotting it when we see it. That’s a big problem, and our frustrations with AI are growing.

AI slop has infected every platform, from soulless images to bizarre videos and superficially literate text. The vast majority of US adults who use social media (94%) believe they encounter content that was created or altered by AI, but only 44% of US adults say they’re confident they can tell real photos and videos from AI-generated ones, according to an exclusive CNET survey.

Read more: AI Slop Is Destroying the Internet. These People Are Fighting to Save It.


There are a lot of different ways people are fighting back against AI content. Some solutions focus on better labels for AI-created content, since it's harder than ever to trust our eyes. Of the 2,443 respondents who use social media, half (51%) believe we need better AI labels online. Others (21%) believe there should be a total ban on AI-generated content on social media. Only a small group (11%) of respondents say they find AI content useful, informative or entertaining.

AI isn’t going anywhere, and it’s fundamentally reshaping the internet and our relationship with it. Our survey shows that we still have a long way to go to reckon with it.

Key findings

  • Most US adults who use social media (94%) believe they encounter AI content on social media, yet far fewer (44%) can confidently distinguish between real and fake images and videos.
  • Many US adults (72%) said they take action to determine if an image or video is real, but some don’t do anything, particularly among Boomers (36%) and Gen Xers (29%).
  • Half of US adults (51%) believe AI-generated and edited content needs better labeling. 
  • One in five (21%) believe AI content should be prohibited on social media, with no exceptions.

Watch this: AI Is Indistinguishable From Reality. How Do We Spot Fake Videos?

US adults don’t feel they can spot AI media

Seeing is no longer believing in the age of AI. Tools like OpenAI’s Sora video generator and Google’s Nano Banana image model can create hyperrealistic media, with chatbots smoothly assembling swaths of text that sound like a real person wrote them. 

So it’s understandable that a quarter (25%) of US adults say they aren’t confident in their ability to distinguish real images and videos from AI-generated ones. Older generations, including Boomers (40%) and Gen X (28%), are the least confident. If folks don’t have a ton of knowledge or exposure to AI, they’re likely to feel unsure about their ability to accurately spot AI.


People take action to verify content in different ways

AI’s ability to mimic real life makes it even more important to verify what we’re seeing online. Nearly three in four US adults (72%) said they take some form of action to determine whether an image or video is real when it piques their suspicions, with Gen Z being the most likely (84%) of the age groups to do so. The most obvious — and popular — method is closely inspecting the images and videos for visual cues or artifacts. Over half of US adults (60%) do this. 

But AI innovation is a double-edged sword; models have improved rapidly, eliminating the previous errors we used to rely on to spot AI-generated content. The em dash was never a reliable sign of AI, but extra fingers in images and continuity errors in videos were once prominent red flags. Newer AI models usually don’t make those pedestrian mistakes. So we all have to work a little bit harder to determine what’s real and what’s fake.


You can look for discrepancies and labels to identify AI content.


Cole Kan/CNET/Getty Images

As visual indicators of AI disappear, other forms of verifying content are increasingly important. The next two most common methods are checking for labels or disclosures (30%) and searching for the content elsewhere online (25%), such as on news sites or through reverse image searches. Only 5% of respondents reported using a deepfake detection tool or website.

But 25% of US adults don’t do anything to determine if the content they’re seeing online is real. That lack of action is highest among Boomers (36%) and those in Gen X (29%). This is worrisome — we’ve already seen that AI is an effective tool for abuse and fraud. Understanding the origins of a post or piece of content is an important first step to navigating the internet, where anything could be falsified.

Half of US adults want better AI labels

Many people are working on solutions to deal with the onslaught of AI slop. Labeling is a major area of opportunity. It relies on social media users to disclose that a post was made with the help of AI. This can also be done behind the scenes by social media platforms, but it's technically difficult, which leads to haphazard results. That's likely why 51% of US adults believe that we need better labeling on AI content, including deepfakes. Support was strongest among Millennials and Gen Z, at 56% and 55%, respectively.


Very few (11%) found AI content useful, informative or entertaining.

Cole Kan/CNET/Getty Images

Other solutions aim to control the flood of AI content shared on social media. All of the major platforms allow AI-generated content, as long as it doesn’t violate their general content guidelines — nothing illegal or abusive, for example. But some platforms have introduced tools to limit the amount of AI-generated content you see in your feeds; Pinterest rolled out its filters last year, while TikTok is still testing some of its own. The idea is to give every person the ability to permit or exclude AI-generated content from their feeds.

But 21% of respondents believe that AI content should be prohibited on social media altogether, no exceptions allowed. That number is highest among Gen Z at 25%. When asked if they believed AI content should be allowed but strictly regulated, 36% said yes. Those low percentages may be explained by the fact that only 11% find AI content provides meaningful value — that it’s entertaining, informative or useful — and that 28% say it provides little to no value.

How to limit AI content and spot potential deepfakes

Your best defense against being fooled by AI is to be eagle-eyed and trust your gut. If something is too weird, too shiny or too good to be true, it probably is. But there are other steps you can take, like using a deepfake detection tool. There are many options; I recommend starting with the Content Authenticity Initiative's tool, since it works with several different file types.


You can also check out the account that shared the post for red flags. Many times, AI slop is shared by mass slop producers, and you’ll easily be able to see that in their feeds. They’ll be full of weird videos that don’t seem to have any continuity or similarities between them. You can also check to see if anyone you know is following them or if that account isn’t following anyone else (that’s a red flag). Spam posts or scammy links are also indications that the account isn’t legit.

If you want to limit the AI content you see in your social feeds, check out our guides for turning off or muting Meta AI in Instagram and Facebook and filtering out AI posts on Pinterest. If you do encounter slop, you can mark the post as something you're not interested in, which should indicate to the algorithm that you don't want to see more like it. Outside of social media, you can disable Apple Intelligence, the AI in Pixel and Galaxy phones, and Gemini in Google Search, Gmail and Docs.

Even if you do all this and still get occasionally fooled by AI, don’t feel too bad about it. There’s only so much we can do as individuals to fight the gushing tide of AI slop. We’re all likely to get it wrong sometimes. Until we have a universal system to effectively detect AI, we have to rely on the tools we have and our ability to educate each other on what we can do now.

Methodology

CNET commissioned YouGov Plc to conduct the survey. All figures, unless otherwise stated, are from YouGov Plc. The total sample size was 2,530 adults, of which 2,443 use social media. Fieldwork was undertaken Feb. 3 to 5, 2026. The survey was carried out online. The figures have been weighted and are representative of all US adults (aged 18 plus).


Michi Debuts Prestige X430 Integrated Amplifier as Luxury Two-Channel Powerhouse


Rotel’s Michi Prestige X430 Integrated Amplifier marks the latest expansion of its luxury Michi lineup, a range that has steadily positioned itself as a serious contender in the upper tier of two-channel hi-fi. While Rotel has long enjoyed a strong reputation for well-engineered audio components, the Michi sub-brand represents its push into a more ambitious category of design, performance, and pricing.

Over the past few years, the Michi family has evolved into one of the more compelling alternatives to established high-end integrated systems. That progression has included the Michi Q5 Transport DAC (2024), the Series 2 amplifiers and preamplifiers (2023), and our earlier review of the Michi X3 Integrated Amplifier (2022). Taken together, the lineup has demonstrated that Rotel is serious about competing well above its traditional price brackets.

More importantly, the Rotel Michi range has proven capable of going head-to-head with some very established names, including the Cambridge Audio Edge Series, flagship integrated amplifiers from Marantz, the upper tier of Naim Audio’s Uniti Series, and even entry-level systems from McIntosh. That is not a small claim in a category where heritage brands have dominated for decades.

With the introduction of the Rotel Michi Prestige X430 Integrated Amplifier, Rotel appears intent on pushing that challenge even further into territory normally reserved for some of the most recognizable names in high-end audio.


Michi: Rotel’s Luxury Hi-Fi Components

The Michi line is intended to provide a clear entry point into Rotel’s most ambitious high-end audio components, built around the same design discipline, power-supply architecture, and manufacturing standards expected from true reference-level audio. It represents the company’s effort to push beyond its traditional value-focused reputation and compete directly in the upper tier of two-channel hi-fi.

Drawing on more than 60 years of amplifier and circuit development, Michi models aim to deliver a thoroughly modern listening experience defined by effortless dynamics, exceptional clarity, and the kind of long-term reliability Rotel has built its reputation on. The easiest way to think about it is simple: Michi is Rotel operating without a price ceiling.

For 2026, the brand is expanding the Michi lineup with two new Prestige Series components: the Prestige Q430 CD Player ($3,999) and the Prestige X430 Stereo Integrated Amplifier ($4,999). The visually matched combo also establishes a new design ethos within the existing Michi lineup, featuring an anodized aluminum faceplate, top cover, and knurled volume knob, along with a glass front panel and high-resolution color display.

Michi Prestige Q430 CD Player Stacked atop Michi Prestige X430 Integrated Amplifier

Michi Prestige X430 Integrated Amplifier Overview

Class A/B Power

The Michi Prestige X430 Integrated Amplifier is a high-power Class A/B design engineered to deliver 210 watts per channel into 8 ohms and 340 watts per channel into 4 ohms, providing the current and headroom required to properly control demanding loudspeakers.


The X430’s high-current output stage is supported by an exceptionally low-noise architecture built around an oversized custom toroidal transformer manufactured in-house by Rotel, paired with multi-stage voltage regulation that reduces noise and ripple throughout the circuit. Optimized power and signal paths further minimize distortion, helping the amplifier maintain stability, clarity, and dynamic authority even when driving more demanding loudspeaker loads.

The result is what Rotel describes as an “acoustically transparent silent background”: a claimed lower noise floor that allows subtle musical details to emerge more clearly. In practical terms, that means quieter silences, improved separation between instruments, and a more convincing sense of depth and soundstage, qualities that tend to stand out most with vocals and acoustic recordings where spatial cues and microdetail matter most.

High-resolution DAC

The X430’s digital section includes a PC-USB input supporting PCM up to 32-bit/384 kHz and DSD256 (4x) via DoP, along with coaxial and optical S/PDIF inputs capable of handling LPCM signals up to 24-bit/192 kHz.


For digital conversion, the amplifier employs the ESS SABRE ES9039Q2M DAC, chosen for its ability to preserve fine detail and micro-dynamics. The result is improved clarity, imaging, and sonic texture, with more air and space around instruments, cleaner transient edges without harshness, and greater realism when playing high-resolution streaming or other digital sources.

HDMI ARC & Digital Audio Connections

The X430 provides a wide range of digital audio connections designed to integrate easily into modern two-channel systems.

HDMI ARC allows users to route television audio directly through the amplifier with a single cable while maintaining convenient volume control from the TV remote.

Additional coaxial, optical, and PC-USB inputs support connections to streamers, disc players, and computers. The result is a cleaner, more impactful TV and music setup without the complexity or clutter of a surround sound receiver.


Bluetooth

In addition to its physical inputs, the X430 also includes Bluetooth connectivity for streaming music from smartphones, tablets, and other compatible source devices. It supports the aptX HD and AAC codecs, enabling higher-quality wireless playback than standard Bluetooth.

It’s worth noting, however, that the amplifier does not support newer high-bitrate codecs such as aptX Lossless or LDAC. For listeners seeking the highest possible streaming quality, the X430’s Roon support and wired digital inputs remain the better path to extracting maximum performance from modern streaming sources.

System Flexibility

For both modern and legacy analog sources, the X430 provides balanced XLR and RCA analog inputs, along with a moving magnet (MM) phono input for connecting turntables.

In addition, dual subwoofer outputs and A/B speaker switching make the X430 easy to integrate and upgrade over time. 


Premium Build

The X430 features a precision-machined chassis, a knurled aluminum volume control, and a large color display that immediately conveys the Michi brand’s premium design ethos. The display offers selectable VU meter and spectrum analyzer views, adding a visual element to the listening experience that many owners will appreciate every time the system powers up.

Physically, the X430 measures 431 x 148 x 422 mm (17 x 6 x 16.5 inches) with a front panel height of 132 mm (5.25 inches) and a net weight of 16.9 kg (37.3 lbs). Compared with the larger Michi Series 2 components, the X430 is smaller, lighter, and more compact, making it easier to integrate into a wider range of systems and furniture without sacrificing the brand’s signature build quality.

Comparison

| Specification | Michi Prestige X430 (2026) | Michi X5 Series 2 (2023) | Michi X3 Series 2 (2023) |
| --- | --- | --- | --- |
| Product Type | Integrated Amplifier | Integrated Amplifier | Integrated Amplifier |
| Price | $4,999 | $7,999 | $5,799 |
| Amplifier Type | Class A/B | Class A/B | Class A/B |
| Analog Inputs | 3 x RCA; 1 x XLR; 1 x Phono (MM) | 4 x RCA; 1 x XLR; 1 x Phono (MM/MC) | 3 x RCA; 1 x XLR; 1 x Phono (MM) |
| Analog Outputs | 2 x Preamp; 2 x Subwoofer | 2 x Preamp; 2 x Subwoofer | 2 x Preamp; 2 x Subwoofer |
| Speaker Outputs | A, B, A+B | A, B, A+B | A, B, A+B |
| Digital Inputs | 3 x Coaxial; 3 x Toslink Optical; PC-USB (PCM 32-bit/384 kHz, DSD256/4x with DoP); 1 x HDMI ARC | 3 x Coaxial; 3 x Toslink Optical; PC-USB (PCM 32-bit/384 kHz, DSD256/4x with DoP) | 3 x Coaxial; 3 x Toslink Optical; PC-USB (PCM 32-bit/384 kHz, DSD256/4x with DoP) |
| Bluetooth | aptX HD / AAC | aptX HD / AAC | aptX HD / AAC |
| Maximum Power Per Channel | 340 watts @ 4 ohms | 600 watts @ 4 ohms | 350 watts @ 4 ohms |
| Continuous Power Per Channel | 210 watts @ 8 ohms | 350 watts @ 8 ohms | 200 watts @ 8 ohms |
| Total Harmonic Distortion | < 0.03% | < 0.009% | < 0.008% |
| Intermodulation Distortion (60 Hz : 7 kHz, 4:1) | < 0.03% | < 0.03% | < 0.03% |
| Frequency Response | Phono: 20 Hz–20 kHz (+0 dB, -0.5 dB); Line: 10 Hz–100 kHz (+0 dB, -0.5 dB) | Phono: 20 Hz–20 kHz (+0 dB, -0.2 dB); Line: 10 Hz–100 kHz (+0 dB, -0.6 dB) | Phono: 20 Hz–20 kHz (+0 dB, +0.2 dB); Line: 10 Hz–100 kHz (+0 dB, -0.4 dB) |
| Damping Factor (20 Hz–20 kHz, 8 ohms) | 260 | 350 | 350 |
| Input Sensitivity / Impedance | Phono (MM): 5.56 mV / 47k ohms; Line (RCA): 356 mV / 100k ohms; Line (XLR): 743 mV / 50k ohms | Phono (MM): 5.7 mV / 47k ohms; Phono (MC): 570 uV / 100 ohms; Line (RCA): 380 mV / 100k ohms; Line (XLR): 580 mV / 100k ohms | Phono (MM): 5.2 mV / 47k ohms; Line (RCA): 40 mV / 100k ohms; Line (XLR): 540 mV / 100k ohms |
| Input Overload | Phono (MM): 66 mV; Line (RCA): 4 V; Line (XLR): 10 V | Phono (MM): 197 mV; Phono (MC): 19 mV; Line (RCA): 12.5 V; Line (XLR): 12.5 V | Phono (MM): 60 mV; Line (RCA): 3.5 V; Line (XLR): 5.5 V |
| Signal-to-Noise Ratio (IHF “A” weighted) | Phono (MM): > 80 dB; Line (RCA): > 105 dB; Line (XLR): > 100 dB | Phono (MM/MC): > 80 dB; Line (RCA): > 102 dB | Phono (MM/MC): > 80 dB; Line (RCA): > 102 dB |
| Preamplifier Output Level / Impedance | 1.92 V / 100 ohms | 1 V / 470 ohms | 1.9 V / 100 ohms |
| Tone Controls | Bass: ±10 dB at 100 Hz; Treble: ±10 dB at 10 kHz | Bass: ±10 dB at 100 Hz; Treble: ±10 dB at 10 kHz | Bass: ±10 dB at 100 Hz; Treble: ±10 dB at 10 kHz |
| Channel Separation | Phono: > 55 dB; Line: > 55 dB | Phono: > 65 dB; Line: > 65 dB | Phono: > 55 dB; Line: > 55 dB |
| Frequency Response (Digital Section) | 10 Hz–20 kHz (+0, -0.4 dB max) | 20 Hz–20 kHz (+0, -0.4 dB max) | 20 Hz–20 kHz (+0, -0.4 dB max) |
| Signal-to-Noise Ratio (Digital Section, IHF “A” weighted) | > 110 dB | > 112 dB | > 102 dB |
| Input Sensitivity / Impedance (Digital Section) | 0 dBFS / 75 ohms | 0 dBFS / 75 ohms | 0 dBFS / 75 ohms |
| Preamplifier Output Level (Digital Section) | 1.15 V (at -20 dB volume) | 1.2 V (at -20 dB volume) | 1.3 V (at -20 dB volume) |
| Digital-to-Analog Converter | ESS ES9039Q2M | ESS SABRE ES9028PRO | ESS SABRE ES9028PRO |
| Coaxial/Optical Digital Signals | SPDIF LPCM (up to 24-bit/192 kHz) | SPDIF LPCM (up to 24-bit/192 kHz) | SPDIF LPCM (up to 24-bit/192 kHz) |
| PC-USB | Up to 32-bit/384 kHz, no driver installation required; DSD (up to 4x, 11.2 MHz) and DoP (up to 2x, 5.6 MHz); Roon Tested | USB Audio Class 1.0 (up to 24-bit/96 kHz); USB Audio Class 2.0 (up to 32-bit/384 kHz, driver installation required); DSD and DoP support; MQA and MQA Studio support; Roon Tested | USB Audio Class 1.0 (up to 24-bit/96 kHz); USB Audio Class 2.0 (up to 32-bit/384 kHz, driver installation required); DSD and DoP support; MQA and MQA Studio support; Roon Tested |
| HDMI | Supports CEC with the ARC function; 2-channel PCM only (up to 24-bit/48 kHz) | N/A | N/A |
| Control | Wireless Remote, 12V Trigger In/Out, RS232 | Wireless Remote, 12V Trigger In/Out, RS232 | Wireless Remote, 12V Trigger In/Out, RS232 |
| Power Requirements | Europe: 230 V, 50 Hz; USA: 120 V, 60 Hz | Europe: 230 V, 50 Hz; USA: 120 V, 60 Hz | Europe: 230 V, 50 Hz; USA: 120 V, 60 Hz |
| Power Consumption | 520 watts | 850 watts | 500 watts |
| Standby Power Consumption | Normal: < 0.5 watts; Network Wakeup: < 2 watts | Normal: < 0.5 watts; Network Wakeup: < 2 watts | Normal: < 0.5 watts; Network Wakeup: < 2 watts |
| BTU (4 ohms, 1/8th power) | 1,476 BTU/h | 2,194 BTU/h | 1,303 BTU/h |
| Dimensions (WHD) | 431 x 148 x 422 mm (17 x 6 x 16.5 in) | 485 x 195 x 452 mm (19 x 7.625 x 17.75 in) | 485 x 195 x 452 mm (19 x 7.625 x 17.75 in) |
| Front Panel Height | 132 mm (5.25 in) | 177 mm (7 in) | 132 mm (5.25 in) |
| Weight (net) | 16.9 kg (37.3 lb) | 43.8 kg (96.56 lb) | 28.9 kg (63.7 lb) |
| Finish | Black | Black | Black |

The Bottom Line

At $4,999, the Michi Prestige X430 Integrated Amplifier lands in an increasingly competitive segment of the high-end integrated amplifier market. What makes it stand out is the combination of serious Class A/B power (210W into 8 ohms), premium build quality, a modern ESS SABRE DAC, HDMI ARC connectivity, and Rotel’s in-house power supply design, all wrapped in a chassis that is smaller and lighter than the Michi X3 and X5 Series 2 models. In practical terms, it offers robust power delivery, clean industrial design, and a strong digital section at an approachable price point.

There are trade-offs. The X430 drops support for MQA decoding and Moving Coil phono cartridges, and its power output is slightly lower than the larger Michi integrated amplifiers. But the addition of HDMI ARC and a newer DAC platform makes it better aligned with how many listeners actually use their systems today—combining streaming sources, digital playback, and television audio in a single two-channel setup.


The X430 is clearly aimed at listeners who want reference-level integrated amplifier performance without stepping into five-figure territory. It should appeal to owners of demanding loudspeakers, two-channel purists who still want strong digital connectivity, and anyone looking for a serious alternative to integrated amplifiers from Cambridge Audio’s Edge Series, Marantz’s top-tier models, the upper end of Naim’s Uniti lineup, or even entry-level McIntosh—all while staying just under the psychological $5,000 barrier.

Price & Availability

The Michi Prestige X430 Integrated Amplifier will initially be available through Authorized Dealers in North America beginning March 2026 for $4,999. Global availability is expected to follow early in the second quarter of 2026 with pricing set at €4,999 or £4,499.

For more information: rotel.com/product/x430


OpenAI built a $180 billion charity. Will it do any good?


When Sam Altman first told her that he’d never let OpenAI go corporate, that what he and his colleagues were building was too powerful to be driven by investors, Catherine Bracy more or less believed him.

The conversation took place in 2022, when Bracy, CEO and founder of the social mobility-focused nonprofit TechEquity, was interviewing Altman for a book she was writing about the dangers of venture capital. It was before Altman’s mysterious firing and unfiring a year later, after which he mostly stopped responding to Bracy’s texts.

And ever since then, OpenAI — which was initially founded as a nonprofit in 2015 to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return” — has been publicly trying to escape the confines of its charitable roots. Today, OpenAI contains both a corporate arm focused on building and selling AI and a nonprofit arm with a stated mission of ensuring that AI benefits people.

During the controversial process of trying to fully sever the two in 2024, OpenAI lost about half of its AI safety staffers and much of its senior leadership. That was followed by intensified scrutiny from state attorneys general, nonprofit legal experts, competitor companies, effective altruists, Nobel Prize winners, vast swaths of California's philanthropic community, and one of its original funders, Elon Musk. Different sides had different interests, but the overall argument was that shifting to a for-profit model would create a fiduciary duty to investors that would inherently clash with OpenAI's original mission of safety and public benefit.


Is OpenAI’s new foundation a $180 billion distraction?

  • Last October, OpenAI agreed to make its nonprofit arm very rich. The OpenAI Foundation is now worth about $180 billion and it has two main objectives:
    • Helping the world adapt to and benefit from AI by giving money to charity.
    • Acting as a moral compass for OpenAI the company, especially when it comes to safety and security decisions.
  • The foundation has given away about $40.5 million so far, a small fraction of the billions it plans to eventually donate. But critics see the donations as a distraction.
  • While OpenAI says its foundation has the final say on security and safety-related decisions, the company has come under scrutiny in recent months for striking a deal with the Pentagon, fighting against statewide AI legislation, and testing ads for free users.
  • Even if the foundation does eventually give away billions of dollars, it may never be enough to make up for what the public lost in allowing OpenAI to go corporate.

Nonetheless, OpenAI did finally strike a contortive restructuring deal last October. Essentially, the for-profit arm became what is known as a public benefit corporation (PBC), called the OpenAI Group. The original nonprofit became the OpenAI Foundation, which has a 26 percent stake currently worth $180 billion in the PBC, plus a sliver of exclusive legal control over certain major decisions.

One effect of the transition was that it essentially required OpenAI to put a number on what it owed the public for converting what had been a project for all humanity into something that most directly benefits the company’s investors. The resulting stake of the OpenAI Foundation is big enough to instantly make it one of the wealthiest charities in the country, or in OpenAI’s words, the “best-equipped nonprofit the world has ever seen.” On paper, at least, the foundation is now significantly richer than the entire country of Luxembourg. Even the Gates Foundation has only $77.6 billion in assets, less than half of what the OpenAI Foundation can draw from, though it’s important to note that most of the wealth of the OpenAI Foundation is locked in fairly illiquid shares within the still private company, which limits how quickly any money can be given away.

Still, its sheer size means that the OpenAI Foundation stands to eventually be a transformative presence on the philanthropic stage, one way or another. But while OpenAI says the foundation will eventually give out many billions of dollars in philanthropy to ensure that “artificial general intelligence benefits all of humanity,” it’s uncertain that a socially beneficial philanthropy can exist side by side with a company that is fighting an existential battle over who will dominate the AI industry.

“The unspoken truth here is that they’re never going to make a decision that is bad for the company,” Bracy said. “These two entities cannot live under the same roof” where “the mission is in control.” (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent.)


The foundation’s first gifts came in the form of $40.5 million in no-strings-attached grants to over 200 community nonprofits, like churches, food banks, and afterschool programs. Notably, most grantees had little to no connection to AI or technology — and just as notably, several of these early grantees just so happen to be members of EyesOnOpenAI, a coalition of California nonprofits critical of OpenAI’s privatization that formed in 2025.

But there are signs the foundation will soon pivot into grantmaking that’s more obviously relevant to the company’s original charter, which aimed to ensure that the benefits of AI are broadly distributed while also prioritizing long-term safety in the technology’s development. On Feb. 19, OpenAI — the company, not the foundation — announced a $7.5 million grant in conjunction with Microsoft, Anthropic, Amazon, and other major tech companies for a new, international project aimed at researching how to make AI systems safer.

“The unspoken truth here is that they’re never going to make a decision that is bad for the company.”

— Catherine Bracy, TechEquity founder and CEO

The real questions around the OpenAI Foundation have less to do with how much it is giving and to whom than whether it is actually able to carry out its contractual oversight role. In theory, the foundation should be ensuring that OpenAI is the standard-bearer for ethical decision-making at the frontier of AI development. That would be a unique contribution to the field — and an embodiment of OpenAI’s original mission — that no amount of grantmaking could replace. Yet, a series of troubling recent decisions by the company hardly seems to bear out that vision.


OpenAI has begun its new corporate journey by debuting ads on its free tier service, firing an executive who raised safety concerns about a soon-to-come NSFW mode for ChatGPT on charges of sexual discrimination against a male colleague, and burning cash while its president funnels millions of dollars into Donald Trump’s super PAC. OpenAI President Greg Brockman has also teamed up with the private equity firm Andreessen Horowitz and Palantir’s co-founders to fund a $125 million super PAC aimed at promoting AI-friendly policies. Along with Google, xAI, and Anthropic, OpenAI has also come under scrutiny in recent weeks for its defense contracts with the Pentagon.

When OpenAI succeeded in its campaign to wrest its foundational new technology from nonprofit control, it opened the door for many of these decisions. Even $180 billion in charity might not be enough to make up for the difference.

How OpenAI shed its nonprofit skin

Corporate charity is ubiquitous in the tech world, especially among the biggest players. Microsoft plans to donate $4 billion in cash and AI cloud technology to schools and nonprofits by 2030. Google gives away some $100 million annually, often to organizations focused on artificial intelligence and technology.


But from the beginning, OpenAI was different. Rather than making money and giving some of it to charity, OpenAI was the charity. It was founded as a nonprofit research lab with about $1 billion in start-up donations, mostly from tech titans like Altman, Brockman, and Elon Musk.

There are some structural advantages to being a charity. You can't accept investments, but you can accept donations, and you don't have to pay most taxes. What's more, in those early days, OpenAI's stated mission — to build safe AI without the pressures of financial incentive — gave it a major boost when it came to recruiting rarefied talent. Machine learning prodigy Ilya Sutskever told Wired in 2016 that he chose to leave Google to become OpenAI's chief scientist "to a very large extent, because of its mission."

But there were limits to being a fully nonprofit entity. In pursuit of financing amid the rising computing costs of cutting-edge AI, OpenAI created its capped-profit subsidiary in 2019 to manage a new $1 billion investment from Microsoft. Three years later, ChatGPT took the world by storm. In 2023, Sutskever and other members of OpenAI's board tried and ultimately failed to oust Altman amid accusations of dishonesty. (Altman denied those accusations.) A year later, the organization announced its intention to go fully corporate and splinter off the nonprofit into its own fully independent entity.

The transition to for-profit “just didn’t smell right,” said Orson Aguilar, head of LatinoProsperity, an economic justice nonprofit and Bracy’s co-leader at EyesOnOpenAI. He wasn’t alone: By early 2025, a dozen former OpenAI employees filed an amicus brief aimed at stopping the conversion because it would “fundamentally violate its mission.” And more than 60 nonprofit, philanthropy, and labor leaders, many of them based in OpenAI’s home state of California, agreed that the attempt to privatize felt unfair given the extent to which the company benefited from its tax-free status during its early development.

To grasp what this all means, try thinking of OpenAI’s for-profit arm as an angsty tween and the nonprofit as her well-meaning, but often powerless parent. For years, the tween had been allowed to do her own thing, but only within certain limits — she still had to do her homework and get home by a certain time. Now imagine, she’s sick of having a curfew. “Nobody else has one!” She still lives in her mother’s house, but she wants to follow her own rules.

That’s kind of what happened here. Up until now, OpenAI’s for-profit subsidiary had a capped-profit model, meaning there were limits on how much money investors could make. But this new deal paved the way for the for-profit to become a full-time corporate girlie, charitable bylaws be damned. And while OpenAI’s new public benefit corporation still technically exists under the original nonprofit’s control, it mostly follows its own rules. It can raise as much money as it wants and eventually, it will likely go public.

But California history did provide some hope that the public might at least get some meaningful benefit from the transition. Back in the 1990s, California’s branch of the health insurer Blue Cross Blue Shield — then a nonprofit called Blue Cross of California — decided to privatize. After some haggling with state regulators, the company agreed to forfeit all of its assets, worth $3.2 billion, to a pair of independent nonprofits in exchange for going private. The result was the California Endowment, which is now the state’s largest health foundation.

Many nonprofit leaders in California hoped that OpenAI, which is headquartered in the state, would strike a similar deal, ceding a majority of its assets to a fully independent nonprofit. And those assets were and are enormous.

Gary Mendoza, a former state official who oversaw the Blue Cross deal, estimated the OpenAI nonprofit’s rightful assets at over $250 billion, or half the company’s $500 billion worth. “Anything short of 50 percent,” he told the San Francisco Examiner last year, “is a missed opportunity.” And beyond money for the public, assuming the nonprofit kept its shares, it would add up to enough influence to really shape OpenAI’s corporate decision-making at a key moment for the future of artificial intelligence.

Given that the OpenAI Foundation ended up with little more than a quarter of the final company, this is obviously not what happened. But EyesOnOpenAI’s years-long lobbying effort was not a total bust. The criticism proved powerful enough that last May, OpenAI was forced to give up on an initial plan to restructure away its nonprofit assets into a new organization wholly disconnected from OpenAI, which would have left the nonprofit with no legal control over the for-profit arm.

On paper, the new deal includes some meaningful concessions. It contractually requires the nonprofit mission to come first on safety and security issues, with no regard to shareholder interests. The memorandum also calls on OpenAI to “mitigate risks to teens” specifically. It made the foundation the controlling shareholder of the corporation, affording it the right to appoint corporate directors and oversee critical decisions like a sale.

If OpenAI abided by all of its terms and eventually started giving away billions of dollars of philanthropy each year, then the world — or at least California, where many of OpenAI’s grants have been concentrated — could stand to greatly benefit from it.

Random acts of corporate kindness

And this brings us to the $40.5 million that OpenAI gave to over 200 nonprofits toward the end of last year.

Many of these charities applied for the grants with sophisticated ideas about how to help their communities integrate or adapt to AI, though they can ultimately use the money however they see fit. Among them were public libraries, Boys and Girls Clubs, churches, food banks, and legal aid nonprofits. Coming at a moment when the majority of the country's nonprofits face existential funding cuts, "it was just the perfect timing," said Thomas Howard Jr., head of Kidznotes, a North Carolina nonprofit focused on music education that received $45,000 in OpenAI's first round of grants.

“There’s nothing I’ve seen that gives me reassurance that they’ll catch the important safety issues when they come up — or that they’ll be doing a thorough investigation of the grantmaking opportunities.”

— Tyler Johnston, Midas Project executive director

Civil society's fight over the OpenAI transition, then, won at least enough concessions to help these worthy organizations and to retain some semblance of nonprofit control over the for-profit's activities. So why do so many people in the philanthropic community remain so negative about the foundation?

“I’m all for nonprofits getting money,” said Bracy, the head of TechEquity. “I don’t begrudge any organizations that took the money, but I don’t think it’s some indication that OpenAI is living up to the mission of the nonprofit.”

$40.5 million, of course, is only 0.02 percent of the OpenAI Foundation’s on-paper $180 billion windfall. How the foundation will eventually spend the other 99.98 percent remains to be seen, though the foundation has said that at least $25 billion will ultimately go to scientific research and what it’s calling “technical solutions for AI resilience.” The company plans to announce a second wave of grants directed at organizations using AI to work across issues like health in the coming months.
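That 0.02 percent figure is easy to verify with quick arithmetic; the dollar amounts below are the ones reported above:

```python
# Sanity check: the $40.5 million in first-round grants as a share of
# the OpenAI Foundation's on-paper $180 billion windfall.
grants_usd = 40.5e6   # $40.5 million given to 200+ nonprofits
stake_usd = 180e9     # the foundation's reported equity value

share_pct = grants_usd / stake_usd * 100
print(f"{share_pct:.4f}%")  # 0.0225%, which rounds to the 0.02 percent cited
```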

“We are doing the important work of engaging with experts, learning from communities, and shaping a point of view of where Foundation investments can make the greatest difference,” the OpenAI Foundation’s board of directors said in response to a request for clarity on where future funding will go. “We look forward to sharing more soon.”

But so far, critics remain skeptical. OpenAI has done little to prove that its newfound philanthropy is more than just “a smoke and mirrors show,” argued one member of the Coalition for AI Nonprofit Integrity (CANI) — a coalition composed largely of AI insiders, including former OpenAI employees, furiously opposed to the restructuring. He spoke on the condition of anonymity because he feared retaliation from OpenAI, which has accused CANI of being a front funded by Musk. (CANI has denied receiving any such funds — though not for lack of trying. If you scroll to the bottom of OpenTheft, a website created by CANI, you’ll find a direct plea to Musk for donations.)

A man holds up an anti-AI sign at a protest outside of OpenAI’s headquarters. The sign says uncontrollable, unalignable, unacceptable. Ban superintelligence.

Critics of OpenAI say the company is not doing enough to ensure its technology develops safely, regardless of how much its foundation gives to charity.
Wiktor Szymanowicz/Future Publishing via Getty Images

While a spokesperson for OpenAI said that the foundation is in the process of building a dedicated team, and has sought the input of both nonprofit leaders and experts in how society can adapt to AI, the company has yet to make any major staffing announcements for its grantmaking arm. For now, with the exception of Zico Kolter, the head of the nonprofit’s safety committee, the foundation board still shares the same members as the corporate board, including CEO Sam Altman. The idea is that these board members can put on different hats when meeting about nonprofit versus corporate priorities, asserting the foundation’s oversight when needed. But it has created the appearance of a conflict of interest.

When asked for mechanisms and examples for how the foundation has responded to situations where its mission conflicts with shareholder interests, given the overlapping board membership, the spokesperson said that OpenAI has conflict-of-interest policies and governance procedures in place to ensure its directors only consider the mission when they meet, as they regularly do, about nonprofit issues.

The company also said the foundation board constantly exercises its oversight role, including for all new major product releases, like the release of GPT‑5.3‑Codex, an advanced agentic coding model, last month. The AI watchdog group the Midas Project, a frequent thorn in OpenAI’s side, accused the company of violating safety standards, an allegation that OpenAI fervently denied.

In any case, since the OpenAI Foundation is not a separate entity with its own independent board, some critics have compared it to other feel-good corporate social responsibility ventures, like the McDonald’s Ronald McDonald House, Walmart’s healthy foods program, and Home Depot’s work with veterans.

Corporate social responsibility has its place, and it can do real good. But based on the OpenAI Foundation's structure and how it has conducted its grantmaking so far, Bracy believes it will probably never fund anything "they see as a threat to the growth of the company," despite the fact that the need for guardrails on unrestricted AI development featured prominently in the company's original mission. "They're going to do what's best for the bottom line of the for-profit."

Critics like Bracy also doubt the OpenAI Foundation will live up to its other main mandate, which is to govern all safety and ethics-related issues for the broader organization, including the responsibility to review new products.

“Instead of a vehicle to serve humanity, it’s become a vehicle to serve one individual and a few of his friends and investors.”

— Anonymous member of CANI

While the nonprofit and its mission do legally retain control over the OpenAI corporation — particularly when it comes to safety issues — that may add up to little, given that the OpenAI Foundation doesn’t seem to be an independently governed foundation. It is not, in fact, even technically a foundation, but a public charity, which means it is not required to pay out a certain percentage of its assets each year under IRS requirements.

And while the nonprofit retains significant oversight powers on paper — including the authority to halt AI releases it deems unsafe — in practice, critics say, it’s unclear whether it would ever use them.

Increasingly, OpenAI has also been wading into political lobbying efforts that seem at odds with its mission to promote long-term safety in AI development. When California lawmakers were debating SB 53, a law requiring transparency reports from leading AI companies, OpenAI lobbied against it. And the company has come under intense scrutiny in recent weeks for its contract with the Pentagon, which has blacklisted OpenAI's rival Anthropic for raising ethical concerns about how its technology could be used.

Why the fight is not over

OpenAI’s new corporate arrangement is very, very new. It’s still possible that OpenAI’s grantmaking arm really does staff up, and the nonprofit builds an independent board that has the power to enforce hard ethical decisions for the company, even when it hurts investors’ returns.

“They have a lot of freedom to continue to do good,” said Tyler Johnston, executive director of the Midas Project, but that would require them to “actually shake things up” and “show that they’ve created the scaffolding that will enable them to actualize their mission.”

But so far, “there’s nothing I’ve seen that gives me reassurance that they’ll catch the important safety issues when they come up,” he said. “Or that they’ll be doing a thorough investigation of the grantmaking opportunities.”

If OpenAI does not abide by the terms of its new contract — if the company, for example, tries to thwart an attempt to roll back a dangerous new tool — then California’s attorney general does have the power to demand answers from the company, and in theory, revisit the agreement’s terms.

Beyond the agreement, there are a few quite public means by which OpenAI’s former lovers, skeptics, and nemeses are still trying to press rewind on the restructuring.

Chief among them is Elon Musk, OpenAI’s most prominent original donor and co-founder. In between trading embarrassing jabs with Altman on X, Musk took OpenAI to court last year over claims that he was “assiduously manipulated” into donating tens of millions of dollars to a nonprofit research lab that turned into an “opaque web of for-profit OpenAI affiliates.”

Elon Musk and Sam Altman speak on a panel together for Vanity Fair in 2015.

Elon Musk was a major early supporter of OpenAI a decade ago, when it was still a nonprofit lab. Now, he’s suing to get his donations back.
Michael Kovac/Getty Images for Vanity Fair

A judge has found enough cause for the case to proceed to trial this April. Musk is suing for up to $134 billion in damages, though OpenAI has told its investors that it believes it would only be on the hook for Musk's $38 million in original donations. OpenAI, for its part, has accused Musk of an "unlawful campaign of harassment."

Meanwhile, CANI is still holding out hope that it can convince the people of California to vote for a hyperspecific ballot measure, the California Charitable Assets Protection Act, which could reverse the decision to allow OpenAI — or any other “organizations developing transformative technologies” — to go corporate.

“They’re cutting corners on safety because of the race to artificial general intelligence that they just want to win,” said the member of CANI. “Instead of a vehicle to serve humanity, it’s become a vehicle to serve one individual and a few of his friends and investors.”

So maybe the fight over OpenAI's restructuring isn't completely over — but it's probably on its last legs. And if OpenAI continues on the same path, it's unlikely that the public will ever really benefit in the way it ought to, given the charitable advantages OpenAI enjoyed in its early days. At the very least, $40.5 million is just not going to cut it. Even $180 billion might fall far short.

“I think it’s them saying, ‘Listen, I dare you to enforce this,’” said Bracy, who believes OpenAI is “banking on the fact that they’re worth almost a trillion dollars, and they have endless resources — and the state of California does not.”

Source link

Tech

AT&T’s New App Bundles Mobile and Home Internet Along With an AI Assistant

Published

on

AT&T is releasing a new app on Wednesday, replacing the MyAT&T app previously used to manage account options for both mobile and broadband customers. It incorporates a new AI-based chat assistant, parental controls and more details about call and data usage.

Typically, an app release isn't newsworthy on its own. But carriers' apps are becoming the central way people interact with their wireless and home internet services, from checking and paying their bills to troubleshooting connection problems. Verizon has enlisted Google Gemini for front-line support in its app, and T-Mobile uses its T-Life app to keep customers on top of weekly perks and even to encourage potential customers to switch carriers.

AT&T’s new app — simply, if confusingly, called just AT&T — brings together its mobile and home internet features for what the company calls “converged” customers who subscribe to both. It also has a cleaner design and feels faster overall.

I tried a beta version of the app before launch, and one of the first things I noticed compared with the MyAT&T app was the removal of a long-standing annoyance. In the old app, looking up information sometimes displayed it in a web browser embedded within the interface: I'd see the right content, but it felt like being handed off to something else, which was disorienting.

“Our data shows that if there is friction in [customers’] experiences, people just drop off,” said Andrew Solmssen, assistant vice president of Digital Customer Growth at AT&T. “So we worked really hard on” the design and performance.

AI-powered converged assistant

The new AT&T app includes the buttons and menus you’d expect to navigate to view your bill, explore other plans and services, and shop for phones and accessories. But Solmssen said the development teams recognized that those structures don’t work for everyone, which is why a major new feature is a generative-AI assistant named Andi.

“We’re finding in our testing that people find [these tasks] to be a little easier to do directly through a conversation,” Solmssen said. 

That also allows customers to change context without having to start over or navigate to a new section. If they’re checking whether an International Day Pass is available, for example, and then switch to wanting to know the day pass rates, it’s a matter of asking a follow-up question in the same chat, he said.

“The focus here is serving the customer in the best way that the customer wants to be served,” said Jeff Dixon, AT&T assistant vice president of Digital Product Management and Development.

The feature is built using components from licensed LLMs such as Google’s Gemini and OpenAI’s models. Customer data remains with AT&T and isn’t shared with outside companies. “Our data is all sequestered,” Dixon said. “There’s extensive red-teaming… [and] quite a lot of rigorous work just to make sure everything’s safe.”

In my limited testing with the beta app, getting information from the AI assistant was hit-or-miss. When I asked Andi how long it had been since I last used data on my Apple Watch, it showed me prices to buy a new watch. And when I asked it to recommend a plan for my account, it suggested the AT&T Unlimited Premium PL, which was retired last week in favor of the new Premium 2.0 plan.

I next asked it to compare Premium 2.0 with my current plan, but it couldn’t access it. So, in this interaction at least, it’s not pulling customer information into the conversation. But when I asked it to compare the Unlimited Elite plan with the Premium 2.0 plan, it gave me bulleted lists of features and a numbered summary of their differences.

I thought my expectations might be too high, but I realized they aren’t really my expectations: Chatbots like this are meant to be conversational to give you an experience more like talking to a real person. If I walked into an AT&T store and chatted up one of the employees, they could pull up my account and answer questions with that information at hand.

“It’s early enough days that we’re going to have to see how customers use it, how customers like it,” Solmssen said, adding that it still includes the option of going into a store to work with an AT&T representative or contacting phone support.

The new AT&T app has an AI-based chat (left) and controls for pausing devices or groups of devices (right).

AT&T/Screenshot by Jeff Carlson

Parental controls, detailed data and improved messages

Another new feature in the app lets you pause devices or sets of devices connected to your accounts. In the example Solmssen gave, if parents want to ensure some time away from phones during dinner or a family activity, they can pause each device for 30 minutes, 2 hours or 24 hours. That can be done on an individual level or in a group that includes each kid’s phone. While taking a phone time-out for family dinner is a benign scenario, others — including parental control that temporarily turns off kids’ phones wherever they are — could be overbearing.

If the family is a converged customer with both mobile and home internet on the same account, they can also pause Wi-Fi access to the devices using the same feature.

Groups can also be set up with downtime schedules, such as being offline during hours when the kids (or even the parents) should be sleeping.

A couple of other features stand out. The app shows more detailed usage statistics, such as for data being used by each device on the account, calls and texts and hotspot data.

“Even customers who are on unlimited wireless and unlimited internet are really curious about the data they’re using,” Solmssen said. “Being able to see that your child’s devices were using a ton of data at 4 a.m. is incredibly valuable.”

AT&T has also cleaned up the Messages interface. Hopefully, this means no more notifications that show up and then disappear into the ether if you dismiss them before reading.

The app is available to download now, and is also being rolled out gradually over the next few weeks to customers who have automatic app updates enabled on their iPhone or Android phones.

Source link


Tech

Today’s NYT Connections Hints, Answers for March 18 #1011

Published

on

Looking for the most recent Connections answers? Click here for today’s Connections hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle, Connections: Sports Edition and Strands puzzles.


Today’s NYT Connections puzzle is pretty tricky, but musicians might find the blue group easy. Read on for clues and today’s Connections answers.

The Times has a Connections Bot, like the one for Wordle. Go there after you play to receive a numeric score and to have the program analyze your answers. Players who are registered with the Times Games section can now nerd out by following their progress, including the number of puzzles completed, win rate, number of times they nabbed a perfect score and their win streak.

Read more: Hints, Tips and Strategies to Help You Win at NYT Connections Every Time

Hints for today’s Connections groups

Here are four hints for the groupings in today’s Connections puzzle, ranked from the easiest yellow group to the tough (and sometimes bizarre) purple group.

Yellow group hint: Time between two things, maybe.

Green group hint: That smarts!

Blue group hint: Rockers know these well.

Purple group hint: You might write one out to pay a bill.

Answers for today’s Connections groups

Yellow group: Interval.

Green group: React to a stubbed toe.

Blue group: Guitar effects pedals.

Purple group: ____ check.

Read more: Wordle Cheat Sheet: Here Are the Most Popular Letters Used in English Words

What are today’s Connections answers?

The completed NYT Connections puzzle for March 18, 2026.

NYT/Screenshot by CNET

The yellow words in today’s Connections

The theme is interval. The four answers are patch, period, spell and stretch.

The green words in today’s Connections

The theme is react to a stubbed toe. The four answers are curse, hop, wince and yell.

The blue words in today’s Connections

The theme is guitar effects pedals. The four answers are delay, reverb, wah and whammy.

The purple words in today’s Connections

The theme is ____ check. The four answers are blank, coat, rain and reality.

Toughest Connections puzzles

We’ve made a note of some of the toughest Connections puzzles so far. Maybe they’ll help you see patterns in future puzzles.

#5: Included “things you can set,” such as mood, record, table and volleyball.

#4: Included “one in a dozen,” such as egg, juror, month and rose.

#3: Included “streets on screen,” such as Elm, Fear, Jump and Sesame.

#2: Included “power ___” such as nap, plant, Ranger and trip.

#1: Included “things that can run,” such as candidate, faucet, mascara and nose.

Source link

Tech

Growing a Giant Crystal From Sugar Alone Takes Patience and a Few Kitchen Basics

Published

on

Growing Giant Sugar Crystal Experiment
Sugar has more power than most people realize. Chase at Crystalverse demonstrates just how far a single bag from the grocery store can go when used correctly: what begins as regular grains becomes a single brilliant crystal large enough to hold in your palm and appreciate from all sides.



Chase set out to solve the common problem with sugar crystals, which tend to clump into messy groups instead of making a single clear piece. His method is based on coarse sugar throughout and a cautious seed stage that requires several attempts before success. The experiment takes approximately a week, but each stage builds on the previous one in such a way that you’ll want to check in on it every day.



Gather the ingredients first; coarse sugar is ideal for both the solution and the starter seeds. You'll also need water, a digital scale, an electric burner or stove, a 600-milliliter beaker or other heat-safe container, 0.14-millimeter-thick clear nylon fishing line, precision tweezers, a small plastic petri dish measuring 90 by 15 millimeters, a few pipettes, and one large jar with a lid or plastic wrap for covering. Nothing special appears on the list, making the entire setup quite manageable.

Start by bringing 100 milliliters of water to a gentle heat for each batch you plan to make, then stir in 225 grams of coarse sugar until every last grain has dissolved and the liquid runs completely clear. What you end up with is a supersaturated solution, essentially water that is holding far more sugar than it normally would at room temperature. Take it off the heat straight away to avoid any burning or discoloration, then cover it and leave it to cool slowly so a crust doesn’t form on the surface. Make enough to fill both the small dish you will use for the seed crystals and the larger jar where the main growth will happen.
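Since the same 225-grams-per-100-milliliters ratio feeds both the petri dish and the growth jar, it helps to work out total quantities up front. A minimal sketch of the scaling, with batch counts that are illustrative rather than from Chase's write-up:

```python
# Batch calculator for the syrup ratio described above:
# 225 g of coarse sugar dissolved into every 100 ml of water.
SUGAR_G_PER_BATCH = 225    # grams of coarse sugar per batch
WATER_ML_PER_BATCH = 100   # milliliters of water per batch

def syrup_for(batches: int) -> dict:
    """Total sugar (g) and water (ml) for a given number of batches."""
    return {
        "sugar_g": batches * SUGAR_G_PER_BATCH,
        "water_ml": batches * WATER_ML_PER_BATCH,
    }

# e.g. one batch for the petri dish plus four for the growth jar
print(syrup_for(5))  # {'sugar_g': 1125, 'water_ml': 500}
```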

Once the solution is ready, proceed to the seed stage. Pour a shallow layer of cooled syrup into the petri dish. Sprinkle a few coarse sugar grains across the surface. Cut a length of fishing line and lower one end into the liquid, making sure it touches the bottom rather than floating free. Secure the top end, perhaps by taping it to a stick set across the dish. Set aside the dish in a quiet area away from fans and drafts.

Check the dish the following morning and you should find small crystals forming around the line and along the bottom. Often a whole cluster attaches itself to the line, which can work for chunkier results but won’t give you a true single crystal. Carefully lift the line out with tweezers and take a close look. If things look crowded or messy, rinse the line off, mix up a fresh batch of syrup, and start the dish again. Keep repeating until you have one or two clean crystals sitting firmly on the line with nothing else crowding around them. It takes a little patience, but getting this part right is what separates a proper single crystal from a rough chunk of rock candy.

Once you have the successful seed, transfer it to the large jar and fill it with the remaining cooled syrup. Tie the fishing line to a pencil or stick set across the jar’s opening, allowing the seed crystal to hang freely in the liquid without contacting the sides or bottom. Cover the jar loosely with plastic wrap or a cover to reduce evaporation while allowing slow changes inside. Place the arrangement in a stable, shaded location with consistent temperature and few disruptions.

Seven days later, the result is ready: simply lift the line out and let the extra liquid drip off, then dry the crystal with a paper towel. What you hold now is far larger than any single grain from the original bag, showing off the geometric structure that sugar naturally forms. The crystal is edible but dissolves easily in water or humid air, so keep it in a dry place if you want it to stay intact.

Source link

Tech

Defense Department says Anthropic poses ‘unacceptable risk’ to national security

Published

on

In a court filing submitted in response to the AI company's lawsuit, the Department of Defense said that giving Anthropic continued access to its warfighting infrastructure would "introduce unacceptable risk" to its supply chains. If you'll recall, Anthropic sued the government to challenge the supply chain risk designation it received for refusing to allow its model to be used for mass surveillance and the development of autonomous weapons.

In its filing, the department explained that its secretary, Pete Hegseth, had a provision incorporated into AI service contracts allowing the agency to use the companies' technologies for any lawful purpose. Anthropic refused those terms, and its refusal apparently caused the Pentagon to question whether the company truly was a "trusted partner" it could work with on "highly sensitive" initiatives. "After all, AI systems are acutely vulnerable to manipulation, and Anthropic could attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations, if Anthropic — in its discretion — feels that its corporate 'red lines' are being crossed," the Pentagon wrote in its filing. "DoW deemed that an unacceptable risk to national security," it added, referring to itself as the Department of War, the Trump administration's preferred name for the agency.

It was due to those concerns that President Trump ordered federal agencies to stop using Anthropic's technology, the filing reads. The company is asking the court to issue a preliminary injunction pausing the ban while it challenges its supply chain risk designation in court. While Anthropic's clients could continue working with it on non-defense-related projects, the company says the label could cost it billions of dollars in revenue. It's not clear whether Anthropic is still trying to reach a new deal with the government, as was reported before it filed its lawsuit. As The New York Times notes, Microsoft, Google and OpenAI have since filed friend-of-the-court briefs in support of Anthropic.

Source link


Tech

Google Home update soups up Gemini and fixes frustrating papercuts

Google is rolling out a fresh update for the Google Home app that makes Gemini a lot more useful in day-to-day use, while also addressing several small but frustrating issues that have been holding it back.

What’s new with Gemini for Home?

One of the biggest upgrades with this update is speed. Google says common smart home commands like turning lights on or off can now be up to 40 percent faster. That should make a noticeable difference for those who rely on voice controls throughout the day. Gemini’s Live Translation feature is also quicker and more responsive, and now supports Canadian French, taking the total number of supported languages to 30.

The update also focuses heavily on making responses less chatty. Instead of long confirmations, Gemini now keeps things short and direct. So a command like setting an alarm gets a simple “Alarm set for 9 AM” instead of a full sentence. It is a small change, but one that should make interactions feel smoother.

What else is changing with the latest update?

On the features front, Gemini is getting smarter with alarms and timers. Users can now set them based on real-world events, manage multiple actions in one go, and even ask about the original timer duration. Recurring alarms and proper snooze controls have also been fixed, addressing one of the main annoyances users had with Gemini for Home.

There are improvements beyond voice, too. Google is expanding Gemini for Home to more countries and introducing new automation options in the Google Home app. These include triggers tied to appliances like ovens and new lighting effects such as wake and sleep modes.

Individually, these updates are minor, but together they should make Gemini feel faster, more responsive, and much more reliable than before. The new release follows an update from earlier this month that also brought performance improvements and bug fixes for Gemini’s smart home voice controls.


Tech

Prime Video Ultra Launches at $4.99 Per Month as Amazon Rebrands Ad Free Tier and the Streaming Price Creep Continues

The streaming wars never slow down. They just find new ways to charge admission.

Starting April 10, 2026, Amazon will rename its existing Prime Video Ad Free tier as Prime Video Ultra, priced at $4.99 per month in the United States. The new tier adds several upgrades that Amazon clearly hopes will justify the new branding and the monthly fee: up to five concurrent streams instead of three, as many as 100 downloads instead of 25, and exclusive access to 4K and UHD streaming.

Amazon frames the change as a necessary step to support the cost of premium streaming. According to the company, delivering ad free video with higher-end features requires significant investment, and the new structure brings Prime Video more in line with the pricing models used by other major streaming services. In other words, welcome to the club.

For Prime members, the baseline Prime Video benefit remains intact. Subscribers will still receive HD and HDR streaming as part of the standard Prime membership, and Amazon says Dolby Vision support will now be included at no additional cost. The new Ultra tier simply stacks additional perks on top of the existing service for viewers who want more streams, more downloads, and access to the highest video resolution.

All of this arrives against a particularly chaotic backdrop in the streaming business. The recent bidding war involving Netflix and Paramount over the future of Warner Bros Discovery, CNN, and HBO MAX has already reshaped the landscape, with the Ellisons emerging victorious and the industry bracing for the fallout. One thing seems certain as the dust settles: none of these services are getting cheaper.

Amazon may have deeper pockets than most of its competitors, but it is not immune to the math. Producing blockbuster series and films at scale costs real money, and those glossy originals are not paying for themselves. Renaming the ad free tier Prime Video Ultra may sound like a cosmetic change, but the message behind it is clearer than ever.

The era of cheap streaming is over. The meter is running.

Amazon’s new Prime Video Ultra tier doesn’t replace the core Prime Video benefit included with a Prime membership. Instead, it layers premium streaming features on top of the existing service for viewers who want ad free playback, higher video resolution, and more flexibility for downloads and concurrent streams.

The chart below breaks down what stays included with Prime and what the new $4.99 per month Ultra tier adds starting April 10, 2026.

| Feature / Option | Prime Video Benefit (Included with Prime Membership) | Prime Video Ultra Subscription |
| --- | --- | --- |
| Content library | Thousands of premium movies, TV series, and live sports including NFL, NBA, NASCAR, and The Masters | Same content library |
| HD (High Definition) | ✔ | ✔ |
| HDR (High Dynamic Range) | ✔ | ✔ |
| Dolby Vision | ✔ Newly available | ✔ |
| Offline downloads | Up to 50 downloads for offline viewing (increased from 25) | Up to 100 downloads for offline viewing |
| Concurrent streams | Up to 4 simultaneous streams (increased from 3) | Up to 5 simultaneous streams |
| Ad-free viewing | Not included | ✔ |
| 4K UHD video | Not included | ✔ |
| Dolby Atmos audio | Not included | ✔ |
| Price | Included with Prime membership ($14.99 per month or $139 per year) | $4.99 per month starting April 10; Prime or Prime Video subscription required. Annual option $45.99 per year (about 23% savings vs. monthly) |
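For readers checking the math, the advertised annual discount pencils out. A quick sketch using the prices quoted above:

```python
monthly_price = 4.99  # Prime Video Ultra, per month
annual_price = 45.99  # Prime Video Ultra annual option

# Paying month-to-month for a full year:
cost_if_paid_monthly = monthly_price * 12  # $59.88 per year

# Fraction saved by choosing the annual plan instead
savings_fraction = 1 - annual_price / cost_if_paid_monthly
print(f"Annual plan saves {savings_fraction:.1%}")  # about 23%
```

That works out to roughly 23.2%, matching Amazon's "about 23% savings" claim.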

Access to Prime Originals, Movies, and Live Sports

Whether you stick with the Prime Video benefit included with a Prime membership or upgrade to Prime Video Ultra, the underlying content library does not change. Both options provide access to Amazon’s full catalog of Amazon MGM Studios originals, licensed films and series, and exclusive live sports programming.

That lineup includes popular Prime Original series such as Fallout, Reacher, The Boys, The Lord of the Rings: The Rings of Power, and The Summer I Turned Pretty. Amazon’s growing slate of original films is also included, with titles such as Heads of State, Red One, Road House, and The Accountant 2.

Live sports remain a major draw for the platform as well. Prime Video carries exclusive coverage and events tied to the NFL, NBA, WNBA, NASCAR, NWSL, and The Masters, alongside additional licensed programming and films.

In other words, Prime Video Ultra does not unlock additional content. The catalog remains the same. What the Ultra tier adds are premium viewing features such as ad free playback, higher video resolution, Dolby Atmos surround sound, and expanded streaming and download limits.

The Fine Print: What Prime Video Ultra Still Won’t Do

Before anyone assumes Prime Video Ultra is a magic “no ads, everything in 4K, watch it anywhere forever” button, there are a few realities worth noting.

First, Prime Video Ultra is currently limited to customers in the United States. If you’re outside the U.S., the “Ultra” experience will have to wait.

Second, ad free does not mean ad free everywhere. Live programming such as sports broadcasts, certain licensed content, and third party channel subscriptions may still contain advertising. That’s the nature of live television and licensing deals. Amazon can remove ads from its own playback environment, but it can’t rewrite every contract in the sports world.

Third, the improved download and concurrent stream limits apply to your entire account, not to each individual profile. So if five people in the household are streaming at once or loading devices with downloads before a trip, those limits are shared across everyone using the account. There may also be additional restrictions depending on the specific title, device, or content provider.

Finally, the premium tech perks come with the usual fine print. 4K UHD video, Dolby Vision, and Dolby Atmos are only available on supported titles and require compatible devices and enough internet bandwidth to actually deliver them. Not every movie or show in the catalog is available in every format.

The Bottom Line

Amazon’s Prime Video Ultra tier is less about new content and more about unlocking the premium viewing and audio experience. For $4.99 per month extra, subscribers get ad free playback, expanded streaming and download limits, and access to higher resolution 4K UHD video, along with Dolby Vision and Dolby Atmos surround sound.

Prime members who stick with the included Prime Video benefit will still get the same catalog of movies, series, and live sports, but without the highest-resolution formats or ad-free viewing. The base tier does, however, gain Dolby Vision, which wasn't included before, at no extra charge.

In the bigger picture, this move reflects where the streaming business is heading. As studios spend billions on original content and compete for sports rights, subscription tiers are becoming more segmented and more expensive. Prime Video Ultra is simply Amazon’s latest reminder that the era of cheap streaming is over.

Sign up for Amazon Prime.


Tech

Today’s NYT Strands Hints, Answer and Help for March 18 #745

Looking for the most recent Strands answer? Click here for our daily Strands hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle, Connections and Connections: Sports Edition puzzles.


Today’s NYT Strands puzzle is kind of bizarre. Even after I had found some of the answers, the theme didn’t click in my brain until I was almost done with the puzzle. And some of the answers are difficult to unscramble, so if you need hints and answers, read on.

I go into depth about the rules for Strands in this story.

If you’re looking for today’s Wordle, Connections and Mini Crossword answers, you can visit CNET’s NYT puzzle hints page.

Read more: NYT Connections Turns 1: These Are the 5 Toughest Puzzles So Far

Hint for today’s Strands puzzle

Today’s Strands theme is: It follows.

If that doesn’t help you, here’s a clue: Not death…

Clue words to unlock in-game hints

Your goal is to find hidden words that fit the puzzle’s theme. If you’re stuck, find any words you can. Every time you find three words of four letters or more, Strands will reveal one of the theme words. These are the words I used to get those hints, but any words of four or more letters that you find will work:

  • LEFT, COLE, HOLE, LACK, BILE, LEACH, SOLE, LOSE, LIFE, SEER, STEEL, STERN, FAIL

Answers for today’s Strands puzzle

These are the answers that tie into the theme. The goal of the puzzle is to find them all, including the spangram, a theme word that reaches from one side of the puzzle to the other. When you have all of them (I originally thought there were always eight but learned that the number can vary), every letter on the board will be used. Here are the nonspangram answers:

  • COACH, HACK, BLOOD, CYCLE, STYLE, LESSON, PRESERVER. (All words that can follow the word “LIFE.”)

Today’s Strands spangram

The completed NYT Strands puzzle for March 18, 2026. (NYT/Screenshot by CNET)

Today’s Strands spangram is AFTERLIFE. To find it, start with the A that is the furthest-left letter on the top row, and wind down.

Toughest Strands puzzles

Here are some of the Strands topics I’ve found to be the toughest.

#1: Dated slang. Maybe you didn’t even use this lingo when it was cool. Toughest word: PHAT.

#2: Thar she blows! I guess marine biologists might ace this one. Toughest word: BALEEN or RIGHT. 

#3: Off the hook. Again, it helps to know a lot about sea creatures. Sorry, Charlie. Toughest word: BIGEYE or SKIPJACK.


Tech

A Quantum Leap for the Turing Award


Today it’s widely acknowledged that the future of computing will involve the quantum realm. Companies like Google, Microsoft, IBM, and a few well-funded startups are frantically building quantum computers and routinely claiming advances that seem to bring this exotic, world-changing technology within reach. In 1979 all of this was unthinkable. But that summer, two scientists met in the Atlantic Ocean off the coast of Puerto Rico, and their aquatic conversation led to a body of work that created quantum information theory. In a larger sense, their contributions helped bring computer science into the quantum age.

Those water-logged scientists, Charles Bennett and Gilles Brassard, are now the latest recipients of the ACM A.M. Turing Award, the Nobel Prize of the field.

Until that 1979 meeting, there had been a disconnect between information science and physics. The latter field had experienced a disruption in the early 20th century when physicists discovered quantum mechanics, a deeper explanation of how the universe operated that superseded the classical physics of Isaac Newton. Computer science, however, didn’t account for the quantum world, except for having to deal with its effects on tiny chips, where the behavior of electrons was relevant.

“In the 1950s through the 1980s people thought of quantum effects as occurring in very small things and as a source of noise—you had to understand quantum theory to build transistors,” explains Bennett. “People thought of quantum mechanics as a nuisance.” He and Brassard discovered methods—like quantum coin-tossing and quantum entanglement—that turned the perceived handicaps of quantum reality into a powerful tool.

At the time of their meeting, Bennett was at a career crossroads; he’d joined IBM in 1973, but had taken a years-long break from academic publishing. One source of continuing fascination was an idea shared by a college classmate, Stephen Wiesner—that using a quantum form of cryptography could enable digital money that could not be counterfeited. (Yep, Wiesner envisioned cryptocurrency in the late 1960s!) At the 1979 conference, Bennett saw that a cryptographer named Brassard was in attendance—he had just completed a dissertation on public-key crypto—and located him offshore.

“So there I was swimming at the beach when a complete stranger came up to me and started telling me that a friend of his found that we can use quantum mechanics to make unforgeable bank notes out of nowhere,” Brassard tells me. “If I had been on firm ground, I would have run for my life, but I was trapped in the ocean, so I listened politely.” Though Brassard had no previous interest in physics, he was intrigued by the approach, and the pair eventually published a protocol called BB84, essentially creating an alternative to classic public-key cryptography based on what would become quantum information theory. Suddenly, the world of the quantum became a source of solutions—if scientists could invent the mechanisms to make it happen. As Yannis Ioannidis—president of ACM, which bestows the Turing Award—put it in a statement, “Bennett and Brassard fundamentally changed our understanding of information itself.”
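The core idea behind BB84 can be illustrated with a toy simulation. This is a hedged sketch, not the authors' protocol implementation: it models an idealized, noiseless channel with no eavesdropper, and the function name and structure are illustrative only.

```python
import random

def bb84_sift(n_bits: int, seed: int = 0) -> list:
    """Toy BB84 key exchange over a perfect channel with no eavesdropper.

    Alice encodes each random bit in a randomly chosen basis ('+' rectilinear
    or 'x' diagonal); Bob measures each photon in his own random basis. They
    then publicly compare bases and keep only the positions where the bases
    matched (the "sifting" step), yielding a shared key of roughly n_bits/2.
    """
    rng = random.Random(seed)
    alice_bits = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.choice("+x") for _ in range(n_bits)]
    bob_bases = [rng.choice("+x") for _ in range(n_bits)]

    key = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if a_basis == b_basis:
            # Matching bases: Bob's measurement recovers Alice's bit exactly.
            key.append(bit)
        # Mismatched bases give a random result, so that position is discarded.
    return key

shared_key = bb84_sift(32)
print(len(shared_key), shared_key)
```

The security argument (which this sketch omits) is that an eavesdropper who measures in the wrong basis disturbs the photons, introducing detectable errors when Alice and Bob compare a sample of their sifted key.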

Both scientists take pains to say that their original work did not lead directly to the current scramble to build quantum computers. Bennett notes that in a 1981 conference at MIT, legendary physicist Richard Feynman “made the point that, since nature is quantum, probably some computational jobs would need to be done by a quantum computer.” He also credits physicist David Deutsch for key ideas about quantum computers. Bennett and Brassard became part of that effort.

“Quantum computing was invented independently from us, but then we jumped in,” says Brassard. “I was the first person to design a quantum circuit to do quantum teleportation.” Brassard and Bennett’s work on teleportation, while still in an experimental stage, is now part of the quantum lore. Brassard has said that “one day, it will fuel the quantum internet.”

