Tech

Today’s NYT Mini Crossword Answers for March 11


Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Need some help with today’s Mini Crossword? I thought it was a bit tricky. 1-Down is one of those old-fashioned comic-book sounds that I had to remember how to spell correctly. Read on for all the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.


Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.


The completed NYT Mini Crossword puzzle for March 11, 2026.


NYT/Screenshot by CNET

Mini across clues and answers

1A clue: Study of the human mind, informally
Answer: PSYCH

6A clue: Common fixture in a gym bathroom
Answer: SCALE

7A clue: Kinda boring
Answer: HOHUM


8A clue: Like a commenter without a username, for short
Answer: ANON

9A clue: “All good between us?”
Answer: WEOK

Mini down clues and answers

1D clue: Old-fashioned “Yeah, right!”
Answer: PSHAW

2D clue: Coffeehouse pastry
Answer: SCONE


3D clue: Google alternative
Answer: YAHOO

4D clue: Sound of a dull thump
Answer: CLUNK

5D clue: Line on the bottom of a pant leg
Answer: HEM


Listeners rated a Chinese startup’s AI voices more realistic and trustworthy than those from Microsoft, Google, and Amazon



A new global study suggests people stop trusting AI voices the moment they realize the voice isn’t human, which creates a big problem for companies that use synthetic voices in customer service and other public-facing systems.


After Outages, Amazon To Make Senior Engineers Sign Off On AI-Assisted Changes


An anonymous reader quotes a report from the Financial Times: Amazon’s ecommerce business has summoned a large group of engineers to a meeting on Tuesday for a “deep dive” into a spate of outages, including incidents tied to the use of AI coding tools. The online retail giant said there had been a “trend of incidents” in recent months, characterized by a “high blast radius” and “Gen-AI assisted changes” among other factors, according to a briefing note for the meeting seen by the FT. Under “contributing factors” the note included “novel GenAI usage for which best practices and safeguards are not yet fully established.”

“Folks, as you likely know, the availability of the site and related infrastructure has not been good recently,” Dave Treadwell, a senior vice-president at the group, told employees in an email, also seen by the FT. The note ahead of Tuesday’s meeting did not specify which particular incidents the group planned to discuss. […] Treadwell, a former Microsoft engineering executive, told employees that Amazon would focus its weekly “This Week in Stores Tech” (TWiST) meeting on a “deep dive into some of the issues that got us here as well as some short immediate term initiatives” the group hopes will limit future outages.

He asked staff to attend the meeting, which is normally optional. Junior and mid-level engineers will now require more senior engineers to sign off on any AI-assisted changes, Treadwell added. Amazon said the review of website availability was “part of normal business” and that it aims for continual improvement. “TWiST is our regular weekly operations meeting with a specific group of retail technology leaders and teams where we review operational performance across our store,” the company said.


Garmin and Peloton devices now properly sync in both directions, giving you a more accurate idea of your daily fitness



  • Peloton activities can now be synced with the Garmin app
  • For months, you could only sync data the other way
  • That should give you a better understanding of your health and fitness

If you use both Garmin and Peloton devices during your fitness activities, we’ve got some good news: the two companies’ products now sync in both directions, making it far easier to log your workouts and keep track of your pursuits.

That means if you record a workout with one of the best Garmin watches, it’ll sync to the Peloton app. And if you log a session on a Peloton device, it’ll arrive in your Garmin app too. Whatever your equipment setup, the two systems should now communicate properly.


Google brings Gemini in Chrome to India


Google announced Wednesday that it is bringing Gemini integration for Chrome to new regions, including India, Canada, and New Zealand. The rollout lets users access Gemini in Chrome through a sidebar on desktop, where they can ask Google’s AI chatbot questions about the content on screen, pull information from their Gmail, Keep, Drive, and YouTube, and compare tab contents.

As part of the new rollout, Gemini will also support languages including Hindi, Bengali, Gujarati, Kannada, Malayalam, Marathi, Telugu, and Tamil, in addition to English and Chrome’s other newly supported languages.


Google first introduced Gemini in Chrome in the U.S. through a floating window last September. The company introduced sidebar-based Gemini tools earlier this year.

Users who get access to this feature will see an “Ask Gemini” icon on the tab bar, which they can activate for any tab and ask questions, summarize content, or create a quiz to understand a topic. Google said that Gemini can also work across tabs. This means you can mention multiple tabs to get an answer, which is helpful when you are comparing items to shop for or tickets to buy for a trip.

Gemini can also connect with different tools, get your information, and give more personalized answers. It can connect to Gmail, Maps, Calendar, YouTube, and other Google apps for contextual answers, too. For instance, you can directly compose an email using Gemini in the sidebar on Chrome and send it to someone without leaving the window. You can also ask Gemini to summarize a YouTube video and list the main points alongside timestamp markers. The assistant can also schedule meetings or brief you about your day.


Users can also use Google’s Nano Banana 2 generative AI tool directly in Gemini for Chrome to transform images. For instance, you can upload a photo of your room while buying furniture and ask the assistant to transform the image to see how an item would look in the room.


The company said that, along with desktop, it is also rolling out Gemini support in Chrome for iOS in India. When available, the option will show up in the address bar through a page tools icon.


Google in January launched expanded agentic capabilities, which can take over your browser and complete tasks on your behalf, for U.S.-based AI Pro and AI Ultra users. The company is keeping this function out of the latest expansion for users in India, New Zealand, and Canada.


Nvidia GDC 2026 roundup: More path-traced games, DLSS 4.5 debut titles, and RTX mega foliage



Users with high-refresh-rate monitors can participate in an opt-in beta for DLSS 6x multi-frame generation and dynamic mode starting March 31. The date lands slightly ahead of Nvidia’s previously stated April launch, which may be when the two features exit beta.

Nothing Phone (4a) and Phone (4a) Pro Compared: Key Differences Explained


Nothing has announced two new mid-range smartphones, the Nothing Phone (4a) and the Nothing Phone (4a) Pro. The latest devices continue the company’s design language, and the brand continues to stand out in the competitive smartphone industry. Although the Pro version is similar to the Phone (4a), it offers premium upgrades in performance, design, and display. Below is a detailed comparison of the two versions in terms of design, display, camera, performance, and battery.

Price and Availability

The prices of the Nothing Phone (4a) and the Nothing Phone (4a) Pro reflect the difference between the two devices in features and upgrades. The standard Nothing Phone (4a) starts at ₹31,999 for the 8GB RAM and 128GB internal storage variant, and is also offered in upgraded 8GB + 256GB and 12GB + 256GB configurations.

The Nothing Phone (4a) Pro starts at ₹39,999 and offers the same internal memory options with some premium upgrades. The devices are set to go on sale starting March 13, 2026.

In terms of colours, the Phone (4a) will be available in black, white, blue, and pink, while the Pro model will come in black, silver, and pink.


Design & Display

Nothing Phone 4a in different colors

Nothing continues to focus on its unique design language with both models. The devices feature the brand’s signature transparent back design, which highlights internal elements and gives the phones a distinctive appearance.

In the build department, the Nothing Phone (4a) Pro leads the way with its aluminum unibody design, giving it a premium, robust look and feel. The Nothing Phone (4a), meanwhile, retains its transparent layered back panel so you can see the screws, compartments, and internal metalwork.

When it comes to the display, there are a couple of differences between the two devices. The Nothing Phone (4a) has a 6.78-inch AMOLED screen with 1.5K resolution and a 120Hz refresh rate.

The Nothing Phone (4a) Pro has a slightly larger 6.83-inch AMOLED screen. It also has a higher peak brightness of up to 5,000 nits, compared to the 4,500 nits of the Nothing Phone (4a).

Cameras

Closeup of the cameras on the nothing phone 4a pro

The Nothing Phone (4a) and Phone (4a) Pro have a decent camera setup that will surely satisfy photography enthusiasts. Both phones feature a 50MP primary camera with optical stabilization to keep your snaps locked in perfectly.

The Phone (4a) Pro takes it up a notch with its camera, which uses a Sony LYT700C 50MP sensor, along with a 50MP telephoto lens with 3.5x optical zoom and 140x digital zoom. The regular Phone (4a) also comes with telephoto capabilities, but it tops out at 70x digital zoom. Both phones also feature ultra-wide cameras and 32MP front-facing cameras for selfies and video calls.


Processor & Battery

Different colors of the nothing phone 4a

Both Nothing phones get a midrange processor, but a slightly different one. The standard 4a houses the Snapdragon 7s Gen 4 processor, while the bigger brother gets the Snapdragon 7 Gen 4 for faster performance, thanks to higher clock speeds. After a bit of controversy last time, the brand has bundled UFS 3.1 storage.

Both phones have the same battery size: 5,400mAh, a slight increase over the previous generation. Both phones also feature 50W fast charging, which will help reduce charging time when you have access to a compatible charger. However, you should be able to enjoy all-day battery life with regular use, partly due to the battery size and partly to the hardware itself.

Which One Should You Choose?

If you are trying to decide between the two devices, the Nothing Phone (4a) offers great value for money. It delivers most of the key features, including the signature design, capable cameras, and smooth performance, while keeping the price relatively low.

The Phone (4a) Pro, on the other hand, targets users who want extra upgrades. With its premium metal build, improved performance, narrower bezels, and the advanced Glyph Matrix interface, it provides a more polished smartphone experience. Your final choice will largely depend on how much you are willing to spend.


Social Security watchdog investigating claims that DOGE engineer copied its databases


The inspector general’s office of the Social Security Administration is investigating allegations of a security breach by a member of the so-called Department of Government Efficiency operation spearheaded by Elon Musk. A whistleblower has claimed that a former software engineer from DOGE said he possessed two databases from the SSA, “Numident” and the “Master Death File.” The person reportedly asked for help transferring the databases from a thumb drive “to his personal computer so that he could ‘sanitize’ the data before using it at [the company],” an unnamed government contractor where he is currently employed. Those databases include personal information about more than 500 million living and deceased Americans.

The Washington Post reported that the whistleblower complaint was filed with the inspector general in January. “When The Post contacted the agency and the company in January, both said they had not heard of the complaint. Both said they subsequently looked into the allegations and did not find evidence to confirm the claims,” the publication said. It is unclear why the complaint is now being investigated and neither party offered comment this week for The Post‘s article. The SSA watchdog informed both members of Congress and the Government Accountability Office of its investigation.

These allegations follow a different whistleblower complaint filed last August about DOGE access and mishandling of data from the SSA. Charles Borges, former chief data officer at the agency, claimed that a SSA database was stored in an unsecured cloud environment. “This is absolutely the worst-case scenario,” Borges told The Post of the latest claims. “There could be one or a million copies of it, and we will never know now.”


Microsoft’s return-to-office policy creates a return to slower commutes, traffic analysis shows


The blur of the morning commute: Sunrise and car lights during the trip across Seattle’s SR 520. (GeekWire File Photo / Kurt Schlosser)

Seattle-area Microsoft employees who are showing up in the office three days a week are also showing up on roadways and impacting commuters’ speeds, according to new data from traffic analysis company Inrix.

Inrix measured travel speeds on eastbound and westbound SR 520 and southbound and northbound I-405 during the weeks of Feb. 23 and March 2. Many of Microsoft’s more than 50,000 employees in the region rely on the roadways and bridges connecting Seattle and the Eastside to the company’s headquarters campus in Redmond, Wash.

The data shows speeds on 520 dropped across all days during the first week, with Tuesday, Wednesday, and Thursday seeing the slowest travel, at just over 30 mph.

Morning commute speeds fell by as much as 35% between Tukwila and Bellevue and as much as 25% between Lynnwood and Bellevue. The evening commute saw speeds drop by as much as 27% between Bellevue and Tukwila on Friday, while speeds fell 21% northbound between Bellevue and Lynnwood, Inrix reported.

Microsoft isn’t dictating from above which three days people will need to be in the office. Specifics are left to individual teams and managers. Some groups may require more than three days, and certain customer-facing roles like field sales and consultants are exempt.


The region’s roadways could get some relief when Sound Transit’s Crosslake Connection opens March 28, finally linking Seattle and the Eastside by light rail across Lake Washington — connecting downtown Seattle to downtown Bellevue and the Redmond Technology station at Microsoft headquarters.

Previously: Microsoft’s new RTO policy starts Feb. 23, bringing Seattle-area workers back 3 days a week


Google starts rolling out Gemini in Chrome to users in Canada, India and New Zealand


At the start of the year, Google brought a host of new Gemini-powered features, including built-in Nano Banana image generation, to Chrome. After debuting in the United States, those features are now making their way to Chrome users in Canada, India and New Zealand, with support for 50 additional languages in tow. Among the new languages Gemini in Chrome can now converse in are French, Gujarati, Hindi and Spanish.

To try out Gemini in Chrome, tap the sparkle icon at the top right of the interface. This will open the sidebar interface Google introduced in January. From there, you can chat with the company’s Gemini chatbot without the need to switch tabs. From the sidebar, you can also access Google’s in-house image generator. Additionally, Gemini in Chrome offers integrations with Gmail, Maps, Calendar, YouTube and other Google apps. If you live outside Canada, India or New Zealand, Google says it will make Gemini in Chrome available in more countries and languages throughout the rest of 2026. Oh, and if you don’t want to use Gemini in Chrome, you can right-click the sparkle icon and select Unpin to never see it again.


OpenAI upgrades ChatGPT with interactive learning tools as lawsuits and Pentagon backlash mount


The past ten days have been among the most consequential in OpenAI’s history, with developments stacking up across product, politics, personnel, and the courts. Here is what happened — and what it means.

OpenAI on Tuesday launched a set of interactive visual tools inside ChatGPT that let users manipulate mathematical and scientific formulas in real time — a genuinely impressive education feature that landed in the middle of the most turbulent stretch of the company’s corporate life.

The new experience covers more than 70 core math and science concepts, from the Pythagorean theorem to Ohm’s law to compound interest. When a user asks ChatGPT to explain one of these topics, the chatbot now generates a dynamic module with adjustable sliders alongside its written response. Drag a variable, and the equations, graphs, and diagrams update instantly. The feature is available today to all logged-in users worldwide, across every plan, including free.

OpenAI tells VentureBeat that 140 million people already use ChatGPT each week for math and science learning. That is a staggering number. It also means the feature arrives with unusually high stakes: since late February, OpenAI has been sued by the family of a 12-year-old mass shooting victim who alleges the company knew the attacker was planning violence through ChatGPT; lost its head of robotics over a Pentagon deal that triggered a near-300% spike in app uninstalls; watched more than 30 of its own employees file a legal brief supporting rival Anthropic against the U.S. government; and scrapped plans with Oracle to expand a flagship data center in Texas. Its chief competitor’s app, Claude, now sits atop the App Store.


The interactive learning tools are, on their merits, a strong product. They also arrive at a company fighting on every front simultaneously — and burning through an estimated $15 billion in cash this year to do it.


ChatGPT’s new interactive learning module for Ohm’s Law, with adjustable sliders for current and resistance and a real-time circuit visualization. (Credit: OpenAI)

How the new ChatGPT learning tools actually work

The feature is built on a simple pedagogical premise: students understand formulas better when they can see what happens as the inputs change.

Ask ChatGPT “help me understand the Pythagorean theorem,” and the system now responds with a written explanation alongside an interactive panel. On the left, the formula $a^2 + b^2 = c^2$ appears in clean notation with sliders for sides $a$ and $b$. On the right, a geometric visualization — a right triangle with squares drawn on each side — reshapes dynamically as you adjust the values. The computed hypotenuse updates in real time. The same treatment applies across topics: voltage and resistance for Ohm’s law, pressure and temperature for the ideal gas equation, radius and height for cone volume.
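The behavior described above amounts to recomputing the formula’s outputs every time a slider moves. Here is a minimal Python sketch of that computation; the function and field names are illustrative, not OpenAI’s actual implementation:

```python
import math

def pythagorean_module(a: float, b: float) -> dict:
    """Recompute the module's outputs for slider values a and b.

    Mirrors the interaction described above: adjust a side length and
    the computed hypotenuse (plus the areas of the squares drawn on
    each side) update immediately.
    """
    c = math.hypot(a, b)  # c = sqrt(a^2 + b^2)
    return {
        "a": a,
        "b": b,
        "c": c,
        "squares": {"a^2": a * a, "b^2": b * b, "c^2": c * c},
    }

# Dragging the slider for side a from 3 to 6 while b stays at 4:
for a in (3.0, 6.0):
    state = pythagorean_module(a, 4.0)
    print(f"a={state['a']}, b={state['b']}, c={state['c']:.2f}")
```

Each drag of a slider simply re-runs this kind of function and redraws the geometry from the returned values.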


OpenAI’s initial roster of more than 70 topics targets high school and introductory college material: binomial squares, Charles’ law, circle equations, Coulomb’s law, cylinder volume, degrees of freedom, exponential decay, Hooke’s law, kinetic energy, the lens equation, linear equations, slope-intercept form, surface area of a sphere, trigonometric angle sum identities, and others.

The company cited research suggesting that “visual, interaction-based learning can lead to stronger conceptual understanding than traditional instruction for many students,” and pointed to a recent Gallup survey in which more than half of U.S. adults said they struggle with math. In early testing, OpenAI said, students reported the modules helped them grasp how variables relate to one another, and parents described using them to work through problems alongside their children.

Anjini Grover, a high school mathematics teacher quoted in OpenAI’s announcement, praised “how strongly this feature emphasizes conceptual understanding.” Raquel Gibson, a high school algebra teacher, called it “a step towards empowering students to independently explore abstract concepts.”

The tools build on ChatGPT’s existing education features — a “study mode” for step-by-step problem solving and a quizzes feature for exam prep — and OpenAI said it plans to expand interactive learning to additional subjects. The company also said it intends to publish research through its NextGenAI initiative and OpenAI Learning Lab to study how AI shapes learning outcomes over time.


An interactive Pythagorean theorem module in ChatGPT, where users can drag sliders to adjust the lengths of a right triangle’s sides and watch the geometry update in real time. (Credit: OpenAI)

A lawsuit alleging OpenAI knew a mass shooter was planning an attack

On the day before OpenAI shipped its education tools, the company was hit with the most serious legal challenge in its history.

On Monday, the mother of 12-year-old Maya Gebala filed a civil lawsuit against OpenAI in B.C. Supreme Court, alleging the company had “specific knowledge of the shooter’s long-range planning of a mass casualty event” through ChatGPT interactions and “took no steps to act upon this knowledge.” Gebala was shot three times during a mass shooting in Tumbler Ridge, British Columbia on February 10 that killed eight people and the 18-year-old attacker. She suffered what the lawsuit describes as a catastrophic traumatic brain injury with permanent cognitive and physical disabilities.

The claim paints a damning picture of how the shooter used ChatGPT. It alleges the platform functioned as a “counsellor, pseudo-therapist, trusted confidante, friend, and ally” and was “intentionally designed to foster psychological dependency between the user and ChatGPT.” The shooter was under 18 when they began using the service, the suit states, and despite OpenAI’s requirement that minors obtain parental consent, the company “took no steps to implement age verification or consent procedures.”


OpenAI has separately acknowledged that it suspended the shooter’s account months before the attack but did not alert Canadian law enforcement — a decision that provoked sharp political fallout. B.C. Premier David Eby said after a virtual meeting with Altman that the CEO agreed to apologize to the people of Tumbler Ridge and work with the provincial government on AI regulation recommendations.

None of the claims have been proven in court. OpenAI has not publicly commented on the lawsuit. But the case poses a question that transcends any single legal proceeding: when an AI company’s own internal systems identify a user as dangerous enough to ban, what obligation does it have to tell someone?

The Pentagon deal that split OpenAI from the inside

The Tumbler Ridge lawsuit is unfolding against the backdrop of an internal crisis that has already cost OpenAI key talent and millions of users.

On February 28, CEO Sam Altman announced a deal giving the Pentagon access to OpenAI’s AI models inside secure government computing systems. The agreement came days after Anthropic CEO Dario Amodei publicly refused similar terms, saying his company could not proceed without assurances against autonomous weapons and mass domestic surveillance. The Pentagon responded by designating Anthropic a “supply-chain risk” — a classification normally reserved for foreign adversaries — and Defense Secretary Pete Hegseth barred any military contractor from conducting commercial activity with the company.


The reaction inside OpenAI was immediate. Caitlin Kalinowski, who joined from Meta in 2024 to build out the company’s robotics hardware division, resigned on principle. “AI has an important role in national security,” she wrote publicly. “But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.” Research scientist Aidan McLaughlin wrote on social media that he “personally don’t think this deal was worth it.” Another employee told CNN that many OpenAI staffers “really respect” Anthropic for walking away.

The reaction outside the company was even more dramatic. ChatGPT uninstalls spiked more than 295% on the day the deal was announced. Anthropic’s Claude surged to No. 1 among free apps on the U.S. Apple App Store and remained there as of this past weekend. Protesters gathered outside OpenAI’s San Francisco headquarters calling for a “QuitGPT” movement.

And in the most extraordinary development, more than 30 OpenAI and Google DeepMind employees — including DeepMind chief scientist Jeff Dean — filed an amicus brief Monday supporting Anthropic’s lawsuit against the Defense Department. The brief argued that the Pentagon’s actions, “if allowed to proceed,” would “undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond.” The employees signed in their personal capacity, but the spectacle of OpenAI’s own researchers rallying to a competitor’s legal defense against the same government their company just partnered with has no real precedent in the industry.

Altman, to his credit, has not pretended the situation is fine. In an internal memo later shared publicly, he admitted the deal “was definitely rushed” and “just looked opportunistic and sloppy.” He revised the contract to include explicit prohibitions against mass domestic surveillance and the use of OpenAI technology on commercially acquired data. He also publicly said that enforcing the supply-chain risk designation against Anthropic “would be very bad for our industry and our country.”


Meanwhile, Anthropic warned in court filings that the Pentagon’s blacklisting could cost it up to $5 billion in lost business — roughly equivalent to its total revenue since commercializing its AI technology in 2023. The company is seeking a temporary court order to continue working with military contractors while the case proceeds.

Why OpenAI’s $15 billion cash burn makes every user count

Strip away the lawsuits and the politics, and OpenAI still has a math problem of its own.

The company is expected to burn through approximately $15 billion in cash this year, up from $9 billion in 2025. It has roughly 910 million weekly users. About 95% of them pay nothing. Subscriptions alone cannot bridge that gap, which is why OpenAI is simultaneously building out an internal advertising infrastructure and leaning on partners like Criteo — and reportedly The Trade Desk — to bring advertisers into ChatGPT.
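To see why subscriptions alone cannot bridge that gap, a quick back-of-envelope calculation using only the figures reported above (the per-user math is illustrative, not OpenAI’s own accounting):

```python
# All inputs are figures from the article.
weekly_users = 910_000_000        # roughly 910 million weekly users
free_share = 0.95                 # about 95% pay nothing
cash_burn = 15_000_000_000        # expected cash burn this year, USD

paying_users = weekly_users * (1 - free_share)
burn_per_paying_user = cash_burn / paying_users

print(f"paying users: ~{paying_users / 1e6:.1f}M")
print(f"burn per paying user: ~${burn_per_paying_user:,.0f}/year")
```

Roughly 45 million paying users would each have to cover on the order of a few hundred dollars a year of burn, which is why the company is building an advertising business on top of subscriptions.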

The company is hiring aggressively for this effort: a monetization infrastructure engineer, an engineering manager, a product designer for the ads experience, a senior manager for ad revenue accounting, and a trust and safety specialist dedicated to the ads product, all based at headquarters in San Francisco. The compensation bands run as high as $385,000 — the kind of investment a company makes when it plans to own its ad stack, not rent it.


But advertising inside ChatGPT introduces a trust problem that compounds the ones OpenAI is already managing. Users who abandoned the app over the Pentagon deal demonstrated that loyalty to ChatGPT is thinner than its market share suggests. Adding commercial messages to a product already under fire for its military ties and its handling of a mass shooter’s data will require OpenAI to navigate user sentiment with a precision it has not recently demonstrated.

The infrastructure picture is equally unsettled. Oracle and OpenAI recently scrapped plans to expand a flagship AI data center in Abilene, Texas, after negotiations stalled over financing and OpenAI’s evolving needs. Meta and Nvidia moved quickly to explore the site — a reminder that in the current AI arms race, any gap in execution gets filled by a competitor within days.

Why interactive learning is OpenAI’s strongest remaining argument

Beyond the product itself, the education feature carries strategic significance for OpenAI.

Education has always been ChatGPT’s cleanest use case — the application where the technology most obviously augments human capability rather than surveilling it, weaponizing it, or monetizing the attention of people who came looking for help. It is the use case that resonates across demographics: students prepping for the SAT, parents revisiting algebra at the kitchen table, adults circling back to concepts they never quite understood. And it is the use case where ChatGPT still holds a clear lead. Google’s Gemini, Anthropic’s Claude, and xAI’s Grok are all investing in education, but none has shipped anything comparable to real-time interactive formula visualization embedded in a conversational interface.


OpenAI acknowledged that the “research landscape on how AI affects learning is still taking shape,” but pointed to its own early findings on study mode as showing “promising early signals.” The company said it will continue working with educators and researchers through its NextGenAI initiative and OpenAI Learning Lab, and plans to publish findings and expand into additional subjects.

Somewhere tonight, a ninth-grader will open ChatGPT, drag a slider, and watch a hypotenuse lengthen across her screen. The Pythagorean theorem will make sense for the first time. She will not know about the Pentagon deal, or the Tumbler Ridge lawsuit, or the 295% spike in uninstalls, or the $15 billion cash burn underwriting the server that just rendered her triangle. She will only know that it worked. For OpenAI, that may have to be enough — for now.
