
Tech

What Students Learned After Chatting With A 1960s Therapist-Bot


One student told her teacher that the chatbot was “gaslighting.” Another student thought the chatbot wasn’t a very good therapist and didn’t help with any of their issues.

More people of all ages are substituting chatbots for licensed mental health professionals, but that’s not what these students were doing. They were talking about ELIZA — a rudimentary therapist chatbot, built in the 1960s by Joseph Weizenbaum, that reflects users’ statements back at them as questions.

In fall 2024, researchers at EdSurge peeked into classrooms to see how teachers were wrangling the AI industrial revolution. One teacher, a middle school educational technology instructor at an independent school in New York City, shared a lesson plan she designed on generative AI. Her goal was to help students understand how chatbots really work so they could program their own.

Compared to the AI chatbots students have used, the ELIZA chatbot was so limited that it frustrated students almost immediately. ELIZA kept prompting them to “tell me more,” as conversations went in circles. And when students tried to insult it, the bot calmly deflected: “We were discussing you, not me.”

The teacher noted that her students felt that “As a ‘therapist’ bot, ELIZA did not make them feel good at all, nor did it help them with any of their issues.” Another student tried to diagnose the problem more precisely: ELIZA sounded human, but it clearly didn’t understand what they were saying.

That frustration was part of the lesson. The teacher wanted her students to critically investigate how chatbots work, so she created a sandbox for them to engage in what learning scientists call productive struggle.

In this research report, I’ll dive into the learning science behind this lesson, exploring how it not only helps students learn more about the not-so-magical mechanics of AI, but also includes emotional intelligence exercises.

The students’ responses tickled me so much, I wanted to give ELIZA a try. Surely, she could help me with my very simple problems.

A test conversation between an EdSurge researcher and a model of ELIZA, the first ever AI chatbot developed by Joseph Weizenbaum in the 1960s. This model chatbot was developed by Norbert Landsteiner and accessed from masswerk.at/elizabot/.

The Learning Science Behind the Lesson

The lesson was part of a broader EdSurge Research project examining how teachers are approaching AI literacy in K-12 classrooms. This teacher was part of an international group of 17 teachers of third through 12th graders. Several of the participants designed and delivered lesson plans as part of the project. This research report describes one lesson a participant designed, what her students learned, and what some of our other participants shared about their students’ perceptions of AI. We’ll end with some practical uses for these insights. There won’t be any more of my tinkering with ELIZA — unless anyone thinks she could help with my “toddler-ing” problem.

Rather than teaching students how to use AI tools, this teacher used a pseudo-therapist to teach how AI works and where it falls short. The approach is packed with skill-building exercises, including one that builds emotional intelligence. This teacher had students use a predictably frustrating chatbot, then program their own chatbot that she knew wouldn’t work without the magic ingredient — that is, the training data. What ensued was middle school students name-calling and insulting the chatbot, then figuring out on their own how chatbots work and don’t work.

This process of encountering a problem, getting frustrated, then figuring it out helps build frustration tolerance. This is the skill that helps students work through difficult or demanding cognitive tasks. Instead of procrastinating or disengaging as they climb the scaffold of difficulty, they learn coping strategies.

Another important skill this lesson teaches is computational thinking. It’s hard to keep up with the pace of tech development. So instead of teaching students how to get the best output from the chatbot, this lesson teaches students how to design and build a chatbot themselves. This task, in itself, could boost a student’s confidence in problem-solving. It also helps them learn to decompose an abstract concept into several steps, or in this case, reduce what feels like magic to its simplest form, recognize patterns, and debug their chatbots.

Why Think When Your Chatbot Can?

Jeannette M. Wing, Ph.D., Columbia University’s executive vice president for research and a professor of computer science, popularized the term “computational thinking.” About 20 years ago, she said: “Computers are dull and boring; humans are clever and imaginative.” In her 2006 publication about the utility and framework of computational thinking, she explains the concept as “a way that humans, not computers, think.” Since then, the framework has become an integral part of computer science education, and the AI influx has dispersed the term across disciplines.

In a recent interview, Wing argued that “computational thinking is more important than ever,” as computer scientists in both industry and academia agree that the ability to code is less important than the core skills that differentiate a human from a computer. Research on computational thinking shows consistent evidence that it is a core skill that prepares students for advanced study across subjects. This is why teaching the skills, not the tech, is a priority in a rapidly changing tech ecosystem. Computational thinking is also an important skill for teachers.

The teacher in the EdSurge Research study demonstrated to her students that, without a human, ELIZA’s clever responses are limited to its catalog of programmed responses. Here’s how the lesson went. Students began by interacting with ELIZA, then they moved into the MIT App Inventor to code their own therapist-style chatbots. As they built and tested them, they were asked to explain what each coding block did and to notice patterns in how the chatbot responded.

They realized that the bot wasn’t “thinking” with a magical brain. It was simply replacing words, restructuring sentences, and spitting them back out as questions. The bots were quick, but without information in their knowledge bases they weren’t “intelligent,” and they couldn’t actually answer anything at all.
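The mechanics the students uncovered, word substitution plus canned fallbacks, can be sketched in a few lines of Python. This is an illustrative toy in the spirit of ELIZA, not Weizenbaum’s original script or the students’ App Inventor project; the patterns and responses here are invented for demonstration.

```python
import re

# Pronoun swaps applied before echoing a fragment back at the user.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# (pattern, response template) pairs; the captured fragment is
# reflected and reinserted into the template.
RULES = [
    (r"i feel (.*)", "Why do you feel {}?"),
    (r"i am (.*)", "How long have you been {}?"),
    (r".*", "Tell me more."),  # catch-all that sends conversations in circles
]

def reflect(fragment: str) -> str:
    """Swap pronouns so the echo points back at the speaker."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    """Return the first matching rule's response, no understanding required."""
    text = statement.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            groups = match.groups()
            if groups:
                return template.format(*(reflect(g) for g in groups))
            return template
    return "Tell me more."
```

Every input either hits a hard-coded pattern (“I feel sad” becomes “Why do you feel sad?”) or falls through to “Tell me more.” — exactly the looping, deflecting behavior that frustrated the class.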

This was a lesson in computational thinking. Students decomposed the systems into parts, identified inputs and outputs, and traced logic step by step. They learned to appropriately question the perceived authority of technology, interrogate outputs, and distinguish between superficial fluency and actual understanding.

Trusting Machines, Despite Flaws

The lesson became a bit more complicated. Even after dismantling the illusion of intelligence, many students expressed strong trust in modern AI tools, especially ChatGPT, because it served its purpose more often than ELIZA.

They understood its flaws. Students said, “ChatGPT can sometimes give you the wrong answer and misinformation,” while simultaneously acknowledging that, “Overall, it’s been a really useful tool for me.”

Other students were pragmatic. “I use AI to make tests and study guides,” a student explained. “I collect all my notes and upload them so ChatGPT can create practice tests for me. It just makes schoolwork easy for me.”

Another was even more direct: “I just want AI to help me get through school.”

Students understood that their homemade chatbots lacked the intelligent allure of ChatGPT. They also understood, at least conceptually, that large language models work by predicting text based on patterns in data. But their trust in modern AI came from social signals, rather than from their understanding of its mechanics.

Their reasoning was understandable: if so many people use these tools, and companies are making so much money from them, they must be trustworthy. “Smart people built it,” one student said.

This tension showed up repeatedly across our broader focus groups with teachers. Educators emphasized limits, bias, and the need for verification. Students, on the other hand, framed AI as a survival tool, a way to reduce workload and manage academic pressure. Understanding how AI works didn’t automatically reduce their usage of or reliance on it.

Why Skills Matter More Than Tools

This lesson did not immediately transform the students’ AI usage. It did, however, demystify the technology and help students see that it’s not magic that makes technology “intelligent.” This lesson taught students that chatbots are large language models that mimic human cognitive functions through prediction, but that the tools lack empathy and other inimitable human characteristics.

Teaching students to use a specific AI tool is a short-term strategy that aligns with the heavily debated banking model of education. Tools change, as nomenclature does, with sociocultural and paradigm shifts. What doesn’t change is the need to reason about systems, question outputs, understand where authority and power originate, and solve problems using cognition, empathy, and interpersonal relationships. Research on AI literacy increasingly points in this direction: scholars argue that meaningful AI education focuses less on tool proficiency and more on helping learners reason about data, models, and sociotechnical systems. This classroom brought those ideas to life.

Why Educators’ Discretion Matters

This lesson gave students the language and experience to think more clearly about generative AI. In a time when schools feel pressure to either rush AI adoption or shut it down entirely, educators’ discretion and expertise matter. As more chatbots are released into the wild of the world wide web, guardrails are important, because chatbots are not always safe without supervision and guided instruction. Understanding how chatbots work helps students develop, over time, the ethical and moral decision-making skills for responsible AI usage. Teaching the thinking, rather than the tool, won’t immediately resolve every tension students and teachers feel about AI. But it gives them something more durable than tool proficiency, like the ability to ask better questions, and that skill will matter long after today’s tools are obsolete.


Today’s NYT Mini Crossword Answers for March 22


Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Need some help with today’s Mini Crossword? It’s not too tough, but 7-Across made me stop and start thinking of five-letter beverage brands. Read on for all the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.

The completed NYT Mini Crossword puzzle for March 22, 2026.

NYT/Screenshot by CNET

Mini across clues and answers

1A clue: Jost of “Saturday Night Live”
Answer: COLIN

6A clue: German wine valley whose name rhymes with “wine”
Answer: RHINE

7A clue: Big name in root beer
Answer: AANDW

8A clue: Common slot machine symbol
Answer: FRUIT

9A clue: James Talarico’s state
Answer: TEXAS

Mini down clues and answers

1D clue: Cunning skill
Answer: CRAFT

2D clue: Chicago airport
Answer: OHARE

3D clue: Operating system on which Android is partly based
Answer: LINUX

4D clue: World’s most populous country
Answer: INDIA

5D clue: Small salamanders
Answer: NEWTS


Samsung’s upcoming Galaxy foldables could get a charging speed boost


Samsung’s next generation of foldable phones could bring some changes to charging, though not all of them might be what fans are hoping for. According to recent certification listings spotted via SammyGuru, upcoming devices like the Galaxy Z Fold 8 and a new “Wide Fold” variant have appeared on China’s 3C database, hinting at potential updates to charging capabilities.

These listings typically reveal wired charging specs ahead of launch, making them an early indicator of what to expect. But here’s the catch: the “upgrade” might not be as big as it sounds.

What do the leaks actually reveal?

Two upcoming devices, SM-F9710 and SM-F9760, are believed to be the Chinese variants of the Galaxy Z Fold 8 and a new “Galaxy Z Wide Fold.” These listings show support for 15V at 3A charging, which translates to 45W wired charging. If accurate, that would mark a noticeable jump over previous Fold models, which have typically been limited to 25W wired charging.

However, a separate listing for what’s believed to be the Galaxy Z Flip 8 shows 9V at 2.77A (~25W) charging, essentially unchanged from its predecessor. So while the Fold lineup may finally see a boost, the Flip series appears to be sticking with the same charging speeds for now.
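The wattage figures in these listings follow directly from power = voltage × current, which is easy to check. A quick sketch of that arithmetic (the function name is mine, not something from the 3C database):

```python
def wired_watts(volts: float, amps: float) -> float:
    """Wired charging power implied by a certification listing: P = V * A."""
    return volts * amps

# Figures reported in the 3C listings discussed above.
fold = wired_watts(15, 3)     # Galaxy Z Fold 8 listing: 15V at 3A
flip = wired_watts(9, 2.77)   # Galaxy Z Flip 8 listing: 9V at 2.77A

print(f"Fold: {fold:.0f} W, Flip: {flip:.0f} W")  # prints "Fold: 45 W, Flip: 25 W"
```

This confirms the 45W figure for the Fold exactly, and 9V × 2.77A comes out to 24.93W, which rounds to the same ~25W the Flip line has used for years.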

How big of an upgrade is this?

For the Fold lineup, this is actually a meaningful upgrade. Samsung has stuck with 25W charging for years, so moving to 45W would finally bring it closer to its Galaxy S Ultra devices and noticeably cut down charging times. That said, these numbers only apply to wired charging, as 3C listings don’t reveal wireless speeds.

For buyers, this is a welcome but uneven improvement. The Fold 8 and Wide Fold could see a solid boost, while the Flip 8 may remain unchanged, creating a clear divide in the lineup. It’s a step in the right direction, but not quite the full upgrade many were hoping for, especially when players like OnePlus and other Chinese brands already go well beyond 100W.


Tesla’s Terafab Brings Manufacturing Power to Match the Scale of Space


Elon Musk made a game-changing announcement hours ago when he revealed plans for Tesla’s Terafab during a live event, taking its work on vehicles and robots literally out of this world. The initiative brings together SpaceX and xAI to create the world’s largest chip factory. The sheer scale of the operation is mind-boggling: Terafab will be capable of producing 1 trillion watts of finished chips every year, all under one gigantic roof that will house logic circuits, memory storage, and final packaging.



All of this is important because we desperately need a reliable mechanism to generate solar energy that can be beamed back from space, and Terafab is specifically built to accomplish just that. The plan calls for launching 100 million tons of capture equipment into orbit every year. Once in orbit, solar-powered satellites will handle the AI heavy lifting, with millions of Tesla Optimus robots on hand to erect and maintain those structures well above the good old Earth.

Each of those Optimus robots is a significant undertaking, as they require between 100 and 200 billion watts of chips just to function. When you factor in the satellites, you can see the tremendous demand we’re talking about: trillions of watts of chips that no existing chip manufacturer can possibly offer, at least not yet. Projections suggest the shortage will persist until 2030.


That is where Terafab comes in: it is specifically designed to bridge that gap, with the kind of huge capacity that can overcome the hurdles holding back both ground-based robot fleets and processing power in orbit. To build it, the construction team will use established launch techniques to transport the enormous cargo into place, and robots already in development will take on assembly tasks that are simply too dangerous for humans to do on a regular basis. The result will be a consistent supply of chips to meet rising requirements on Earth and beyond.

The driving factor behind all of this is a strong desire to explore the universe, not just envision what’s out there, but to experience it firsthand. As one of the speakers put it, “understanding comes only from direct experience out there in the universe,” and Terafab is the first step in translating that idea into something concrete, something that anyone can track, from the start of creation to the end of delivery.


Reworked Apple Watch avoids ban, but Masimo battle escalates



The decision, made public on Thursday, concludes that Apple’s latest implementation of pulse-oximetry functionality falls outside the scope of Masimo’s asserted rights. The full ITC commission will now review the judge’s ruling and decide whether to adopt it – a step that will determine whether the redesigned watches remain protected…


Daily Deal: The 2026 C# Course Bundle


from the good-deals-on-cool-stuff dept

The 2026 C# Course Bundle offers 8 courses that cover everything C#. You’ll master the fundamentals, explore object-oriented programming, and start building your own apps in no time. It’s on sale for $40.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Filed Under: daily deal


‘We should regard it as a privilege to be stepping stones to higher things’: How Arthur C Clarke predicted the rise of AGI and the looming demise of humanity back in 1964


While debate over the timeline – or even the potential – for artificial general intelligence (AGI) rages on in 2026, one futurist may have predicted the breakthrough more than 60 years ago.

Noted British science fiction writer and futurist Arthur C. Clarke predicted the arrival of AGI during an interview at the 1964 World’s Fair in New York City.


This monitor claims paper-like viewing and huge energy savings by using ambient light instead of relying entirely on traditional backlighting



  • Hannspree Hybri monitor uses ambient light to significantly reduce energy consumption
  • Reflective display design aims to mimic paper-like readability and comfort
  • Automatic switching enables backlight use in low ambient light conditions

The Hannspree Hybri monitor attempts to merge paper-like readability with modern display performance, claiming an 80% reduction in energy use through innovative use of ambient light.

At illumination levels above 1,000 lux, common in offices, classrooms, and outdoor-adjacent spaces, the monitor reflects surrounding light instead of relying solely on a backlight.


Reddit wants to check if you’re using the iPhone’s Face ID camera


Reddit may soon ask users to prove they’re human, and it might involve your face. During a TBPN podcast, Reddit’s CEO, Steve Huffman, confirmed that the platform is exploring new identity verification methods, including using Face ID or Touch ID-style authentication, to tackle its growing bot problem.

RDDT requiring Face ID was not something I had on my bingo card but something has got to be done about all the fake / botted content — I just don’t know how to sell face-scanning to redditors or even lurkers. https://t.co/7e7K3Di4ip

— Alexis Ohanian 🗽 (@alexisohanian) March 21, 2026

The idea is simple: as AI-generated accounts become more convincing, Reddit wants stronger ways to confirm that users are real people, not bots pretending to be human.

Why is Reddit considering Face ID-style verification?

Unfortunately, bots are getting too good. Huffman has previously emphasized keeping the platform “human,” and this move fits right into that strategy. AI-generated content and automated accounts are becoming harder to detect, making moderation more challenging and threatening the authenticity of discussions.

As such, verification methods like Face ID or biometric checks could act as a quick way to confirm a real person is behind an account, without requiring traditional ID uploads. But of course, it’s not that simple.

So… are we really scanning faces now?

Reddit isn’t going full sci-fi just yet. The company is still “weighing” its options, which could mean optional verification for certain features, regions, or accounts rather than forcing everyone to scan their face. We’ve already seen a preview of this in places like the UK, where Reddit uses selfies or ID checks for age verification.

The next step could make things feel a lot more seamless and a bit more invasive. Instead of uploading IDs, Reddit may lean on device-level tools like Face ID to confirm you’re human, turning verification into something that happens in the background rather than a full process. Of course, that’s where things get messy.

Biometric checks raise big questions around privacy, data security, and consent, and users aren’t exactly thrilled about handing over their face to prove they’re not a bot. Reddit may be solving one problem, but it opens up another: how much verification is too much, especially on a platform where anonymity is kind of the whole point?


Google isn't backing away from Pentagon AI work, it's doubling down



According to Business Insider, the issue came up during a January Google DeepMind town hall, where VP of Global Affairs Tom Lue said the company was “leaning more” into national security work.

Scientists find all five genetic building blocks for life in asteroid Ryugu



Researchers are still studying samples of Ryugu collected by the Japanese Aerospace Exploration Agency from its Hayabusa2 mission. After the first papers focused on the composition of the recovered material, a Japanese team has now found a “complete” set of genetic bases belonging to both DNA and RNA.