After a career counselor visited one of her classes earlier this year, Lily Hatch found herself asking a chatbot for guidance about college.
A junior at Wake Forest High School in North Carolina, Hatch had taken an in-class career quiz that recommended she pursue dermatology. She had finished quickly and so approached the counselor to find out how to explore that profession further. The counselor gave a couple of suggestions before adding that Hatch could also play with a chatbot to explore her college options.
So, that’s what Hatch did.
But instead of returning information on which schools rank highly for dermatology, the chatbot — a general-purpose consumer product, rather than an edtech tool — veered off into offering information about climate, telling Hatch to consider the University of North Carolina Wilmington because it’s near a beach.
It felt a little like a runaway train, with the bot dragging her down a pre-laid track. “I was looking for advice on what colleges would be ideal for me. And it switches into going more into what things in my life I would be looking for in the future, which was not what I was looking for,” Hatch says.
Today’s high school students — who spent years of their academic careers surfing disruptions and the challenges of returning to the classroom after the pandemic school closures — are preparing to enter a labor force and broader economic system that can seem confusing and unstable, as technologies like artificial intelligence are reshaping the career ladders that their parents climbed. Some national surveys show that Gen Z students feel more prepared for their futures now than they did in past years, but for those about to graduate, that’s not always the case. Many students describe a general pessimism about the future.
“There’s a lot of fear there,” says Matthew Tyson, CEO of Tapestry Public Charter School in DeKalb County, Georgia. Tyson notes that many of his students aren’t planning for college, or feel discouraged by the fast-changing nature of life around them.
Navigating these major shifts in how careers begin requires both educators and young people to think flexibly, according to experts. Students need honest guidance, Tyson says, adding that adults should be transparent about the reality that they don’t have all the answers.
But new AI tools don’t have all the answers either, not even those purpose-built to offer career guidance. At least, some human counselors don’t think so.
“The AI stuff is kind of crazy to think about,” says Ian Trombulak, a school counselor in Vermont. “That’s not going to help us reverse the trend here of career readiness scores being low.”
Still, some say they are open to the possibility that offloading aspects of their work to AI may, ironically, free them up to offer better support to students contending with the disruptions AI is creating in the labor market.
Yet counselors often have to make tough choices between giving academic and career advice or addressing students’ emotional crises, and many students seem to lack support systems, says Tyson, from the Georgia public charter school. Student traumas can spill over onto the adults meant to give those students advice.
“A lot of times, there’s only so much water that can be taken out of a glass before the glass is empty,” Tyson says of counselors’ emotional states. Eager to assist students, counselors can burn out.
They also have to deal with staff shortages. Tapestry, Tyson’s public charter, doesn’t suffer from the counselor shortages some nearby schools face: it has three counselors for 300 students, according to Tyson.
But across Georgia, there are 378 students for every school counselor, according to the latest data from the American School Counselor Association, which recommends one counselor for every 250 students. And that’s hardly the worst in the nation, with the ratios sitting at 573 students per counselor in Michigan and 645 per counselor in Arizona.
With human resources strained, schools are now considering how to use AI to create more opportunities to meaningfully advise students on how to approach the future.
Innovative uses of artificial intelligence can amplify the work of human college and career counselors, argues June Han, the CEO of EduPolaris AI, a company that offers Eddie, an AI counseling platform with counselor, student and parent portals licensed by schools. The company has raised $1 million in early investment, and its platform — which relies, at least in part, on third-party large language models — is being piloted in a handful of Title I high schools, Han told EdSurge.
School-support organizations, including the Homeschool Association of California, list the tool as a recommended AI resource, as does the White House.
Tapestry is one of the schools piloting Eddie. The platform has helped, according to Tyson, particularly because the dashboard lets Tyson see useful information such as how many students have completed their reference letters for college applications. From the dashboard, he can send a nudge to students, reminding them to finish. That feature cuts down on the number of meetings he has to take. The data collected by the platform also provides clues about what to focus on when he works with students, and where they need the most help, Tyson says.
The Davidson Institute, a nonprofit that provides educational opportunities to “profoundly gifted” students, uses the “Ask Eddie” chatbot function to counsel families in the Young Scholars Program for students ages 5 through 18. Many of those students are on “nontraditional paths,” looking at early college, or coming from accelerated grades or homeschool backgrounds, says Megan Cannella, director of outreach.
More than 200 families in the program have used the tool since February 2025, according to Cannella. She says the big selling point is that it’s available 24/7 and in a number of languages. The nonprofit doesn’t offer traditional school counseling, so the AI tool boosts the limited support that staff provides. It’s proven particularly helpful for families just starting their college journey, and for homeschoolers, she adds.
Meanwhile, what students want from a career is also changing, in a way that makes it difficult for career counselors to keep up.
Shifting Interests
In northwest Missouri, students have become more interested in exploring non-college pathways after graduation, such as military service or vocational training, says Geoff Heckman, a school counselor at Platte County High School.
Apprenticeships, internships and alternative credentials feature more prominently in students’ plans these days because these options prepare them to step right into jobs when they leave high school, Heckman says. Indeed, around the country, students are skeptical about college, meaning that high school counselors can’t assume that pathway.
The students Heckman counsels at the public school outside of Kansas City are also starting to find postsecondary guidance resources on their own more often, using AI and social media, he adds.
There have been cultural shifts, sometimes away from the kinds of jobs the school’s infrastructure is set up to support. Not long ago, the career and technical school next door to Heckman’s school had a waiting list for its law enforcement opportunities. Now, there’s much less interest, Heckman reports.
Instead, some of the careers students now desire are hard for Heckman to understand. In the years since he became a counselor, students have found jobs as social media influencers and professional gamers. Indeed, the number of students who say their dream is to be a social media star has swelled.
“I want to support a student no matter how wild their dream may sound to me,” Heckman says.
It comes down to helping them construct a plan of attack: teaching them to research the industry of interest, to discern how strong their passion for the dream is, and to reach out for mentorship, he adds. For example, last year a student came to Heckman and said she wanted to be a pilot. There was no program for that at the high school. But the district was able to create a new internship opportunity for the student through the local Air Guard, which has a flight school.
Similar situations occur in schools across the country, and many places are keen to build stronger career pathways.
For instance, Vermont switched over to proficiency-based grading requirements — beginning with the class of 2020 — and it has started to incorporate “self-direction skills” in the assessment of students. It’s a signal for schools to focus on skills that will be useful in a future where counselors can’t predict precisely what jobs students will be working, according to one school counselor in the state.
A lifelong Vermonter, Ian Trombulak came to career counseling after working in a group home after college. It sparked something, he says. After he left the emotionally tense work of a group home, he found himself pulled into schools where he could be the type of person who had helped him through high school.
Trombulak has worked in public education for nine years, and in that time, he’s seen “this continued drumbeat” where public educators are asked to do more with fewer resources, even as core components of education like curriculum have become swept up in political battles. Budgets are too tight to hire enough counselors, and counselors have too many students to feasibly advise, he admits.
“You know, we’re not superheroes,” he says. “At a certain point, you are constrained by the kind of resources that you have at your disposal, and public education is not working with a whole lot right now. Even in the best of times, it can be a struggle.”
Helping students steer through their uncertainty requires a deft approach. At the same time he’s helping ninth graders find their footing in the murky transition from middle school to high school, he’s also advising students on what could happen after graduation. He meets with five to 10 students per day on average, some in pre-planned sessions and some as drop-ins. A lot of his job happens outside of scheduled sessions, he says. When he stops in on a teacher’s classroom, students will pull him aside to check in — about a dozen of those encounters a day.
Schools may be turning to AI out of desperation, Trombulak says. But he doubts it will advise students as well as human counselors.
EduPolaris leaders feel that the safeguards on Eddie, the AI counseling platform, position it to boost the human work of counselors. Han, the company’s CEO, argues that Eddie is so human-centric and school-specific that the tool amplifies the human counselor’s efforts, allowing for schools to provide personalized guidance even with limited resources.
Han argues that initial skepticism from counselors stems from a lack of AI literacy. Counselors and educators are afraid of losing control, she says.
Yet even if AI proves adept at providing accurate, useful career information and advice, that may miss the subtler value that can emerge when students sit down to chat with a trusted adult. That type of interaction is essential to building the “social capital” and interpersonal networks that actually help young people secure jobs, some researchers argue.
And much of Trombulak’s work is relational rather than transactional. Mostly gone are the days of relying on personality tests and career quizzes. Instead, Trombulak says, counselors hold open-ended conversations probing what students feel passionate about. It’s more self-exploratory and requires a more human touch. “I’m almost there as a mirror,” Trombulak says, or as a backboard to bounce ideas off.
Ultimately, a powerful lesson Trombulak believes he can teach students is how to find answers on their own. As students try on ideas, counselors teach them about what kind of path they would have to take to end up in a job. It means a lot of Googling with students. He goes through the process of how he, as a well-educated adult, would find answers.
Part of that process now is, yes, verifying information gathered from AI.
Unreliable Narrator
For students, what matters most is the quality of the advice they receive, whether it comes from a human or a bot.
After two or three weeks of back and forth with the chatbot, Hatch, the junior from North Carolina, didn’t return to the human career counselor.
But that doesn’t mean she found the AI useful.
The scraps of information she got could have been easily discovered by a quick Google search, she says. The experience contributed to her overall skepticism of AI, which she acts on as a student leader for her school’s chapter of Young People’s Alliance, which advocates for stronger AI regulations and more job training opportunities for young adults.
She doesn’t know yet where she wants to attend college, or even what she’ll study. Right now, instead of dermatology, Hatch is considering education as a career path.
So, what does she think about using AI for career counseling?
She wouldn’t recommend it. In fact, she’s not so keen on what she sees as an overreliance on technology in general. Students she knows use it to churn out passable school work, and in response, teachers even seem ready to give out good grades for subpar work when they feel it’s not AI-generated.
Students should really slow down, and rely on AI less, she says: “I feel like it overall is not as useful as people make it out to be.”
Smart screens and speakers have found a permanent place in many of our households, since they help with playing music, controlling smart plugs, setting reminders, and much more. The use cases are plenty, especially when paired with other smart home gadgets that solve everyday problems. Speaking of pairing your smart speaker with external devices, the Amazon Echo Dot — one of Amazon’s most affordable and popular smart speakers — sports Bluetooth connections, which means it can be paired with some cool Bluetooth gadgets for added functionality. You can, for example, pair multiple Echo speakers for a stereo setup or even connect external speakers for better sound output during a party. Apart from audio, though, there are several other ways that you can take advantage of the Echo Dot’s Bluetooth module.
A few smart home gadgets, like smart light bulbs, often need a hub to function. However, if the bulb has Bluetooth support, it can be connected to and controlled by an Echo Dot without an external hub, which makes it a handy option. Similarly, there are other such gadgets that can take advantage of the Bluetooth Low Energy (BLE) protocol of the Echo Dot to establish a connection. Here are some of the best and most useful gadgets that we’ve found that can enhance your life and home. All you have to do is put your Echo Dot in pairing mode and connect the required device with the help of the Alexa app on your smartphone.
Bluetooth speakers
While there are several handy uses for an Amazon Echo Dot speaker, arguably the most popular one is playing music. This is primarily because it’s so quick and simple to ask Alexa to play your favorite album or track without having to manually look for it on your phone. Convenience aside, though, Echo devices are capable speakers by themselves, which means the sound output is loud and clear. However, the small form factor means that the bass can be lacking, and the sound may not be able to fill a large room. If you’re having a party with your friends, you might miss out on that extra oomph. This is where the Echo Dot’s ability to connect to an external speaker comes into play.
If you have a Bluetooth speaker lying around at home, all you have to do is put it in pairing mode, head to the Alexa app, and connect the speaker to your Echo Dot. This works with pretty much any Bluetooth speaker, from budget options to large home theater setups. As long as the speaker is connected to the Echo Dot, all its responses — not just the songs — will play via the speaker itself. That said, the Echo device will still use its onboard microphones to detect and register your voice queries. This is one of the simplest yet most popular uses that we’re sure a lot of you will appreciate. In case you don’t already have a speaker, the Anker Soundcore 2, which retails for around $30, is a user favorite with a 4.5-star rating from close to 150,000 reviews.
Smart bulbs
The issue with a lot of good smart lighting solutions is that the installation process can be a headache — especially if they need a hub. Bluetooth smart bulbs are an easy fix, offering a plug-and-play solution. Modern Bluetooth bulbs from brands like Philips Hue or GE connect directly to your Echo Dot right out of the box, instead of requiring a central hub. This integration capability makes it an easy entry point into smart home automation. The biggest advantage of a system like this is that you can use bulbs and other smart home gadgets from multiple brands without worrying about compatibility.
Having a brand-agnostic solution helps avoid multiple issues. Once you invest in a Philips hub, for example, you may not be able to use bulbs from other brands with the same hub. This means you’re locked into the Philips ecosystem, unless you splurge on another hub from a different brand. Wi-Fi bulbs can already tackle this problem, but they can sometimes bog down your home network. Bluetooth bulbs, on the other hand, communicate locally with your Echo Dot. The feature set remains the same; you can set up daily routines so your lights slowly turn warmer in the evening, or shut down the entire house with a single phrase as you walk out the door. Additionally, you can connect as many bulbs as you like via Bluetooth and operate them all individually. The Philips Hue 60W smart LED bulb, with its 4.7-star rating across more than 16,000 reviews, is a good starting point for under $50.
Smart switches
If you’re looking for creative use cases for your old Amazon Echo, smart switches are a good investment. The Switchbot smart switch button is an excellent retrofit for old appliances and gadgets that lack internet connectivity; stick it beneath a manual switch and suddenly you can control it with your smartphone or Amazon Alexa device. Lots of devices and appliances launched in recent years have built-in smart functionality to turn them on and off remotely. However, an old coffee maker or air purifier may not, and that’s exactly where a device like the Switchbot smart switch comes in handy. Once you connect it via Bluetooth to your Echo Dot, you can turn an appliance on or off with just your voice.
This works well with push-button switches, but you can’t use a single Switchbot to flip a larger, more traditional rocker switch, like the kind that controls the lights in your house, both on and off. If you want both functionalities, you will have to purchase two Switchbots and install them on either side of the switch. While the product description mentions that you need a hub to use the device with Alexa, that only applies to older Echo devices that cannot behave like a Bluetooth hub. With over 28,000 reviews and a 4.1-star rating, users definitely seem to love the Switchbot smart button for making older gadgets easier to use. There’s something to be said about having a fresh cup of coffee waiting for you right after stepping out of the shower in the morning, isn’t there?
Bluetooth turntables
For those who have a large collection of vinyl records from back in the day, a Bluetooth turntable is pretty much a must-have. If you have one lying around, you’ll be glad to know that you can easily connect it to your Echo Dot. Since a good number of Bluetooth turntables have built-in wireless transmitters, you can wirelessly use your Echo Dot as a speaker instead of relying on your turntable’s internal one. Thanks to this setup, you can place your turntable at a distance from the Echo Dot without running audio wires all through the room.
This is a pretty neat trick; while the Echo Dot is usually the brain sending audio out to other speakers, in this scenario, it acts as the wireless receiver instead. The Audio-Technica wireless turntable is an excellent option in case you don’t have one already and are looking to buy one. It is pricey at around $230, but it’s got a solid 4.6-star rating across more than 8,700 reviews. Apart from a turntable, pretty much any other audio device that has a built-in Bluetooth transmitter can be used with an Echo Dot as well, so don’t feel like you’re limited to just spinning records remotely.
How we picked these gadgets
The primary criterion for a gadget to make this list is that it connects to an Echo Dot speaker purely via Bluetooth, not Wi-Fi. Note that not every gadget of a given type works over Bluetooth; not all smart bulbs, for example, support Bluetooth Low Energy connectivity. That’s why we’ve included suggested products that support the technology at play here; the ones we do recommend all have a rating of at least 4.1 stars across thousands of reviews. Additionally, all Echo devices — including the Echo Dot — need to be connected to a Wi-Fi network for their initial setup before they can be paired with Bluetooth devices. Therefore, all the gadgets have been recommended with the assumption that you have access to a Wi-Fi network and that your Echo device is set up.
Last week, the European Parliament voted to let a temporary exemption lapse that had allowed tech companies to scan their services for child sexual abuse material (CSAM) without running afoul of strict EU privacy regulations. Meanwhile, here in the US, West Virginia’s Attorney General continues to press forward with a lawsuit designed to force Apple to scan iCloud for CSAM, apparently oblivious to the fact that succeeding would hand defense attorneys the best gift they’ve ever received.
Two different jurisdictions. Two diametrically opposed approaches, both claiming to protect children, and both making it harder to actually do so.
I’ll be generous and assume people pushing both of these views genuinely think they’re doing what’s best for children. This is a genuinely complex topic with real, painful tradeoffs, and reasonable people can weigh them differently. What’s frustrating is watching policymakers on both sides of the Atlantic charge forward with approaches that seem driven more by vibes than by any serious engagement with how the current system actually works — or why it was built the way it was.
The European Parliament just voted against extending a temporary regulation that had exempted tech platforms from GDPR-style privacy rules when they voluntarily scanned for CSAM. This exemption had been in place (and repeatedly extended) for years while Parliament tried to negotiate a permanent framework. Those negotiations have been going on since November 2023 without resolution, and on Thursday MEPs decided they were done extending the stopgap.
To be clear, Parliament didn’t pass a law banning CSAM scanning. Companies can still technically scan if they want to. But without the exemption, they’re now exposed to massive privacy liability under EU law for doing so. Scanning private messages and stored content to look for CSAM is, after all, mass surveillance — and European privacy law treats mass surveillance seriously (which, in most cases, it should!). So the practical effect is a chilling one: companies that were voluntarily scanning now face significant legal risk if they continue.
The digital rights organization EDRi framed the issue in stark terms:
“This is actually just enabling big tech companies to scan all of our private messages, our most intimate details, all our private chats so it constitutes a really, really serious interference with our right to privacy. It’s not targeted against people that are suspected of child abuse — It’s just targeting everyone, potentially all of the time.”
And that argument is compelling. Hash-matching systems that compare uploaded images against databases of known CSAM are more targeted than, say, keyword scanning of every message, but they still fundamentally involve examining every unencrypted piece of content that passes through the system. When EDRi says the scanning targets “everyone, potentially all of the time,” that’s an accurate description of how the technology works.
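To make the mechanics concrete, here is a deliberately simplified hash-matching sketch. The hash set and the use of SHA-256 are illustrative assumptions only; production systems such as PhotoDNA use perceptual hashes that survive resizing and re-encoding, and their databases are not public.

```python
import hashlib

# Illustrative stand-in for a database of known-bad file hashes.
# (This value is simply the SHA-256 of the bytes b"test", used as a demo.)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def scan_upload(data: bytes) -> bool:
    """Hash the uploaded bytes and check them against the known set."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

# The structural point: *every* upload is hashed and checked, not just
# uploads from suspected offenders -- which is why critics describe even
# this comparatively "targeted" approach as mass surveillance.
```

A cryptographic hash like SHA-256 only flags byte-identical files, which is one reason real systems use perceptual hashing instead; the surveillance structure, though, is the same: one comparison per item, for everyone.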
But… the technology also works to find and catch CSAM. Europol’s executive director, Catherine De Bolle, pointed to concrete numbers:
Last year alone, Europol processed around 1.1 million of so-called CyberTips, originating from the National Center for Missing & Exploited Children (NCMEC), of relevance to 24 European countries. CyberTips contain multiple entities (files, videos, photos etc.) supporting criminal investigation efforts into child sexual abuse online.
If the current legal basis for voluntary detection by online platforms were to be removed, this is expected to result in a serious reduction of CyberTip referrals. This would undermine the capability to detect relevant investigative leads on CSAM, which in turn will severely impair the EU’s security interests of identifying victims and safeguarding children.
The companies that have been doing this scanning — Google, Microsoft, Meta, Snapchat, TikTok — released a joint statement saying they are “deeply concerned” and warning that the lapse will leave “children across Europe and around the world with fewer protections than they had before.”
So the EU’s privacy advocates aren’t wrong about the surveillance problem. Europol isn’t wrong about the child safety consequences. Both things are true — which is what makes this genuinely tricky rather than a case of one side being obviously right.
Now flip to the United States, where the problem is precisely inverted.
In the US, the existing system has been carefully constructed around a single, critical principle: companies voluntarily choose to scan for CSAM, and when they find it, they’re legally required to report it to NCMEC. The word “voluntarily” is doing enormous load-bearing work in that sentence — and most of the people currently shouting about CSAM don’t seem to know it. As Stanford’s Riana Pfefferkorn explained in detail on Techdirt when a private class action lawsuit against Apple tried to compel CSAM scanning:
While the Fourth Amendment applies only to the government and not to private actors, the government can’t use a private actor to carry out a search it couldn’t constitutionally do itself. If the government compels or pressures a private actor to search, or the private actor searches primarily to serve the government’s interests rather than its own, then the private actor counts as a government agent for purposes of the search, which must then abide by the Fourth Amendment, otherwise the remedy is exclusion.
If the government – legislative, executive, or judiciary – forces a cloud storage provider to scan users’ files for CSAM, that makes the provider a government agent, meaning the scans require a warrant, which a cloud services company has no power to get, making those scans unconstitutional searches. Any CSAM they find (plus any other downstream evidence stemming from the initial unlawful scan) will probably get excluded, but it’s hard to convict people for CSAM without using the CSAM as evidence, making acquittals likelier. Which defeats the purpose of compelling the scans in the first place.
In the US, if the government forces Apple to scan, that makes Apple a government agent. Government agents need warrants. Apple can’t get warrants. So the scans are unconstitutional. So the evidence gets thrown out. So the predators walk free. All because someone thought “just make them scan!” was a simple solution to a complex problem.
Congress apparently understood this when it wrote the federal reporting statute — that’s why the law explicitly disclaims any requirement that providers proactively search for CSAM. The voluntariness of the scanning is what preserves its legal viability. Everyone involved in the actual work of combating CSAM — prosecutors, investigators, NCMEC, trust and safety teams — understands this and takes great care to preserve it.
Everyone, apparently, except the Attorney General of West Virginia. As we discussed recently, West Virginia just filed a lawsuit demanding that a court order Apple to “implement effective CSAM detection measures” on iCloud. The remedy West Virginia seeks — a court order compelling scanning — would spring the constitutional trap that everyone who actually works on this issue has been carefully avoiding for years.
As Pfefferkorn put it:
Any competent plaintiff’s counsel should have figured this out before filing a lawsuit asking a federal court to make Apple start scanning iCloud for CSAM, thereby making Apple a government agent, thereby turning the compelled iCloud scans into unconstitutional searches, thereby making it likelier for any iCloud user who gets caught to walk free, thereby shooting themselves in the foot, doing a disservice to their client, making the situation worse than the status quo, and causing a major setback in the fight for child safety online.
The reason nobody’s filed a lawsuit like this against Apple to date, despite years of complaints from left, right, and center about Apple’s ostensibly lackadaisical approach to CSAM detection in iCloud, isn’t because nobody’s thought of it before. It’s because they thought of it and they did their fucking legal research first. And then they backed away slowly from the computer, grateful to have narrowly avoided turning themselves into useful idiots for pedophiles.
The West Virginia complaint also treats Apple’s abandoned NeuralHash client-side scanning project as evidence that Apple could scan but simply chose not to. What it skips over is why the security community reacted so strongly to NeuralHash in the first place. Apple’s own director of user privacy and child safety laid out the problem:
Scanning every user’s privately stored iCloud content would in our estimation pose serious unintended consequences for our users… Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types (such as images, videos, text, or audio) and content categories. How can users be assured that a tool for one type of surveillance has not been reconfigured to surveil for other content such as political activity or religious persecution? Tools of mass surveillance have widespread negative implications for freedom of speech and, by extension, democracy as a whole.
Once you create infrastructure capable of scanning every user’s private content for one category of material, you’ve created infrastructure capable of scanning for anything. The pipe doesn’t care what flows through it. Governments around the world — some of them not exactly champions of human rights — have a well-documented habit of demanding expanded use of existing surveillance capabilities. This connects directly to the perennial fights over end-to-end encryption backdoors, where the same argument applies: you cannot build a door that only the good guys can walk through.
And then there’s the scale problem. Even the best hash-matching systems can produce false positives, and at the scale of major platforms, even tiny error rates translate into enormous numbers of wrongly flagged users.
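The arithmetic behind that claim is simple expected-value math. As a rough sketch, with made-up numbers chosen purely for illustration (no real platform publishes its false-positive rate), even a very accurate matcher flags thousands of innocent items at platform scale:

```python
# Back-of-the-envelope illustration, not real platform data: a hash-matcher
# with a one-in-a-million false-positive rate, applied to billions of scanned
# items, still wrongly flags a large number of innocent users.

def expected_false_positives(false_positive_rate: float, items_scanned: int) -> float:
    """Expected number of innocent items wrongly flagged."""
    return false_positive_rate * items_scanned

# Hypothetical inputs for illustration only:
rate = 1e-6             # one false match per million comparisons
volume = 5_000_000_000  # five billion photos scanned in a year

print(f"Expected wrongly flagged items: {expected_false_positives(rate, volume):,.0f}")
```

At those assumed numbers, that's five thousand wrongly flagged items a year from a single platform, each one a potential law-enforcement referral against an innocent person.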
This is one of those frustrating stories where you can… kinda see all sides, and there’s no easy or obvious answer:
Scanning works, at least somewhat. 1.1 million CyberTips from Europol in a single year. Some number of children identified and rescued because platforms voluntarily detected CSAM and reported it. The system produces real results.
Scanning is mass surveillance. Every image, every message gets examined (algorithmically), not just those belonging to suspected offenders. The privacy intrusion is real, not hypothetical, and it falls on everyone.
Compelled scanning breaks prosecutions. In the US, the Fourth Amendment means that government-ordered scanning creates a get-out-of-jail-free card for the very predators everyone claims to be targeting. The voluntariness of the system is what makes it legally functional.
Scanning infrastructure is repurposable. A system built to detect CSAM can be retooled to detect political speech, religious content, or anything else. This concern is not paranoid; it’s an engineering reality.
False positives at scale are inevitable. Even highly accurate systems will flag innocent content when processing billions of items, and the consequences for wrongly accused individuals are severe.
People can and will weigh these tradeoffs differently, and that’s legitimate. The tension described in all this is real and doesn’t resolve neatly.
But what both the EU Parliament’s vote and West Virginia’s lawsuit share is an unwillingness to sit with that tension. The EU stripped legal cover from the voluntary system that was actually producing results, without having a workable replacement ready. West Virginia is trying to compel what must remain voluntary, apparently without bothering to read the constitutional case law that makes compelled scanning self-defeating. From opposite directions, both approaches attack the same fragile voluntary architecture that currently threads the needle between these competing interests.
The status quo in the United States — voluntary scanning, mandatory reporting, no government compulsion to search — is far from perfect. But the system functions: it produces leads, preserves prosecutorial viability, and does so precisely because it was designed by people who understood the tradeoffs and built accordingly.
It would be nice if more policymakers engaged with why the system works the way it does before trying to blow it up from either direction. In tech policy, the loudest voices in the room are rarely the ones who’ve done the reading.
Karin Keller-Sutter, Switzerland’s finance minister and the country’s former president, has filed criminal charges for defamation and insult after Elon Musk’s AI chatbot Grok was prompted by an anonymous user to generate a torrent of sexist and vulgar remarks about her on X. The complaint, filed on 20 March with the Bern public prosecutor’s office, is directed against “persons unknown” because the X user who prompted Grok could not be identified beyond a screen name. It is, by all available evidence, the first time a serving head of a national finance ministry has pursued criminal action against an AI-generated statement.
The incident occurred on 10 March, when a user on X instructed Grok to “roast” a figure they described as “Federal Councillor KKS, my favourite chick,” urging the chatbot to attack her in crude street language. Grok complied. The resulting post, a barrage of misogynistic abuse attributed to the chatbot, was published on Keller-Sutter’s feed. A spokesperson for the minister told Politico that the post was not “a contribution protected by freedom of expression or part of the political debate, but rather a pure denigration of a woman.” The spokesperson added: “One must fundamentally defend oneself against such misogynistic statements.”
Keller-Sutter is no minor political figure. She heads the Federal Finance Department and is one of seven members of the Swiss Federal Council, the country’s highest executive authority. In 2025, she served as president of the Swiss Confederation, a role that rotates annually among the council members. Before entering federal politics, she studied political science in London and Montreal, served as a cantonal justice minister, and presided over the Council of States. Her decision to file criminal charges rather than simply delete the post signals an intent to test whether Swiss defamation law, which criminalises both defamation under Article 173 and slander under Article 174 of the penal code, can reach the operators of AI systems and the platforms that host them. The legal question at the heart of the complaint is whether social media companies and their operators, in addition to individual users, can be held criminally liable for content generated by their own AI tools.
That question has not been answered anywhere in the world, but courts are beginning to confront it. In the United States, conservative activist Robby Starbuck sued Meta in 2025 after its AI falsely linked him to the January 6 Capitol riot; Meta settled rather than litigate. A Georgia court dismissed a separate defamation case against OpenAI after ChatGPT fabricated claims about a radio host, ruling that the legal threshold for fault had not been met. No AI defamation case has reached a final judgment in any jurisdiction. Keller-Sutter’s complaint, filed under a criminal rather than civil framework and in a country whose defamation statute carries prison sentences of up to three years for deliberate slander, could establish the first binding precedent on AI platform liability for generated speech.
The filing arrives against the backdrop of what has become the most sustained regulatory crisis in Grok’s brief existence. Between 29 December 2025 and 8 January 2026, Grok’s image-generation tools created more than three million sexualised images, approximately 23,000 of which depicted minors, according to the Centre for Countering Digital Hate. The discovery triggered a cascade of legal and regulatory actions that has not stopped. On 2 January, French ministers reported the content to prosecutors, calling it “manifestly illegal.” On 12 January, the United Kingdom’s Ofcom opened a formal investigation into whether X had complied with the Online Safety Act, with potential penalties of up to £18 million or 10 per cent of global revenue. On 14 January, California’s attorney general announced a state investigation into whether xAI had violated California law. On 26 January, the European Commission opened a probe under the Digital Services Act into whether Grok’s deployment met the platform’s legal obligations regarding illegal content and harm to minors.
The enforcement actions escalated sharply in February. On 3 February, French prosecutors, accompanied by a cybercrime unit and Europol officers, raided X’s Paris offices. The investigation, originally opened over complaints about platform operation and data extraction, had widened to include charges of complicity in distributing child sexual abuse material, creating sexually explicit deepfakes, and Holocaust denial. Prosecutors have since summoned Musk and X’s former chief executive Linda Yaccarino for voluntary interviews on 20 April. A Dutch court separately ordered Grok banned from generating non-consensual intimate images. The EU had already fined X €120 million in December 2025 for violating the DSA’s transparency requirements, a penalty X is now challenging in what has become the first court test of the bloc’s landmark digital regulation.
In the United States, three Tennessee teenagers filed a class-action lawsuit against xAI on 16 March, alleging that Grok had been used to create sexualised images of them without their knowledge or consent. The images were reportedly shared on Discord and other platforms. On 25 March, Baltimore became the first American city to sue xAI over Grok-generated deepfake pornography, alleging violations of consumer protection law. A separate class action, filed by Lieff Cabraser Heimann & Bernstein, alleges that xAI knowingly designed and profited from an image generator used to produce and distribute child sexual abuse material while refusing to implement the content-safety measures adopted by every other major AI company.
The governance vacuum at xAI compounds the legal exposure. All 11 of xAI’s original co-founders have now departed the company, including researchers recruited from Google DeepMind, Google Brain, and Microsoft Research. Musk said in March that xAI was “not built right the first time around” and needed to be rebuilt from its foundations. The company was absorbed into SpaceX in February through an all-stock merger that raised immediate governance questions, creating a combined entity valued at $1.25 trillion that is now preparing for what would be the largest initial public offering in history. The regulatory and litigation risks surrounding Grok are, in effect, now embedded in the prospectus of a company seeking a $1.75 trillion public valuation.
What makes Keller-Sutter’s complaint distinct from the deepfake and CSAM cases is its simplicity. It does not involve image generation, undressing algorithms, or child exploitation. It involves a chatbot that was asked to insult a named public official and did so in language that, under Swiss law, constitutes a criminal offence. The factual question is narrow: who is responsible when an AI system, operating on a commercial platform, generates defamatory speech at a user’s request? If the user cannot be identified, does liability pass to the platform operator, to the AI developer, or to no one at all?
The answer to that question will shape the trajectory of AI governance far beyond Switzerland. Every major AI company operates chatbots capable of producing defamatory, abusive, or factually false statements about real people. Most have implemented guardrails designed to refuse such requests. Grok, by deliberate design, has operated with fewer restrictions than its competitors, a positioning Musk has marketed as a commitment to free expression. The Keller-Sutter case tests whether that positioning can survive contact with criminal law.
Switzerland is not the European Union and is not bound by the DSA. But Swiss defamation law is among the most stringent in Europe, and a criminal finding against an AI platform operator would reverberate through every jurisdiction currently weighing similar questions. The case is small in scope, involving a single post on a single platform about a single official. But the principle it seeks to establish, that the companies building these systems bear the kind of legal responsibility that the age of AI governance demands, is anything but small. If Grok can be prompted to defame a former president with impunity, the question is not what it says about the technology. It is what it says about the law.
We’ve seen our fair share of audiophile tomfoolery here at Hackaday, and we’ve even poked fun at a few of them over the years. Perhaps one of the most outrageously over the top that we’ve so far seen comes from [Pierogi Engineering] who (we’ll grant you, not in a spirit of audiophile expectation) has made a set of speaker interconnects using liquid mercury.
In terms of construction they’re transparent tubes filled with mercury and capped off with 4 mm plugs as you might expect. We hear them compared with copper cables and from where we’re sitting we can’t tell any difference, but as we’ve said in the past, the only metrics that matter in this field come from an audio analyzer.
But that’s not what we take away from the video below the break. Being honest for a minute, there was a discussion among Hackaday editors as to whether or not we should feature this story. He’s handling significant quantities of mercury, and it’s probably not overreacting to express concerns about his procedures. We wouldn’t handle mercury like that, and we’d suggest that unless you want to turn your home into a Superfund site, you shouldn’t either. But now someone has, so at least there’s no need for anyone else to answer the question of whether mercury makes a good interconnect.
Drone technology has changed the face of combat, especially for missions that require both precision and stealth. In fact, one cutting-edge drone can shoot down an enemy jet without ever seeing it. Drone engine technology may be changing as well, thanks to Honeywell Aerospace. The company won a contract from the U.S. Air Force to build a new propulsion system, which is expected to be more advanced than anything currently in use.
The new engine will take cues from Honeywell’s small-thrust-class SkyShot 1600 engine. The SkyShot is a compact and flexible engine built for unmanned military aircraft. It’s a versatile system, capable of working as either a turbojet or turbofan, while also delivering thrust between 800 and 2,800 pounds. The design can be modified to allow for even higher output if needed. The engine is built to handle high G-forces, giving Air Force drones the ability to track and catch fast-moving targets.
Honeywell plans to use digital modeling to speed up both design and performance evaluation, which is expected to shorten development and manufacturing timelines and let the company deliver the new propulsion system sooner. The approach also allows for smoother integration with other aircraft systems, improves manufacturing efficiency, and strengthens the supply chain.
How Honeywell technology supports unmanned aircraft
Honeywell Aerospace is an established player in the world of military drone technology, and their systems are used in a number of unmanned aircraft. That includes the fast and expensive MQ-9 Reaper, a commonly used combat drone. These systems include avionics and other tech that support flight operations and aircraft capability. The engine Honeywell built for the Reaper is the TPE-331, a turboprop that was initially designed in 1959.
Honeywell also designed and produced onboard systems for the Boeing MQ-25 Stingray, an unmanned aircraft used by U.S. Navy carriers to refuel planes while in flight. The Stingray’s introduction is just one of the big changes to hit the U.S. military’s fleet in 2025. In addition to designing crucial systems, Honeywell specializes in a variety of drone components, from flight controls to mission computers, radar, and more.
Thanks to an agreement with the U.S. government, Honeywell will begin increasing production of military components and related defense systems. The announcement was made in March of 2026 and though drones weren’t specifically mentioned, the technologies referenced are regularly used in modern unmanned aircraft. Those technologies include actuators, navigation systems, and combat-ready electronic devices.
The Wall Street Journal recently got a rare look inside Apple Park as part of the company’s 50th anniversary celebrations, with reporters joining Tim Cook for a walk through an archive that Cook himself admitted he had barely visited until preparations for the milestone began pulling decades of stored material back into the light.
The first thing that caught his eye was Apple’s original patent filing for the Apple II, a single document that Cook said effectively opened the floodgates for what eventually became more than 140,000 patent applications. A small drawing on a piece of paper that quietly set the direction for everything that followed.
An early 2001 iPod prototype came next, and Cook recalled the feeling of holding it for the first time a few years after joining the company. The idea of carrying a thousand songs in your pocket felt genuinely unbelievable at a moment when most people were still cycling through five-disc CD changers on road trips. He remembered loading a Beatles song the moment he got his hands on one and how that little white device changed his daily commute.
The 2007 iPhone launch remains Cook’s favorite moment in the company’s history, and a circuit board from one of the first working prototypes sitting on the table illustrated just how far the engineering team had to travel to get there. It looked more like a cutting board than something destined for a pocket, an early proof of concept that needed everything working together before the whole thing could be miniaturized. Cook noted that even inside Apple, employees were walking around with early models watching keys and coins scratch the plastic casing. Steve Jobs made the call to switch to glass within a matter of months, a timeline Cook described as close to impossible, comparing it to trying to land on the moon between January and June.
Cook touched on projects that never made it, framing each one as something the team learned from before showing up the next morning and getting back to work. That steadiness, he suggested, is what carried the company through five decades of setbacks and breakthroughs alike. An early Apple Watch prototype rounded out the tour, and Cook’s attention shifted forward, pointing to the combination of hardware, software, and services as the space where the next significant leap is most likely to come from.
NASA is going back to the Moon! We’ll follow the crew of Artemis II every step of the way.
Day 1 – Liftoff!
After resolving a last-minute communications issue with the Flight Termination System (FTS), the Artemis II Space Launch System (SLS) rocket lifted off from Launch Complex 39B at NASA’s Kennedy Space Center in Florida at 6:35 PM EDT.
Main engine cutoff (MECO) for the SLS rocket occurred at 6:43 PM, placing the Orion spacecraft and crew members Reid Wiseman, Victor Glover, Christina Koch, and Jeremy Hansen safely into orbit around the Earth. Just before 7:00 PM, all four solar array “wings” were successfully deployed from the European Service Module.
The next major milestones are the perigee and apogee raise maneuvers — two engine burns which will put the Orion spacecraft into a higher orbit, necessary for the eventual trans-lunar injection (TLI) burn which will put the vehicle on course for the Moon.
April is a strong month for horror with some of the biggest franchises and originals available to watch from the comfort of your living room. The month is typically associated with pranks and comedies, but if you want something more macabre, I’ve got you covered.
Here are my 7 top horror picks arriving across streaming services this April.
Alien
Alien Trailer HD (Original 1979 Ridley Scott Film) Sigourney Weaver – YouTube
When: April 1 Where: HBO Max (US); Disney+ (UK, AU)
Ridley Scott’s iconic sci-fi horror Alien is streaming throughout April, if you want to revisit one of the greats. And if you haven’t seen this masterpiece of a movie, now is the perfect time.
Alien is well-loved for its groundbreaking effects in the 70s, its iconic Xenomorph creature design, and the atmospheric tension that builds throughout. Other Alien movies can also be found on HBO Max and Disney+, but you really can’t beat the first one, even if some people do think Aliens was better!
Deathstalker

2025’s Deathstalker is a remake of the 1983 movie of the same name. Those looking for dark fantasy won’t want to miss this addition to Shudder’s library, as an alternative to some of the more modern horror movies it offers.
Daniel Bernhardt and Patton Oswalt lead the cast of the remake, which follows a powerful swordsman known as Deathstalker after he recovers a cursed amulet from a corpse-strewn battlefield. When he’s marked by dark magic and hunted by monstrous assassins, he must face the rising evil and break the curse before it’s too late.
Five Nights at Freddy’s 2
Five Nights at Freddy’s 2 | Official Trailer – YouTube
When: April 3 Where: Peacock (US); rent or buy (AU)
Are you ready for Freddy? The sequel arrives on Peacock in April, following a successful box office run. Despite being panned critically, Freddy Fazbear and friends continue to have a dedicated fanbase, so if you’re part of that, you’ll be happy to know it’s coming to streaming.
The adaptation of the successful horror game is set a year and a half after the previous movie, where we follow young Abby Schmidt as she gets manipulated by the Marionette, an animatronic from the original Freddy Fazbear’s Pizza restaurant, who wants revenge against her parents. The Marionette is one of the creepiest figures in the games, and now you get to see it come to life on film.
Earwig
EARWIG | Official Trailer | Now showing on MUBI – YouTube
Earwig is a strange movie, but when you’re a horror fan, that’s often a compliment. Set in a bleak post-war Europe, we follow a middle-aged man, Albert, as he cares for a young girl named Mia, who has no teeth.
Every day, he makes her new dentures out of ice, and one day, he’s told by a mysterious voice to prepare Mia for the outside world, where she has never been. Described as both a melodrama and a body horror, it’s a disturbing movie that may divide fans, but I can certainly say it’s stuck with me for a while.
Scream

When: April 10 Where: Netflix (US); Paramount+ (UK); rent or buy (AU)
2022’s Scream is the fifth entry into the slasher franchise, and why it wasn’t just called Scream 5 continues to baffle me. Anyway, don’t let that deter you; it is a very strong movie and one of my favorites in the series.
Despite the name, it’s not a remake; instead, it focuses on a new core cast of characters, though original stars like Courteney Cox, David Arquette, and Neve Campbell reprise their roles.
A Quiet Place Part II
A Quiet Place Part II (2021) – Final Trailer – Paramount Pictures – YouTube
When: April 11 Where: Netflix (US); Paramount+ (UK); rent or buy (AU)
Ahead of A Quiet Place Part III, which is due next year, why not catch up with the second in the successful horror series? It’s arriving on Netflix for US audiences, while UK audiences can watch on Paramount+.
A Quiet Place Part II continues to focus on the Abbott family (except for John Krasinski’s Lee) as they try to survive in a post-apocalyptic world inhabited by blind aliens with an acute sense of hearing, so it’s critical that they monitor how much noise they make. Horror doesn’t get much more tense than this.
Dolly
Dolly – Official Trailer (2026) Fabianne Therese, Seann William Scott, and Max the Impaler. – YouTube
Finally, at the end of April, we have Dolly. Creepy dolls are a staple in the horror genre, just look at Annabelle and Chucky, but this movie has got me creeped out by the synopsis alone.
Terror strikes when Macy and her boyfriend Chase are attacked while camping, and Macy is abducted by a tall, menacing figure who treats her as if she were a living doll. NWA wrestler Max the Impaler plays said figure, making it their movie debut.
You know what they say — you can’t keep a good website down. OldVersion.com, the repository of outdated software that has been serving up old versions of tools you need for the last twenty-five years, is not going away as we reported last year. Not only is it sticking around, it’s gotten a retro facelift inspired by Windows 3.1 or OS/2. Mostly Windows, given the screensaver, but we’ll let you find that for yourself.
We’re thrilled to see that OldVersion has gotten the support they need to keep going after running into financial troubles. According to founder Alex Levine, some of that support came as a result of the Hackaday article reporting on the then-upcoming closure, so kudos to you guys for stepping up.
While we absolutely love the retro redesign of the new website, there’s one thing notably lacking — an obvious donation button. Well, that and old-school HTTP support so you can get on with your retromachines, but that, at least, is in the works according to the site roadmap. It’s a little weird that in this year of the common era 2026 you have to do extra work to give up on HTTPS functionality, but it is the way it is.
In the meantime, the site is fully usable as long as you have HTTPS capability, or go through a proxy. Perhaps you could use this ESP8266 code to get started making one, if you don’t want to embarrass your old computer by using something more powerful than it as a pass-through.
Speaking of proxies, if old versions of software aren’t enough for you, how about an old version of the internet? We heard you like old versions, so you can visit an old version of OldVersion!
Note that if you’re reading this after 01/04/2026, the look-and-feel of OldVersion.com may not match what’s depicted here.
SpaceX is looking to the heavens for its upcoming initial public offering based on a $1.75 trillion valuation, according to confidential paperwork filed with the US Securities and Exchange Commission.
As reported by Bloomberg, the draft IPO registration is the first step toward a possible June offering that could raise approximately $75 billion. The filing allows the company to get feedback from the SEC before the information is released publicly.
The IPO may be open to more people than just the wealthiest investors. According to a report by The Motley Fool, SpaceX plans to allocate around 30% of the initial shares to “retail investors,” meaning individual investors. Normal retail allocation tends to be around 10% of shares.
A SpaceX representative didn’t immediately respond to a request for comment.
Why a SpaceX IPO is a big deal
Spaceflight is an incredibly expensive endeavor; SpaceX gets billions of dollars from the US government to launch satellites and help keep NASA’s programs running. Almost a year ago, the company set a target of launching every other day through the end of 2025 and ended up launching a record 165 orbital flights.
But SpaceX is no longer just a high-flying rocket company. Its Starlink division provides data access to homes, remote locations, airlines and direct to many mobile phones in areas where there’s no cellular coverage. It also recently acquired xAI, another of Elon Musk’s companies, and owns the social media site X (formerly Twitter).
It’s the AI angle that seems to be driving up the company’s valuation ahead of the IPO. The xAI all-stock acquisition valued the company and SpaceX at $1.25 trillion. This year, OpenAI and Anthropic PBC are also expected to go public.
Although those numbers are eye-popping, the company has plenty of challenges before it can get off the launchpad.
Starlink has announced a plan to send up new V3 third-generation satellites that should bring gigabit internet speeds to its network, but those won’t be ready until 2027. Getting them up requires SpaceX’s heavy-lift Starship vehicle, which has had limited success in testing so far. In the meantime, its current Starlink satellites have been exploding in orbit as recently as this week.
And for xAI, the skies aren’t exactly clear despite the current fervor for all things AI. Musk announced in mid-March that “xAI was not built right first time around, so is being rebuilt from the foundations up.” And the company is being sued by three teen girls and their guardians for “devastating” harm caused by its Grok AI generating child sexual abuse images.