The bundle includes Mario Kart World and Donkey Kong Bananza, two exclusive Switch 2 titles that showcase the fun ready to be had with the console this President’s Day. I’ve bought and played both games (not for $80), and I can say that they serve as perfect introductions to the Switch ecosystem, especially if you’ve not played many Nintendo games.
While I still own my Lenovo Legion Go S SteamOS handheld, ready to be used for a bigger library of games, the Nintendo Switch 2 is the ideal device if you don’t always want to tweak power settings or graphics settings in games. With Nvidia’s DLSS enabled on the custom T239 processor, games are surprisingly smooth to play, with consistent frame rates, and that’s a big difference compared to the Nintendo Switch.
A price hike might be close
It’s not a big discount, but the Nintendo Switch 2 is another game console at risk of a price increase. Multiple reports suggest the threat comes from a combination of tariffs and the ongoing RAM crisis.
With Valve also hesitant to announce launch pricing for the Steam Machine due to the high cost of memory, it seems likely that Nintendo will announce a price increase for the Switch 2 soon.
That makes now the best time to make a move if you’ve been considering a Switch 2 – and I can attest to it being worth every cent, especially since its life cycle has only just begun. Expect to see several exclusives launch later down the line, and The Duskbloods is already one that’s caught my eye.
Since the middle of last year, there have been at least three major AI “acqui-hires” in Silicon Valley. Meta invested more than $14 billion in Scale AI and brought on its CEO, Alexandr Wang; Google spent a cool $2.4 billion to license Windsurf’s technology and fold its cofounders and research teams into DeepMind; and Nvidia wagered $20 billion on Groq’s inference technology and hired its CEO and other staffers.
The frontier AI labs, meanwhile, have been playing a high-stakes and seemingly never-ending game of talent musical chairs. The latest reshuffle began three weeks ago, when OpenAI announced it was rehiring several researchers who had departed less than two years earlier to join Mira Murati’s startup, Thinking Machines. At the same time, Anthropic, which was itself founded by former OpenAI staffers, has been poaching talent from the ChatGPT maker. OpenAI, in turn, just hired a former Anthropic safety researcher to be its “head of preparedness.”
The hiring churn happening in Silicon Valley represents the “great unbundling” of the tech startup, as Dave Munichiello, an investor at GV, put it. In earlier eras, tech founders and their first employees often stayed on board until either the lights went out or there was a major liquidity event. But in today’s market, where generative AI startups are growing rapidly, equipped with plenty of capital, and prized especially for the strength of their research talent, “you invest in a startup knowing it could be broken up,” Munichiello told me.
Early founders and researchers at the buzziest AI startups are bouncing around to different companies for a range of reasons. A big incentive for many, of course, is money. Last year Meta was reportedly offering top AI researchers compensation packages in the tens or hundreds of millions of dollars, giving them not just access to cutting-edge computing resources but also … generational wealth.
But it’s not all about getting rich. Broader cultural shifts that rocked the tech industry in recent years have made some workers worried about committing to one company or institution for too long, says Sayash Kapoor, a computer science researcher at Princeton University and a senior fellow at Mozilla. Employers used to safely assume that workers would stay at least until the four-year mark when their stock options were typically scheduled to vest. In the high-minded era of the 2000s and 2010s, plenty of early cofounders and employees also sincerely believed in the stated missions of their companies and wanted to be there to help achieve them.
Now, Kapoor says, “people understand the limitations of the institutions they’re working in, and founders are more pragmatic.” The founders of Windsurf, for example, may have calculated their impact could be larger at a place like Google that has lots of resources, Kapoor says. He adds that a similar shift is happening within academia. Over the past five years, Kapoor says, he’s seen more PhD researchers leave their computer-science doctoral programs to take jobs in industry. There are higher opportunity costs associated with staying in one place at a time when AI innovation is rapidly accelerating, he says.
Investors, wary of becoming collateral damage in the AI talent wars, are taking steps to protect themselves. Max Gazor, the founder of Striker Venture Partners, says his team is vetting founding teams “for chemistry and cohesion more than ever.” Gazor says it’s also increasingly common for deals to include “protective provisions that require board consent for material IP licensing or similar scenarios.”
Gazor notes that some of the biggest acqui-hire deals that have happened recently involved startups founded long before the current generative AI boom. Scale AI, for example, was founded in 2016, a time when the kind of deal Wang negotiated with Meta would have been unfathomable to many. Now, however, these potential outcomes might be considered in early term sheets and “constructively managed,” Gazor explains.
If you have a certain nostalgia for the warmth and saturation of the sound of cassettes, the We Are Rewind WE-001 is a fine choice. It looks lovely in the bright orange colourway, plus modern conveniences such as a USB-C rechargeable battery and Bluetooth connectivity are welcome to bring it into the 21st century. Granted, it doesn’t sound as great as modern streaming does, but then again, that’s not the point, and if you’re considering the WE-001, you know that already.
Hefty aluminium chassis
Bluetooth pairing is easy, and works decently well
The warmth and saturated feel of a cassette has a strange appeal
Not the most portable of players
No auto-stop function is a shame
Key Features
It plays cassettes
A modern way of playing albums, or your old mixtapes
Bluetooth 5.1
You can wirelessly stream to a set of headphones or a speaker
Introduction
The We Are Rewind WE-001 seeks to provide a modern outlet for the cassette resurgence that’s currently underway.
In the UK, cassette sales soared to a 20-year high in 2025, with a total sales volume of 164,000 units, making now a good time to pick up a cassette player to play them on.
A number of companies have modernised the cassette player with features such as a built-in battery, USB-C charging and even Bluetooth connectivity, and French firm We Are Rewind is one of the frontrunners with this fetching WE-001 model.
At £129 / $159, the WE-001 is reasonably priced, and it draws two immediate comparisons. For one, it sits close to what you’d pay for a second-hand Walkman; for another, it’s not too far off capable budget music players, including the five-star FiiO JM21 and HiBy R3 II.
Is this player from We Are Rewind a novelty item? I’ve dug out some cassettes and put it through its paces to find out.
Design
Hefty aluminium frame
Fetching orange colourway
Tactile controls
The WE-001 is on the large and heavy side, tipping the scales at 404g, and it’s significantly bigger than the last run of cassette players from when the likes of Sony were still manufacturing them.
Part of the reason this We Are Rewind player is so large is that only one real cassette mechanism is available these days, from a single supplier, and if you want to build a modern cassette player, it’s the only one to use.
Another reason it’s as big as it is: the design of this orange lad is based on Sony’s first-ever Walkman cassette player – the TPS-L2 – from 1979. Where Sony’s was plastic, the WE-001 is aluminium, contributing both to a quality feel in-hand and to its heavyweight nature.
There’s a pleasant We Are Rewind logo on the front, plus a small circular window in the door, which you have to open manually from the top. The old Sony it’s based on had a rectangular window, for what it’s worth. On the top side are the cassette player’s controls, which are oddly the wrong way around as you look at the player – the play button’s triangle points to the right – so you have to turn the unit around.
The controls from left to right in the correct orientation are as follows – battery and Bluetooth pairing indicator LEDs, a Bluetooth pairing button, a yellow record button, play, rewind, fast forward and stop. The buttons aren’t soft-touch and have a pleasant tactile finish when pressed.
The right side houses the WE-001’s connections: a USB-C port for charging (charging only – it won’t power an external USB-C DAC such as the iFi Go Link Max for listening with higher-powered wired headphones), two 3.5mm headphone jacks – one for recording and the other for listening – plus a volume wheel. The rear of the unit also has a small hole for motor speed adjustment.
It looks fetching in the ‘Serge’ orange colour (named for Serge Gainsbourg) I have, although it’s also available in ‘Kurt’ (Cobain – light blue) and ‘Keith’ (Richards – black) if you prefer, and more recently special editions for Elvis and Duran Duran have joined the line-up.
Specification
Bluetooth 5.1 support with no specific codec mentioned
Headphone jack is suitable for easy-to-drive, low impedance cans
Reasonable battery life
The WE-001’s spec sheet is threadbare, but there are some things worth talking about. It works with all kinds of cassettes, with Type I through IV all supported. Any tapes you have should be okay, all being well.
The cassette player covers a 30Hz to 12,500Hz frequency range and supports Bluetooth 5.1. No specific codec support is listed, whether that’s SBC, AAC, or something more advanced such as aptX HD. Quite frankly, the fact that it supports Bluetooth in any guise seems like a bit of a novelty, but it paired okay with both my Focal Bathys and Audio Pro C10 MKII during testing.
The spec sheet is also very specific about this cassette player not being ‘compatible’ with earbuds, although I also used a pair of FiiO FH19 IEMs with the WE-001 and they worked fine. Maybe it goes against We Are Rewind’s advice, but I am a rebel at heart.
The output of the headphone jack isn’t powerful, with 2mW per channel into 32 ohms, making this We Are Rewind cassette player suitable for easy-to-drive headphones, rather than more difficult ones that would usually need their own amp or DAC.
That being said, I did try a set of Drop x Sennheiser HD6XXs as a reference point, and they worked – I just had to turn the volume up a lot to reach a listenable level. You’re better off going for easier-to-drive headphones for a more optimal experience, though.
As opposed to running on AA batteries like cassette players of old, the WE-001 features its own rechargeable built-in lithium-ion battery. It has a 2000mAh capacity, and We Are Rewind quotes between ten and 12 hours per charge. In my testing, that seems about right.
Performance
Warmer tones than streaming
Compression and saturation seem part of the experience
Auto-stop would have been nice
Normally, when reviewing audio gear, it can be very easy to get analytical and scientific about how a product sounds, with lots of jargon thrown around. I’m as guilty of that as the next man.
With the WE-001, though, it felt right to take a different approach for a key reason. Cassettes weren’t the be-all-and-end-all of fidelity when they were new, and using them again in 2026 is more of an experiential undertaking than a scientific one. Therefore, it warrants a different kind of perspective.
For testing, I took an album I know like the back of my hand (and ironically one of the only cassettes I own), an Abbey Road real-time cassette copy of Marillion’s Afraid of Sunlight that I suspect hasn’t been played in the thirty years since the album’s release, and listened to it all the way through on the WE-001 via my Focal Bathys and the 3.5mm jack. I then listened to the original 1995 mix over Tidal, using my Honor Magic V3 and an iFi Go Link Max I had lying around, for a ‘modern’ equivalent.
Naturally, listening to it digitally results in a cleaner, more expansive recording than the analogue form a cassette takes. The apparent benefits of listening on cassette are the very shortcomings that caused us to move to hi-res digital music when the option became available – the warmth and saturation of an analogue sound, complete with some very slight tape hiss in the background.
Thanks to the warmth of the medium, Afraid of Sunlight on cassette is a completely different listening experience to even the CD or a streaming version. Granted, the soundstage isn’t too wide, and things can sound quite congested compared with digital, but there is a strange appeal to it.
The WE-001 is quite a meaty-sounding player in itself, with its tuning prioritising some low end, even though it only goes down to 30Hz. The potent bass line and gritty guitar lines on Cannibal Surf Babe are quite prominent throughout the track and suit the way this player sounds, and there was a strange satisfaction in hearing it in this compressed form. Maybe it’s the weird nostalgia for a medium I didn’t experience first time around talking, but the appeal is there.
It isn’t a fatiguing listen either, arguably because some treble elements, such as cymbal hits and piano notes, are smoothed over. This tape player doesn’t have any form of Dolby Noise Reduction built in, although I didn’t hear much in the way of tape hiss or speed variations with the Afraid of Sunlight tape, or with another Abbey Road tape of Fish’s Vigil in a Wilderness of Mirrors.
What listening to cassettes the way they were intended does is revive the linearity of music that I think has been lost in the age of instant access provided by streaming. If you went out and bought an album, that’s what you listened to, rather than getting bored with it halfway through and finding something else.
In addition, you purchased and owned an album for one price, rather than paying a monthly subscription fee for the chance to listen to thousands of albums for as long as you keep the payments rolling. You were therefore more inclined to listen to the full duration of what you’d spent hard-earned money on, which I think is slowly becoming a lost art. It’s nice to be reminded of that experience.
Maybe it’s because of my older-leaning tastes, but as much as I have a ridiculous playlist of several thousand songs on Spotify and Tidal that gets put on every so often, I do listen to music in album-sized chunks, which I know a lot of folks my age don’t. That habit didn’t necessarily change when using this We Are Rewind player, but it did give listening more of an analogue feel, and I can partially see the appeal.
One small nitpick I have with the WE-001 is the lack of auto-stop when fast-forwarding or rewinding: once the cassette reaches either end of its tape, the internal mechanism emits a horrible motor whine. Auto-stop would have been a welcome quality-of-life addition and would help safeguard your tapes.
Should you buy it?
You want a feature-rich, portable cassette deck in 2026
The We Are Rewind WE-001 is one of a handful of cassette players designed for the modern age, and I’d argue it’s the most stylish and convenient.
You don’t necessarily have any reason to listen to cassettes
You obviously need some reason to go back to listening to cassettes in the first place, and even if you’re curious, I’d still stick to other formats if absolute fidelity were the goal.
Final Thoughts
If you have a certain nostalgia for the warmth and saturation of the sound of cassettes, the We Are Rewind WE-001 is a fine choice. It looks lovely in the bright orange colourway, plus modern conveniences such as a USB-C rechargeable battery and Bluetooth connectivity are welcome to bring it into the 21st century.
Granted, it doesn’t sound as great as modern streaming does, but then again, that’s not the point, and if you’re considering the WE-001, you know that already.
Similar money can buy you some lovely digital audio players with potent headphone outputs, such as the FiiO JM21 and HiBy R3 II, both of which I own and use daily: they hold a large local music library and give me a separate device for listening to music that won’t be interrupted by an onslaught of notifications.
But if you want the fun and nostalgia of a cassette above all, plus the freedom to connect it to modern equipment, this is a fun device. And we do need some more of that.
How We Test
I tested the We Are Rewind WE-001 for a week, listening to a selection of cassette albums and comparing them to their modern streaming equivalents. I used a range of over-ear and in-ear headphones, and connected the player to a Bluetooth speaker to judge playback quality.
Tested for a week
Tested with real-world use
FAQs
What types of cassettes does the We Are Rewind WE-001 support?
The We Are Rewind WE-001 supports all types of cassette, from Type I through to Type IV.
New chips, not new designs, will define Apple’s next entry-level iPad and the iPhone 17e as the company advances its budget lineup incrementally.
The iPhone 17e may well look exactly like its predecessor
A report from MacOtakara published on February 6 says Apple plans to keep the current designs for both devices while upgrading their processors. The report reinforces a familiar pattern in Apple’s lineup, where entry-level models advance through internal improvements rather than visible redesigns. MacOtakara is a long-running Apple rumor site with solid supply-chain access and a track record that, while mixed, is generally reliable. Rumor Score: 🤔 Possible
If you’re in need of a budget-friendly pair of wireless earbuds that don’t skimp on features, then look no further than the Pixel Buds 2a.
The Pixel Buds 2a were designed with one specific goal in mind: to provide a compact, lightweight pair of earbuds that you can adjust to suit your needs.
For instance, despite the Pixel Buds 2a weighing a mere 4.3g each, Google’s included active noise cancellation within the design to filter out any unwanted background noise in favour of what you’re hearing through the buds.
The ANC is surprisingly effective for such a budget device, and it can be adjusted to suit your preferences.
The buds themselves have a lightweight and comfortable fit, easily slipping into the contours of your ears. They even boast a degree of water resistance, so you can feel confident wearing them in light showers or during a tough workout.
You can even pair them with a Pixel phone to get real-time Google Assistant updates on your commute, or have some relaxing music to listen to when you’re trying to drown out the sound of the London Underground.
And because these pods integrate with Google’s Gemini voice assistant, you don’t need to reach for your phone when controlling the Pixel Buds 2a either. Want to change the song? Just ask Gemini to do it. Need to turn up the volume? Again, Gemini has you covered.
Battery life is also competitive here, with up to 20 hours when ANC is engaged, so you’ll always have enough to get you through a day’s work.
They might not match Apple’s AirPods feature-for-feature, but for what the Pixel Buds 2a offer, they’re a better buy for users of the Google ecosystem.
On Wednesday, OpenAI CEO Sam Altman and Chief Marketing Officer Kate Rouch complained on X after rival AI lab Anthropic released four commercials, two of which will run during the Super Bowl on Sunday, mocking the idea of including ads in AI chatbot conversations. Anthropic’s campaign seemingly touched a nerve at OpenAI just weeks after the ChatGPT maker began testing ads in a lower-cost tier of its chatbot.
Altman called Anthropic’s ads “clearly dishonest,” accused the company of being “authoritarian,” and said it “serves an expensive product to rich people,” while Rouch wrote, “Real betrayal isn’t ads. It’s control.”
Anthropic’s four commercials, part of a campaign called “A Time and a Place,” each open with a single word splashed across the screen: “Betrayal,” “Violation,” “Deception,” and “Treachery.” They depict scenarios where a person asks a human stand-in for an AI chatbot for personal advice, only to get blindsided by a product pitch.
Anthropic’s 2026 Super Bowl commercial.
In one spot, a man asks a therapist-style chatbot (a woman sitting in a chair) how to communicate better with his mom. The bot offers a few suggestions, then pivots to promoting a fictional cougar-dating site called Golden Encounters.
In another spot, a skinny man looking for fitness tips instead gets served an ad for height-boosting insoles. Each ad ends with the tagline: “Ads are coming to AI. But not to Claude.” Anthropic plans to air a 30-second version during Super Bowl LX, with a 60-second cut running in the pregame, according to CNBC.
In the X posts, the OpenAI executives argue that these commercials are misleading because the planned ChatGPT ads will appear labeled at the bottom of conversational responses in banners and will not alter the chatbot’s answers.
But there’s a slight twist: OpenAI’s own blog post about its ad plans states that the company will “test ads at the bottom of answers in ChatGPT when there’s a relevant sponsored product or service based on your current conversation,” meaning the ads will be conversation-specific.
The financial backdrop explains some of the tension over ads in chatbots. As Ars previously reported, OpenAI struck more than $1.4 trillion in infrastructure deals in 2025 and expects to burn roughly $9 billion this year while generating about $13 billion in revenue. Only about 5 percent of ChatGPT’s 800 million weekly users pay for subscriptions. Anthropic is also not yet profitable, but it relies on enterprise contracts and paid subscriptions rather than advertising, and it has not taken on infrastructure commitments at the same scale as OpenAI.
In late 2022, when generative AI tools landed in students’ hands, classrooms changed almost overnight. Essays written by algorithms appeared in inboxes. Lesson plans suddenly felt outdated. And across the country, schools asked the same questions: How do we respond — and what comes next?
Some educators saw AI as a threat that enables cheating and undermines traditional teaching. Others viewed it as a transformative tool. But a growing number are charting a different path entirely: teaching students to work with AI critically and creatively while building essential literacy skills.
The challenge isn’t just about introducing new technology. It’s about reimagining what learning looks like when AI is part of the equation. How do teachers create assignments that can’t be easily outsourced to generative AI tools? How do elementary students learn to question AI-generated content? And how do educators integrate these tools without losing sight of creativity, critical thinking and human connection?
Recently, EdSurge spoke with three educators who are tackling these questions head-on: Liz Voci, an instructional technology specialist at an elementary school; Pam Amendola, a high school English teacher who reimagined her Macbeth unit to include AI; and Brandie Wright, who teaches fifth and sixth graders at a microschool, integrating AI into lessons on sustainability.
EdSurge: What led you to integrate AI into your teaching?
Amendola: When OpenAI’s ChatGPT burst onto the scene in November 2022, it upended education and sent teachers scrambling. Students were suddenly using AI to complete assignments. Many students thought, Why should I complete a worksheet when AI can do it for me? Why write a discussion post when AI can do it better and faster?
Our education system was built for an industrial age, but we now live in a technological age where tasks are completed rapidly. Learning at school should be a time of discovery, but education remains stuck in the past. We are in a place I call “the in-between.” In this place, I discovered a need to educate students on AI literacy alongside the themes and structure of the English language.
I reimagined my Macbeth unit to integrate AI with traditional learning methods. I taught Acts I-III using time-tested approaches, building knowledge of both Shakespeare and AI into each act. In Act IV, students recreated their assigned scenes using generative AI to make an original movie. For Act V, they used block-based programming to have robots act out their scenes. My assessment had nothing to do with writing an essay, so it was uncheatable. I encouraged students to work with me to design the lesson so I could determine the best way to help them learn.
Voci: Last fall, I was in a literacy meeting with administrators and teachers where I heard concerns about the new science-of-reading materials not engaging students’ interest. While the books were highly accessible, students had no interest in reading them. This was my lightbulb moment. If we could use AI tools to develop engaging and accessible reading passages for students, we could also teach foundational AI literacy skills at the same time.
This is where The Perfect Book Project was born. Students work with teachers to develop their own perfect reading book that is both engaging and accessible, learning literary skills alongside how to work with and evaluate AI-generated content. In its pilot, I worked directly with teachers as students conceptualized, drafted, edited and published their books. I spent hundreds of hours creating prompts with content guardrails, accessibility constraints and research-based foundational literacy knowledge to guide students and teachers through the process.
Wright: I’m doing quite a bit of work around the U.N. Sustainable Development Goals, teaching our explorers the impact of our actions not just on ourselves but also on others and the environment. I wanted to see them use AI to deepen their knowledge and serve as a thought partner as they develop solutions to issues like climate change.
I created a lesson called “Investigating Energy Efficiency and Sustainability in Our Spaces.” The explorers went on a sustainability scavenger hunt around campus to find examples of energy-efficient items and sustainable practices. They used AI tools to analyze their findings, interpret and evaluate AI responses for accuracy and potential bias, and reflect on how technology and human decisions work together to create sustainable solutions. The AI in this lesson wasn’t about the tools they used, but more about how AI is viewed in the context of what they are learning.
What shifts in student learning did you observe?
Voci: One eye-opening moment was during my first lesson on hallucinations and bias with a third grade class. After introducing the concepts at a developmentally appropriate level, I had them reread their manuscripts through the lens of an AI hallucination and bias detective. It didn’t take long for the first student to find the first hallucination. There was incorrect scoring in a football game. AI counted a touchdown as one point. One student’s hand flew up; he was so excited to explain to me and the class how the model had incorrectly scored the game.
This discovery lit a fire under the rest of the class to begin looking more closely at every word of their text and not take it at face value. The class went on to find more hallucinations and discover some generalizations that did not represent their intentions.
Wright: I saw the explorers develop their critical thinking as they asked questions about how AI was used, how AI makes its decisions and whether this affects the environment. I truly appreciate that this age group holds onto their creativity and imagination. They don’t want AI to do the creating for them. They still want to draw their own pictures and tell their own stories.
Amendola: It was uncomfortable for my honors students to try something new. They were out of their element and craved the structure of the rubric. I had to let go of traditional grading structures first before I could help them embrace the ambiguity. Their willingness to explore and make mistakes was wonderful. The collaboration helped create a sense of class community that resulted in learning a new skill.
What’s your advice for educators hesitant to explore AI?
Amendola: Don’t be afraid to try new things. Keep in mind that the greatest success first requires a change of mindset. Only then can you open the doors to what generative AI can do for your students.
Voci: Don’t let the fear, weight and speed of AI advancement paralyze you. Find small, intentional steps that are grounded in human-centered values to move forward with your own knowledge, and then find ways to connect your new knowledge to support student learning. In this age of AI, we need to give our fellow educators the same resources, scaffolding and grace.
Wright: Jump in!
Join the movement at https://generationai.org to participate in our ongoing exploration of how we can harness AI’s potential to create more engaging and transformative learning experiences for all students.
X is experimenting with a new way for AI to write Community Notes. The company is testing a new “collaborative notes” feature that allows human writers to request an AI-written Community Note.
It’s not the first time the platform has experimented with AI in Community Notes. The company started a pilot program last year to allow developers to create dedicated AI note writers. But the latest experiment sounds like a more streamlined process.
According to the company, when an existing Community Note contributor requests a note on a post, the request “now also kicks off creation of a Collaborative Note.” Contributors can then rate the note or suggest improvements. “Collaborative Notes can update over time as suggestions and ratings come in,” X says. “When considering an update, the system reviews new input from contributors to make the note as helpful as possible, then decides whether the new version is a meaningful improvement.”
X doesn’t say whether it’s using Grok or another AI tool to actually generate the fact check. If it were using Grok, that would be in line with how a lot of X users currently invoke the AI on threads with replies like “@grok is this true?”
Community Notes has often been criticized for moving too slowly, so adding AI into the mix could help speed up the process of getting notes published. Keith Coleman, who oversees Community Notes at X, wrote in a post that the update also provides “a new way to make models smarter in the process (continuous learning from community feedback).” On the other hand, we don’t have to look very far to find examples of Grok losing touch with reality or worse.
According to X, only Community Note Contributors with a “top writer” status will be able to initiate a collaborative note to start, though it expects to expand availability “over time.”
Amazon has refreshed its Fire TV lineup in the UK, with three new ranges available to buy right now.
The updated Fire TV 2-Series, Fire TV 4-Series, and Fire TV Omni QLED promise slimmer designs, faster performance and smarter picture tech. All of this is aimed at getting you to your shows quicker.
Leading this current crop is the Fire TV Omni QLED, available in 50-, 55- and 65-inch sizes. Amazon says the new panel is 60% brighter than previous models, with double the local dimming zones for punchier highlights and deeper blacks. Dolby Vision and HDR10+ Adaptive are on board. In addition, the TV can automatically adjust colour and brightness based on your room lighting.
The Omni QLED also leans heavily into smart features. OmniSense uses presence detection to wake the TV when you enter the room and power it down when you leave. Meanwhile, Interactive Art reacts to movement, turning the screen into something closer to a living display than a black rectangle on the wall.
Further down the range, the redesigned Fire TV 2-Series and Fire TV 4-Series cover screen sizes from 32 to 55 inches. The 2-Series sticks to HD resolution, while the 4-Series steps up to 4K. Both benefit from ultra-thin bezels and a new quad-core processor that Amazon says makes them 30% faster than before. It’s a modest upgrade on paper. However, it is one that should make everyday navigation feel noticeably snappier.
All three ranges run Fire TV OS, with Amazon continuing to push its content-first approach. It surfaces apps, live TV and recommendations as soon as you turn the screen on.
The new Fire TV models are available now in the UK, with introductory pricing running until 10 February 2026.
With faster internals and a brighter flagship model, Amazon’s latest Fire TVs look like a solid refresh, especially if you’re after a big screen without a premium TV price tag.
For years, we’ve been subjected to an endless parade of hyperventilating claims about the Biden administration’s supposed “censorship industrial complex.” We were told, over and over again, that the government was weaponizing its power to silence conservative speech. The evidence for this? Some angry emails from White House staffers that Facebook ignored. That was basically it. The Supreme Court looked at it and said there was no standing because there was no evidence of coercion (and even suggested that the plaintiffs had fabricated facts unsupported by reality).
But now we have actual, documented cases of the federal government using its surveillance apparatus to track down and intimidate Americans for nothing more than criticizing government policy. And wouldn’t you know it, the same people who spent years screaming about censorship are suddenly very quiet.
If any of the following stories had happened under the Biden administration, you’d hear screams from the likes of Matt Taibbi, Bari Weiss, and Michael Shellenberger, about the crushing boot of the government trying to silence speech.
But somehow… nothing. Weiss is otherwise occupied—busy stripping CBS News for parts to please King Trump. And the dude bros who invented the “censorship industrial complex” out of their imaginations? Pretty damn quiet about stories like the following.
Taibbi is spending his time trying to play down the Epstein files and claiming that Meta blocking ICE apps at the direct request of DHS isn’t censorship because he hasn’t seen any evidence that the federal government was behind it. Dude. Pam Bondi publicly stated she called Meta to have them removed. Shellenberger, who is now somehow a “free speech professor” at Bari Weiss’ collapsing fake university, seems to just be posting non-stop conspiracy theory nonsense from cranks.
Let’s start with the case that should make your blood boil. The Washington Post reports that a 67-year-old retired Philadelphia man — a naturalized U.S. citizen originally from the UK — found himself in the crosshairs of the Department of Homeland Security after he committed the apparently unforgivable sin of… sending a polite email to a government lawyer asking for mercy in a deportation case.
Here’s what he wrote to a prosecutor who was trying to deport an Afghan man who feared the Taliban would take his life if he were sent there. The Philadelphia resident found the prosecutor’s email address and sent the following:
“Mr. Dernbach, don’t play Russian roulette with H’s life. Err on the side of caution. There’s a reason the US government along with many other governments don’t recognise the Taliban. Apply principles of common sense and decency.”
That’s it. That’s the email that triggered a federal response. Within hours — hours — of sending this email, Google notified him that DHS had issued an administrative subpoena demanding his personal information. Days later, federal agents showed up at his door.
Showed. Up. At. His. Door.
A retired guy sends a respectful email asking the government to be careful with someone’s life, and within the same day, the surveillance apparatus is mobilized against him.
The tool being weaponized here is the administrative subpoena (something we’ve been calling out for well over a decade, under administrations of both parties), which is a particularly insidious instrument because it doesn’t require a judge’s approval. Unlike a judicial subpoena, where investigators have to show a judge enough evidence to justify the search, administrative subpoenas are essentially self-signed permission slips. As TechCrunch explains:
Unlike judicial subpoenas, which are authorized by a judge after seeing enough evidence of a crime to authorize a search or seizure of someone’s things, administrative subpoenas are issued by federal agencies, allowing investigators to seek a wealth of information about individuals from tech and phone companies without a judge’s oversight.
While administrative subpoenas cannot be used to obtain the contents of a person’s emails, online searches, or location data, they can demand information specifically about the user, such as what time a user logs in, from where, using which devices, and revealing the email addresses and other identifiable information about who opened an online account. But because administrative subpoenas are not backed by a judge’s authority or a court’s order, it’s largely up to a company whether to give over any data to the requesting government agency.
The Philadelphia retiree’s case would be alarming enough if it were a one-off. It’s not. Bloomberg has reported on at least five cases where DHS used administrative subpoenas to try to unmask anonymous Instagram accounts that were simply documenting ICE raids in their communities. One account, @montcowatch, was targeted simply for sharing resources about immigrant rights in Montgomery County, Pennsylvania. The justification? A claim that ICE agents were being “stalked” — for which there was no actual evidence.
The ACLU, which is now representing several of these targeted individuals, isn’t mincing words:
“It doesn’t take that much to make people look over their shoulder, to think twice before they speak again. That’s why these kinds of subpoenas and other actions—the visits—are so pernicious. You don’t have to lock somebody up to make them reticent to make their voice heard. It really doesn’t take much, because the power of the federal government is so overwhelming.”
This is textbook chilling effects on speech.
Remember, it was just a year and a half ago that, in Murthy v. Missouri, the Supreme Court found no First Amendment violation when the Biden administration sent emails to social media platforms—in part because the platforms felt entirely free to say no. The platforms weren’t coerced; they could ignore the requests and did.
Now consider the Philadelphia retiree. He sends one polite email. Within hours, DHS has mobilized to unmask him. Days later, federal agents are at his door. Does that sound like someone who’s free to speak his mind without consequence?
Even if you felt that what the Biden admin did was inappropriate, it didn’t involve federal agents showing up at people’s homes.
That is what actual government suppression of speech looks like. Not mean tweets from press secretaries that platforms ignored, but federal agents showing up at your door because you sent a (perfectly nice) email the government didn’t like.
So we have DHS mobilizing within hours to identify a 67-year-old retiree who sent a polite email. We have agents showing up at citizens’ homes to interrogate them about their protected speech. We have the government trying to unmask anonymous accounts that are documenting law enforcement activities — something that is unambiguously protected under the First Amendment.
Recording police, sharing that recording, and doing so anonymously is legal. It’s protected speech. And the government is using administrative subpoenas to try to identify and intimidate the people doing it.
For years, we heard that government officials sending emails to social media companies — emails the companies ignored — constituted an existential threat to the First Amendment. But when the government actually uses its coercive power to track down, identify, and intimidate citizens for their speech?
Crickets.
This is what a real threat to free speech looks like. Not “jawboning” that platforms can easily refuse, but the full weight of federal surveillance being deployed against anyone who dares to criticize the administration. The chilling effect here is the entire point.
As the ACLU noted, this appears to be “part of a broader strategy to intimidate people who document immigration activity or criticize government actions.”
If you spent the last few years warning about government censorship, this is your moment. This is the actual thing you claimed to be worried about. But, of course, all those who pretended to care about free speech really only meant they cared about their own team’s speech. Watching the government actually suppress critics? No big deal. They probably deserved it.
Elon Musk told podcast host Dwarkesh Patel and Stripe co-founder John Collison that space will become the most economically compelling location for AI data centers in less than 36 months, a prediction rooted not in some exotic technical breakthrough but in the basic math of electricity supply: chip output is growing exponentially, and electrical output outside China is essentially flat.
Solar panels in orbit generate roughly five times the power they do on the ground because there is no day-night cycle, no cloud cover, and no atmospheric loss. The system economics are even more favorable because space-based operations eliminate the need for batteries entirely, making the effective cost roughly 10 times cheaper than terrestrial solar, Musk said. The terrestrial bottleneck is already real.
Musk said powering 330,000 Nvidia GB300 chips — once you account for networking hardware, storage, peak cooling on the hottest day of the year, and reserve margin for generator servicing — requires roughly a gigawatt at the generation level. Gas turbines are sold out through 2030, and the limiting factor is the casting of turbine vanes and blades, a process handled by just three companies worldwide.
Five years from now, Musk predicted, SpaceX will launch and operate more AI compute annually than the cumulative total on Earth, expecting at least a few hundred gigawatts per year in space. Patel estimated that 100 gigawatts alone would require on the order of 10,000 Starship launches per year, a figure Musk affirmed. SpaceX is gearing up for 10,000 launches a year, Musk said, and possibly 20,000 to 30,000.
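For a sense of how those figures hang together, here is a rough back-of-envelope sketch in Python. The per-chip power draw and the overhead multiplier are illustrative assumptions, not figures from the interview; only the 330,000-chip count, the roughly one-gigawatt total, the 100-gigawatt orbital target and the 10,000-launches-per-year estimate come from the conversation described above.

```python
# Back-of-envelope check of the power figures quoted above.
# CHIP_POWER_KW and OVERHEAD are assumptions for illustration only.

CHIPS = 330_000          # Nvidia GB300 chips (Musk's figure)
CHIP_POWER_KW = 1.4      # assumed draw per chip, in kW
OVERHEAD = 2.2           # assumed multiplier covering networking, storage,
                         # peak cooling and generator reserve margin

chip_load_gw = CHIPS * CHIP_POWER_KW / 1_000_000   # raw chip load in GW
generation_gw = chip_load_gw * OVERHEAD            # load at the generation level

print(f"Chip load:      {chip_load_gw:.2f} GW")
print(f"With overheads: {generation_gw:.2f} GW")   # lands near the ~1 GW Musk cites

# Patel's estimate: ~10,000 Starship launches per year for 100 GW in orbit
orbital_target_gw = 100
launches_per_year = 10_000
mw_per_launch = orbital_target_gw * 1_000 / launches_per_year
print(f"Implied capacity per launch: {mw_per_launch:.0f} MW")  # ~10 MW per launch
```

Under these assumed values, the 330,000-chip cluster comes out at about a gigawatt of generation, and Patel's launch-rate figure implies each Starship would need to deliver on the order of 10 MW of orbital compute and power hardware.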