Looking for an inexpensive pair of earbuds to toss in your gym bag? You can snag our favorite budget wireless earbuds, the JLab Go Pop ANC, for a shockingly low $19 on Amazon, an $11 markdown from their usual price. Don’t let the cost fool you: these earbuds have surprised multiple WIRED writers with their clear sound, water resistance, and ANC performance.
These earbuds have all the features you’d expect from a pair five times the price. They sport IP55 water and dust resistance, perfect for a sweaty trip to the gym or a long run on the beach, and multipoint pairing in case you want to use them for a quick call on your laptop. The included app has an adjustable equalizer, something not even all expensive earbuds can claim, plus programmable controls in case you don’t like the default button layout. Battery life is even pretty decent for the category, with eight hours of juice in the buds and up to 32 hours total with the included charging case.
When you’ve got the tunes going, the JLab Go Pop’s active noise-canceling is surprisingly effective, easily tuning out the hum of an HVAC system and other annoyances. Like other ANC-capable earbuds, they also come equipped with a transparency mode for letting in important sounds, and it works remarkably well given how little these earbuds cost. You might want to consider something more serious for your next long-haul flight, but these work in a pinch for some yard work or a quick workout.
After 400 years underwater, a Swedish Navy vessel in the Baltic Sea off Stockholm has become visible. Sunk on purpose back in the 17th century, the ship has resurfaced after the waters reached their lowest level in the past 100 years. Marine archaeologist Jim Hansson of Stockholm’s Vrak Museum of Wrecks explained the conditions that led to its reemergence to AFP, as reported by CBS. “There has been a really long period of high pressure here around our area in the Nordics. So the water from the Baltic has been pushed out to the North Sea and the Atlantic.”
The unidentified ship was sunk around 1640 so it could be used to form the foundation of a bridge connecting to Kastellholmen island. There are currently five sunken ships in the area. The Swedish Navy is looking into reusing their oak hulls rather than using new wood. Researchers are currently attempting to identify these sunken ships as part of a research program called “The Lost Navy.”
How did the shipwreck survive underwater for 400 years?
It might seem baffling that a wooden ship could survive in the ocean for 400 years, but the Baltic Sea had the right conditions to keep the Swedish Navy vessel largely intact. According to Hansson, that part of the ocean doesn’t have shipworms, meaning the sunken ship wasn’t eaten. Shipworms, which can grow up to two meters long, are sea creatures that use bacteria in their gut to break down wood and consume it. They’re so proficient at it that they can sink a boat.
Rather than rotting the wood away as you might expect, the water actually keeps the boat intact — especially at deep levels — creating a time capsule of sorts. In fact, most boats can remain undisturbed deep under the water indefinitely — but bringing the shipwreck to the surface can cause the wood to break down, since it was only being held together by the water between its cells.
This has been a big issue with recovering the Vasa, another vessel in Sweden that sank back in 1628. Its wood is being ruined by iron and metal pieces that have started to acidify now that it’s out of the water. Scientists discovered that earth alkaline hydroxides can neutralize the acid, stopping the chemical reaction that destroys the wood, but it’s still a challenge to preserve uncovered shipwrecks. This means the low water levels in the Baltic Sea could pose a problem for the newly uncovered warship.
Ali Farhadi speaks at the Tech Alliance State of Technology annual luncheon in Seattle, May 2024. (GeekWire File Photo / Todd Bishop)
Microsoft is hiring a group of top AI researchers from the Seattle-based Allen Institute for AI and the University of Washington, including former Ai2 CEO Ali Farhadi, GeekWire has learned.
Farhadi, Hanna Hajishirzi, and Ranjay Krishna are expected to join Mustafa Suleyman’s organization at Microsoft while retaining their faculty positions at the UW’s Allen School of Computer Science and Engineering. Also joining is Sophie Lebrecht, the former Ai2 chief operating officer.
The move follows Farhadi’s departure from Ai2, announced March 12. Farhadi had led the Seattle-based nonprofit research institute for more than two and a half years.
Suleyman, the CEO of Microsoft AI, narrowed his focus last week from overseeing consumer-oriented Copilot products to leading Microsoft’s Superintelligence team.
The hires come as Microsoft works to reduce its dependence on OpenAI for frontier AI models, competing against Amazon, Google, and others. Suleyman’s Superintelligence team, formed in November, is part of a broader push to further develop advanced foundation models.
Microsoft has already hired researchers from Google DeepMind, Meta, OpenAI, and Anthropic, and the addition of the Ai2 and UW group would bring deep expertise in open-source model development and training efficiency — where Ai2 has punched well above its weight.
Backing from NSF and Nvidia
The exits represent a notable collective loss for Ai2, which was founded in 2014 by the late Microsoft co-founder Paul Allen. Hajishirzi is a co-lead of the OLMo open-source language model project and a co-principal investigator on a $152 million, five-year initiative backed by the National Science Foundation and Nvidia to build open AI models for scientific research.
She represented Ai2 in multiple sessions last week at Nvidia’s GTC conference in San Jose, including a panel on the future of open models alongside Nvidia CEO Jensen Huang.
Krishna has led the development of Ai2’s Molmo multimodal models, among other projects. He also presented at the Nvidia conference last week on behalf of the institute.
Farhadi, a computer vision specialist, co-founded Ai2 spinout Xnor.ai, which Apple acquired in 2020 for an estimated $200 million. He led machine learning efforts at Apple before returning to lead Ai2 as CEO in July 2023.
Ai2 interim CEO Peter Clark acknowledged the departures in a statement, saying the institute remains committed to its mission and its partnerships with the NSF and Nvidia, including the OMAI initiative.
“These initiatives are backed by a broad, experienced team with the expertise and continuity needed to carry this work forward,” Clark said. “We’re confident in our ability to build on the strong foundation already in place and to expand the impact of these efforts in the months ahead.”
He added that the institute is “grateful for the leadership and contributions of Ali, Hanna, Ranjay, and others” in advancing Ai2’s work, and wished them well.
In a post about the hires on LinkedIn, Suleyman praised Farhadi for leading Ai2 in releasing more than 100 models in a single year and called Hajishirzi “one of the most cited researchers of natural language processing in the world, full stop.”
Suleyman described Lebrecht as having scaled Ai2’s operations and open-source efforts, noting that she also co-founded the AI company Neon Labs and holds a PhD in cognitive neuroscience from Brown University.
He said they will help pursue Microsoft’s mission of “humanist superintelligence: safer, controllable, more capable AI systems in service of humanity and our toughest problems.”
When news broke earlier this month that Farhadi was leaving, Ai2 board chair Bill Hilf told GeekWire that Farhadi wanted to pursue research at the extreme frontier of AI, where for-profit companies are spending billions on training the most advanced models.
At the time, Hilf said the board had to weigh whether a nonprofit’s philanthropic dollars were best spent trying to keep pace, acknowledging that competing against tech giants at the largest scale of model development had become extraordinarily difficult.
Changes in Ai2’s funding realities
Behind the scenes, the changing nature of Ai2’s funding environment has also been playing a role in the exits, according to people with knowledge of the situation.
Ai2 was originally funded by Allen’s Vulcan Inc. and later by his estate. Its primary backer is now the Fund for Science and Technology, a $3.1 billion foundation created under Allen’s instructions and publicly launched in August, with a focus on applying science and technology to problems in areas aligned with Allen’s passions, including AI, bioscience, and the environment.
FFST, led by CEO Dr. Lynda Stuart, a physician-scientist who previously led the Institute for Protein Design at the UW, favors applied uses of AI over the costly work of frontier models.
In addition, while all Ai2 programs for 2026 are fully funded, these people said, FFST is moving from providing Ai2 with overall annual funding to a proposal-based process, with future support expected to favor real-world applications of AI over building open-source foundation models. The shift helps explain the departures of researchers focused on model development.
A spokesperson for the Fund for Science and Technology said Ai2’s “work and mission remain the same” and that FFST’s broader program strategies are still under development.
Farhadi, Hajishirzi, and Krishna are researchers whose work centers on building and advancing AI models. Microsoft’s Superintelligence team, backed by billions in compute investment, offers the resources and mandate to pursue that work at a much larger scale.
Supercapacitors rely mostly on double-layer capacitance to bridge the divide between chemical batteries and traditional capacitors, but they come with a number of weaknesses. Paramount among these are their relatively low voltage of around 2.7 V before their electrolyte begins to decompose, as well as their relatively high rates of self-discharge. A new design using lignin-derived porous carbon electrodes and a fluorinated diluent, demonstrated by [Shichao Zhang] et al. and published in Carbon Research, seems to address these issues.
Most notable are the relatively high voltage of 4 V, an energy density of 77 Wh/kg, and a self-discharge rate that’s much slower than that of conventional supercapacitors. The demonstrated cells are also superior in terms of recharge cycles, retaining 90% of capacity after 10,000 cycles, which together with their much higher energy density should prove to be quite useful.
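The gain from the higher cell voltage alone is substantial, since the energy stored in a capacitor scales with the square of the voltage:

```latex
E = \frac{1}{2} C V^2
\qquad\Rightarrow\qquad
\frac{E_{4.0\,\mathrm{V}}}{E_{2.7\,\mathrm{V}}}
  = \left(\frac{4.0}{2.7}\right)^{2} \approx 2.2
```

For the same capacitance, raising the working voltage from 2.7 V to 4 V roughly doubles the stored energy, before counting any improvement from the more porous electrodes.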
This feat is accomplished by using lignin as the base for the carbon electrodes to create a highly porous surface, along with a new electrolyte formulation consisting of a lithium salt (LiBF4) dissolved in sulfolane with TTE as a non-solvating diluent. The idea of using lignin-derived carbon for such a purpose was previously pitched by [Jia Liu] et al. in 2022 and [Zhihao Ding] in 2025, with this seemingly among the first major applications of the approach.
Although the path towards commercialization from a lab-assembled prototype is a rough one, we may be seeing some of these improvements come to supercapacitors near you sooner rather than later.
ChatGPT can now help you find and book a rental car thanks to a new Turo integration that launched Monday. The Turo app for ChatGPT allows you to just tell the AI chatbot what you’re looking for — from pickup location and dates to number of seats, EV preference and more — using natural language, and be presented with real Turo rental cars, advice and links directly to the Turo website to book.
Turo is a peer-to-peer marketplace that lets private owners rent out their personal vehicles to travelers and locals. I like to think of it as the “Airbnb of cars” or drive-it-yourself Uber. Unlike traditional rental agencies that own and maintain large fleets of cars, Turo merely provides the tech, insurance and support to connect vehicle hosts with guest drivers. Turo has proven to be a popular alternative to the airport rental counter thanks to its more varied selection of unique car models (including luxury or high-tech vehicles), competitive pricing, and the convenience of having certain vehicles delivered.
And now, it has a ChatGPT integration. You can access the new Turo app within ChatGPT by first searching for and then adding Turo to the list of available agents in ChatGPT’s Apps menu. Once connected, adding “@Turo” to any chat with the AI bot will trigger the new functionality.
I fired up ChatGPT after setting it up for myself and typed the prompt: “@Turo, I’m going to be landing in Atlanta on Friday and would like to rent an EV for the weekend with enough range to make it to Augusta. What’s available?”
I used natural language to find available cars on the Turo service using ChatGPT.
Screenshot by Antuan Goodwin/CNET
The app replied with listings of vehicles currently available to rent near the airport with enough range to make the approximately 300-mile round trip with as little as one quick top-up. Each listing featured photos, price estimates (including tax and fees), star ratings and the number of times each car had been rented.
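As a back-of-the-envelope sketch of the range math behind those listings (the roughly 300-mile round trip comes from the article; the rated range and safety buffer below are illustrative numbers, not Turo data):

```python
import math

# Rough sketch: how many charging stops a given EV needs for a trip,
# assuming you start full and recharge to full at each stop.
# All numbers are illustrative, not taken from Turo listings.

def stops_needed(trip_miles: float, range_miles: float, buffer_miles: float = 30) -> int:
    """Charging stops required, keeping a safety buffer of unused range."""
    usable = range_miles - buffer_miles
    if usable <= 0:
        raise ValueError("Range too small for any leg of the trip")
    # The full battery covers the first `usable` miles; each stop adds another.
    extra = max(0.0, trip_miles - usable)
    return math.ceil(extra / usable)

# A hypothetical EV with ~250 miles of rated range on a ~300-mile
# Atlanta-Augusta round trip needs a single top-up.
print(stops_needed(300, 250))  # 1
```

This is the same arithmetic a shopper would do mentally: subtract a comfort margin from the rated range and see how many full legs the trip requires.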
Clicking on a listing took me straight to the Turo website (or app on mobile) to complete the booking. I also tried asking for “an EV near my home that seats six people” and “a hybrid that would be useful for moving,” and found the results to be adequate.
In addition to the listings, ChatGPT and Turo provided details (like range) about each car as well as pros and cons, such as Tesla’s plentiful Superchargers between Atlanta and Augusta or the Kia EV6’s very fast charging speed. Overall, the new functionality looks like a fairly convenient and decent starting point for someone who knows nothing about cars to choose a rental.
Turo’s app for ChatGPT is the latest example of AI’s rapid advance into every aspect of the automotive industry, from natural language AI assistants in the dashboard to AI-powered inspection of rental car returns.
(Disclosure: Ziff Davis, CNET’s parent company, filed a lawsuit against OpenAI last year, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Speaking to Digital Foundry about Project Amethyst, Mark Cerny confirmed that Sony is developing AI-powered frame generation for PlayStation, but noted that no new releases are planned this year. He also declined to reveal whether the feature will be rolled out to the PS5 and PS5 Pro or be exclusive…
OpenAI is rolling out a new feature called ‘Library’ for ChatGPT, which allows you to store your personal files or images on OpenAI’s cloud storage.
OpenAI says ChatGPT Library requires a Plus, Pro, or Business plan. It’s rolling out to customers across the world except the European Economic Area, Switzerland, and the United Kingdom.
I refreshed the ChatGPT web app, and the Library automatically showed up in the sidebar.
ChatGPT Library
To my surprise, it’s actually not empty, as ChatGPT has already saved some of the files I uploaded in the last two weeks.
Turns out this is expected behavior. By default, ChatGPT will save your uploaded files in a dedicated, secure location, and they can be used for reference in a future chat.
“ChatGPT automatically saves uploaded and created files, including files uploaded in chats (for example: documents, spreadsheets, presentations, and images) in a dedicated, secure location so they can be easily accessed later,” OpenAI noted in a document.
On the other hand, if you use ChatGPT to generate AI images, they will continue to appear in the Images tab.
The Library section only has files you uploaded, and you can upload files by following these steps:
Open the composer menu (the attachment/add button).
Select Add from library.
Choose the file you want to use.
Because ChatGPT saves these files automatically, they remain in your account until you delete them manually; deleting a chat that contains a file does not remove that file from your Library.
To delete a file:
Select the file in the “Library” tab.
Click Delete, or click the trash icon next to the file.
OpenAI will remove files from its servers within 30 days of deletion.
It’s unclear why it takes nearly a month to purge files, but it is likely due to legal reasons.
Slowly but surely, the ante is being upped in The Pitt season 2. Last week, a woman detained by two male ICE agents was brought in to see Dr. Robby (Noah Wyle) and Cassie (Fiona Dourif).
Blood covered her arms as her handcuffs cut into her wrists, with the distressed woman clearly too scared to say anything. When Cassie asks her if there’s anybody she’d like to call, one ICE agent immediately responds that there are “no calls allowed.”
Clearly, we’re only just beginning to unravel this. So when does The Pitt season 2 episode 12 arrive on HBO Max?
What time can I watch The Pitt season 2 episode 12 on HBO Max?
For US viewers, The Pitt season 2 episode 12 will drop on Thursday, March 26 at 6pm PT/ 9pm ET. As always, it’ll come out on HBO Max, too.
Internationally, you’re looking out for these timings:
US – 6pm PT / 9pm ET
Canada – 6pm PT / 9pm ET
India – Friday, March 20 at 7:30am IST
Singapore – Friday, March 20 at 10am SGT
Australia – Friday, March 20 at 1pm AEDT
New Zealand – Friday, March 20 at 3pm NZDT
You’ll notice that I’ve not included the UK here. That’s because March 26 is the same day that HBO Max UK actually launches.
Currently, we know that The Pitt season 1 will be available, but there’s been no confirmation about season 2.
When do new episodes of The Pitt season 2 come out?
Whitaker just out there doing his best. (Image credit: HBO)
New episodes of The Pitt will make landfall every Thursday in the US and on Fridays everywhere else.
The Prague and Krakow firm, whose earliest bets include UiPath and ElevenLabs, is doubling down on pre-seed in Central and Eastern Europe and its global diaspora, with a six-partner team and a $1–5M typical cheque.
Credo Ventures has closed Credo Stage 5, an $88 million fund raised in a single closing, continuing the Prague and Krakow firm’s fifteen-year strategy of writing the first institutional cheque for founders from Central and Eastern Europe and its diaspora.
The fund is the firm’s largest to date, stepping up from the €75 million fourth fund closed in 2022.
The firm’s founding partners Ondrej Bartos and Jan Habermann launched Credo in 2010 and have backed over 100 companies across four funds.
The two headline outcomes, UiPath (the Romanian-founded RPA platform that listed on the NYSE in 2021 at a $35 billion valuation) and ElevenLabs (the AI voice company most recently valued at $11 billion), are the cases Credo leads with, and with good reason: both were pre-seed investments led or co-led by the firm before either company was widely known. Maciek Gnutek, now a partner at Credo, was an early backer of ElevenLabs.
The fifth fund is managed by six partners. Alongside Bartos and Habermann, the team includes Gnutek, who focuses on the Polish market and diaspora connections; Jakub Krikava, whose background spans public policy and the Czech defence ministry; Max Kolowrat-Krakowsky, with international investment experience and US networks; and Matej Micek, focused on infrastructure, AI, and developer tools.
The multi-generational structure is deliberate. Credo’s stated argument is that the new generation of GPs reinforces the firm’s positioning at a moment when the CEE ecosystem is maturing but its pre-seed layer remains structurally underserved.
The firm’s thesis for Fund 5 is a sharpened version of what it has always done. Typical cheques will fall in the $1–5 million range, though Krikava told start-up.ro the firm remains flexible.
Sectoral focus is deliberately loose (Credo describes itself as founder-first rather than theme-driven), but the team is specifically attuned to technical founders with global ambitions, and has a growing eye on AI companies after the pace of growth in its fourth fund portfolio.
The CEE region it covers has a combined population of around 170 million and GDP of roughly $2 trillion, and Credo argues that the diaspora element, particularly founders from the region building in San Francisco and London, is an equally important sourcing channel.
The competitive framing is quietly pointed. The firm notes that fragmentation and cultural divergence across CEE countries remain meaningful barriers for outside investors, creating a structural advantage for a firm that has built networks across the region for fifteen years.
Credo-backed companies have attracted follow-on from Sequoia, Andreessen Horowitz, Accel, and Index Ventures, which the firm cites as validation of the quality of its early-stage sourcing. Around two-thirds of the capital in the fund comes from institutional investors, with no public funding involved.
The fund size increase, from €75 million to $88 million, is modest rather than dramatic, which suggests Credo is not betting on a step-change in the regional output but on the compounding value of being consistently the first name a breakout founder from the region calls.
This was extremely wild shit to be happening anywhere, much less in the land of the First Amendment. No sooner had Donald Trump decided it was time to rename the Department of Defense to the Department of War than the department’s leadership decided it would be sorting news agencies by level of subservience.
Pretending this was all about national security, the Defense Department basically kicked everyone out of the Pentagon’s press office and stated that only those that chose to play by the new rules would be allowed back inside.
Booted: NBC News, the New York Times, NPR. Welcomed back into the fold: OAN, Newsmax, Breitbart. The Pentagon wanted a state-run press, but without having to do all the heavy lifting that comes with instituting a state-run press in the Land of the Free.
Somewhat surprisingly, some of those explicitly invited to partake of the new Defense Department media wing refused to participate. Fox and Newsmax decided to stay out, rather than promise they’d never publish leaked documents. Those choosing to bend the knee were those who never needed this sort of coercion in the first place: One America News (OAN), The Federalist, and far-right weirdos, the Epoch Times. In other words, MAGA-heavy breathers that have never been known for their independence, much less their journalism.
That didn’t stop Hegseth and the department he’s mismanaging from attempting to take a victory lap. And it certainly didn’t stop news agencies like the New York Times from suing over this blatant violation of the First Amendment.
It’s so obvious it only took the NYT four months to secure a win in a federal court (DC) that is positively swamped with litigation generated by Trump’s swamp. (h/t Adam Klasfield)
The decision [PDF] makes it clear in the opening paragraph how this is going to go for the administration and its extremely selective “respect” of enshrined rights and freedoms.
A primary purpose of the First Amendment is to enable the press to publish what it will and the public to read what it chooses, free of any official proscription. Those who drafted the First Amendment believed that the nation’s security requires a free press and an informed people and that such security is endangered by governmental suppression of political speech. That principle has preserved the nation’s security for almost 250 years. It must not be abandoned now.
Amen.
The court notes that in the past, there has been some friction between national security concerns and reporting by journalists. In some cases, the friction has been little more than the government chafing a bit when something has been published that it would rather have kept a secret. In other cases, leaks involving sensitive information have provoked reform efforts on both sides of the equation, seeking to balance these concerns with serving the public interest.
Up until now, any efforts to expel reporters have been limited to backroom bitching. What’s happening now, however, is unprecedented.
Historically, though, even when Department leaders disliked a journalist’s reporting, they did not consider suspending, revoking, or not renewing the journalist’s press credentials in response to that reporting. Julian Barnes, Pete Williams, and Robert Burns—reporters who have spent decades covering the Pentagon—as well as former Pentagon officials, are not aware of the Department ever suspending, revoking, or not renewing a journalist’s credentials due to concern over the safety or security of Department personnel or property or based on the content of their reporting.
This may be new, but the court isn’t willing to make it the “new normal.” It’s the decades of precedent that truly matter, not the vindictive whims of the overgrown toddlers currently holding office.
The Pentagon claims that demanding journalists agree not to “solicit,” much less print data or information not explicitly approved for release by the Defense Department doesn’t reach any further than existing laws governing the handling of classified documents. The court disagrees, noting that the new policy allows the government to conflate the illegal solicitation of classified material with the sort of soliciting — i.e., requests for information, etc. — journalists do every day in hopes of securing something newsworthy.
On top of allowing the government to punish people for things that weren’t previously considered unlawful, the demand for obeisance wasn’t created in a vacuum. Instead, it flowed directly from this entire administration’s constant attacks on the press by the president and pretty much everyone in his Cabinet.
The plaintiffs are correct: “The record is replete with undisputed evidence that the Policy is viewpoint discriminatory.” That evidence tells the story of a Department whose leadership has been and continues to be openly hostile to the “mainstream media” whose reporting it views as unfavorable, but receptive to outlets that have expressed “support for the Trump administration in the past.”
The story begins prior to the adoption of the Policy, when—following extensive reporting on Secretary Hegseth’s background and qualifications during his confirmation process—Secretary Hegseth and Department officials “openly complained about reporting they perceive[d] as unfavorable to them and the Department.” Then, in the weeks and months leading up to the issuance of the Policy, Department officials repeatedly condemned certain news organizations—including The Times—for their coverage of the Department. For example, in response to reporting by The Times on Secretary Hegseth’s alleged misuse of the messaging platform Signal, Mr. Parnell posted on X to call out The Times “and all other Fake News that repeat their garbage.” Mr. Parnell decried these news organizations as “Trump-hating media” who “continue[] to be obsessed with destroying anyone committed to President Trump’s agenda.” In other social media posts leading up to the issuance of the Policy, Department officials referred to journalists from The Washington Post as “scum” and called for their “severe punishment” in response to reporting on Secretary Hegseth’s security detail.
It was never about keeping loose lips from sinking ships. It was always about cutting off access to news agencies the administration didn’t like. And once you’ve gotten rid of the critics, you’re left with the functional equivalent of a state-run media, but without the nastiness of having to disappear people into concentration camps or usher them out of their cubicles at gunpoint.
The court won’t let this stand. The new policy violates both the First Amendment and Fifth Amendment (due to the vagueness of its ban on “soliciting” sensitive information). That’s never been acceptable before in this nation. Just because there’s an aspiring tyrant leaning heavily on the Resolute Desk these days doesn’t make it any more permissible.
The Court recognizes that national security must be protected, the security of our troops must be protected, and war plans must be protected. But especially in light of the country’s recent incursion into Venezuela and its ongoing war with Iran, it is more important than ever that the public have access to information from a variety of perspectives about what its government is doing—so that the public can support government policies, if it wants to support them; protest, if it wants to protest; and decide based on full, complete, and open information who they are going to vote for in the next election. As Justice Brandeis correctly observed, “sunlight is the most powerful of all disinfectants.”
The administration will definitely appeal this decision. And it almost definitely will try to bypass the DC Appeals Court and go straight to the Supreme Court by claiming not being able to expel reporters it doesn’t like is some sort of national emergency. It will probably even claim that the fight it picked in Iran justifies the actions it took months before it decided to involve us in the nation’s latest Afghanistan/Vietnam.
But it definitely shouldn’t win. This isn’t some obscure permutation of First Amendment law. This is the government crafting a policy that allows it to decide what gets to be printed and who gets to print it. That’s never been acceptable here. And it never should be.
However, Congress clearly did not continue this work. In fact, it now appears that Congress is poised to consider another extension of this program without even attempting to include necessary and common sense reforms. Most notably, Congress is not considering a requirement to obtain a warrant before looking at data on U.S. persons that was indiscriminately and warrantlessly collected. House Speaker Mike Johnson confirmed that “the plan is to move a clean extension of FISA … for at least 18 months.”
Even more disappointing, House Judiciary Chair Jim Jordan, who has previously been a champion of both the warrant requirement and closing the data broker loophole, told the press he would vote for a clean extension of FISA, claiming that RISAA included enough reforms for the moment.
It’s important to note RISAA was just a reauthorization of this mass surveillance program with a long history of abuse. Prior to the 2024 reauthorization, Section 702 was already misused to run improper queries on peaceful protesters, federal and state lawmakers, Congressional staff, thousands of campaign donors, journalists, and a judge reporting civil rights violations by local police. RISAA further expanded the government’s authority by allowing it to compel a much larger group of people and providers into assisting with this surveillance. As we said when it passed, overall, RISAA is a travesty for Americans who deserve basic constitutional rights and privacy whether they are communicating with people and services inside or outside of the US.
Section 702 should not be reauthorized without any additional safeguards or oversight. Fortunately, there are currently three reform bills for Congress to consider: SAFE, PLEWSA, and GSRA. While none of these bills are perfect, they are all significantly better than the status quo, and should be considered instead of a bill that attempts no reform at all.
Mass spying—accessing a massive amount of communications by and with Americans first and sorting out targets second and secretly—has always been a problem for our rights. It was a problem at first when President George W. Bush authorized it in secret without Congressional or court oversight. And it remained a problem even after the passage of Section 702 in 2008 created the possibility of some oversight. Congress was right that this surveillance is dangerous, and that’s why it set Section 702 up for regular reconsideration. That reconsideration has not occurred, even as the circumstances of the NSA, Justice Department, and FBI leadership have radically changed. Reform is long overdue, and now it’s urgent.