State-backed hackers are using Google’s Gemini AI model to support all stages of an attack, from reconnaissance to post-compromise actions.
Threat actors from China (APT31, Temp.HEX), Iran (APT42), North Korea (UNC2970), and Russia used Gemini for target profiling and open-source intelligence gathering, generating phishing lures, translating text, writing code, testing for vulnerabilities, and troubleshooting.
Cybercriminals are also showing increased interest in AI tools and services that could help in illegal activities, such as ClickFix social-engineering campaigns.
AI-enhanced malicious activity
The Google Threat Intelligence Group (GTIG) notes in a report today that APT adversaries use Gemini to support their campaigns “from reconnaissance and phishing lure creation to command and control (C2) development and data exfiltration.”
Chinese threat actors employed an expert cybersecurity persona to request that Gemini automate vulnerability analysis and provide targeted testing plans in the context of a fabricated scenario.
“The PRC-based threat actor fabricated a scenario, in one case trialing Hexstrike MCP tooling, and directing the model to analyze Remote Code Execution (RCE), WAF bypass techniques, and SQL injection test results against specific US-based targets,” Google says.
Another China-based actor frequently employed Gemini to fix their code, carry out research, and provide advice on technical capabilities for intrusions.
The Iranian adversary APT42 leveraged Google’s LLM for social engineering campaigns and as a development platform to speed up the creation of tailored malicious tools, using it for debugging, code generation, and researching exploitation techniques.
Additional threat actor abuse was observed for implementing new capabilities into existing malware families, including the CoinBait phishing kit and the HonestCue malware downloader and launcher.
GTIG notes that no major breakthroughs have occurred in AI-enabled malware so far, though the tech giant expects malware operators to continue integrating AI capabilities into their toolsets.
HonestCue is a proof-of-concept malware framework observed in late 2025 that uses the Gemini API to generate C# code for second-stage malware, then compiles and executes the payloads in memory.
HonestCue operational overview Source: Google
CoinBait is a React SPA-wrapped phishing kit masquerading as a cryptocurrency exchange for credential harvesting. It contains artifacts indicating that its development was advanced using AI code generation tools.
One indicator of LLM use is logging messages in the malware source code that were prefixed with “Analytics:,” which could help defenders track data exfiltration processes.
Based on the malware samples, GTIG researchers believe that the malware was created using the Lovable AI platform, as the developer used the Lovable Supabase client and lovable.app.
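Markers like that are straightforward to hunt for. The following is a hypothetical detection sketch in Python; the regex, function name, and sample source are illustrative assumptions, not GTIG's actual detection logic:

```python
import re

# Hypothetical hunt for the "Analytics:"-prefixed logging that GTIG observed
# in CoinBait. The pattern and sample below are illustrative assumptions.
MARKER = re.compile(r'console\.log\(\s*["\']Analytics:')

def find_marker_lines(source: str) -> list[int]:
    """Return the 1-based line numbers that carry the suspicious log prefix."""
    return [n for n, line in enumerate(source.splitlines(), start=1)
            if MARKER.search(line)]

sample = 'fetch(url);\nconsole.log("Analytics: form data captured");\n'
print(find_marker_lines(sample))  # [2]
```

In practice a defender would run a scan like this across collected samples or proxy logs, treating hits as a starting point for triage rather than proof of compromise.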
Cybercriminals also used generative AI services in ClickFix campaigns delivering the AMOS info-stealing malware for macOS. Users were lured into executing malicious commands via ads listed in search results for queries about troubleshooting specific issues.
AI-powered ClickFix attack. Source: Google
The report further notes that Gemini has faced AI model extraction and distillation attempts, with organizations leveraging authorized API access to methodically query the system and reproduce its decision-making processes to replicate its functionality.
Although the problem is not a direct threat to users of these models or their data, it constitutes a significant commercial, competitive, and intellectual property problem for the creators of these models.
Essentially, actors take knowledge obtained from one model and transfer it to another using a machine learning technique called “knowledge distillation,” which trains fresh models from the outputs of more advanced ones.
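Knowledge distillation itself is a standard, openly documented ML technique: a student model is trained to match the teacher's temperature-softened output distribution rather than hard labels. A toy sketch with made-up logits (real extraction operates at vastly larger scale, against a live API):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T exposes more of the teacher's
    'dark knowledge' (relative probabilities of the non-top classes)."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """Cross-entropy of the student against the teacher's softened distribution."""
    soft_targets = softmax(teacher_logits, T)
    student_probs = softmax(student_logits, T)
    return -sum(p * math.log(q) for p, q in zip(soft_targets, student_probs))

teacher = [4.0, 1.0, 0.2]
# A student that mimics the teacher scores a lower loss than one that doesn't.
print(distillation_loss(teacher, [3.8, 1.1, 0.3]) < distillation_loss(teacher, [0.2, 1.0, 4.0]))  # True
```

Minimizing this loss over many queried examples is what lets an attacker clone much of a model's behavior without access to its weights or training data.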
“Model extraction and subsequent knowledge distillation enable an attacker to accelerate AI model development quickly and at a significantly lower cost,” GTIG researchers say.
Google flags these attacks as a threat because they constitute intellectual property theft, are scalable, and severely undermine the business model of AI-as-a-service, with the potential to impact end users soon.
In a large-scale attack of this kind, Gemini AI was targeted by 100,000 prompts that posed a series of questions aimed at replicating the model’s reasoning across a range of tasks in non-English languages.
Google has disabled accounts and infrastructure tied to documented abuse, and has implemented targeted defenses in Gemini’s classifiers to make abuse harder.
The company assures that it “designs AI systems with robust security measures and strong safety guardrails” and regularly tests the models to improve their security and safety.
When Intuit shipped AI agents to 3 million customers, 85% came back. The reason, according to the company’s EVP and GM: combining AI with human expertise turned out to matter more than anyone expected — not less.
Marianna Tessel, the financial software company’s EVP and GM, calls this AI-HI combination a “massive ask” from its customers, noting that it provides another level of confidence and trust.
“One of the things we learned that has been fascinating is really the combination of human intelligence and artificial intelligence,” Tessel said in a new VB Beyond the Pilot podcast. “Sometimes it’s the combination of AI and HI that gives you better results.”
Chatbots alone aren’t the answer
Intuit — the parent company of QuickBooks, TurboTax, MailChimp and other widely used financial products — was one of the first major enterprises to go all in on generative AI with its GenOS platform last June (long before fears of the “SaaSpocalypse” had SaaS companies scrambling to rethink their strategies).
Quickly, though, the company recognized that chatbots alone weren’t the answer in enterprise environments, and pivoted to what it now calls Intuit Intelligence. The dashboard-like platform features specialized AI agents for sales, tax, payroll, accounting and project management that users can interact with using natural language to gain insights on their data, automate tasks, and generate reports.
Customers report that invoices are being paid in full 90% of the time and five days faster, and that manual work has been reduced by 30%. AI agents help close books, categorize transactions, run payroll, automate invoice reminders and surface discrepancies.
For instance, one Intuit customer uncovered fraud after interacting with AI agents and asking questions about amounts that didn’t add up. “In the beginning it was like, ‘Is that an error?’ And as he dug in, he discovered very significant fraud,” Tessel said.
Why humans are still in the loop
Still, Intuit operates on the principle that humans are “always accessible,” Tessel said. The platform is built so that users can ask questions of a human expert when they’re not getting what they need from the AI agent, or when they simply want a human to bounce ideas off of.
“I’m not talking about product experts,” Tessel said. “I’m talking about an actual accounting expert or tax expert or payroll expert.”
The platform has also been built to suggest human involvement in “high stakes” decision-making scenarios. AI goes to a certain level, then human experts review and categorize the rest. This provides a level of confidence, according to Tessel.
“We actually believe it becomes more needed and more powerful at the right moments,” she said. “The expert still provides things that are unique.”
The next step is giving customers the tools to perform next-gen tasks like vibe coding — but with simple architectures to reduce the burden for customers. “What we’re testing is this idea of, you can actually do coding without realizing that that’s what you are doing,” Tessel said.
For example, a merchant running a flower shop wants to ensure that they have the right amount of inventory in stock for Mother’s Day. They can vibe code an agent that analyzes previous years’ sales and creates purchase orders where stock is low. That agent could then be instructed to automatically perform that task for future Mother’s Days and other big holidays.
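The flower-shop example boils down to a simple reorder rule. Here is a toy sketch of that logic; the function, data shapes, and plan-for-the-best-past-year heuristic are all illustrative assumptions, not Intuit's implementation:

```python
# Toy version of the flower-shop scenario: look at prior years' holiday sales
# and draft purchase orders wherever current stock would run short.
def draft_purchase_orders(history, stock):
    """history: {sku: [units sold in past years]}; stock: {sku: units on hand}."""
    orders = {}
    for sku, sold in history.items():
        expected = max(sold)          # plan for the best past year (simple heuristic)
        shortfall = expected - stock.get(sku, 0)
        if shortfall > 0:
            orders[sku] = shortfall   # order just enough to cover the gap
    return orders

history = {"roses": [120, 150, 140], "tulips": [60, 55, 70]}
stock = {"roses": 40, "tulips": 80}
print(draft_purchase_orders(history, stock))  # {'roses': 110}
```

The "vibe coding" pitch is that a merchant describes this rule in plain language and the platform generates and schedules the agent, so they never see code like the above.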
Some users will be more sophisticated and want the ability to dive deeper into the technology. “But some just want to express what they want to have happen,” Tessel said. “Because all they want to do is run their business.”
Listen to the full podcast to hear about:
Why first-party data can create a “moat” for SaaS companies.
Why showing AI’s logic matters more than a polished interface.
Why 600,000 data points per customer changes what AI can tell you about your business.
My life has changed so much since my time as a Voices of Change fellow during the 2023 school year. As I wrote in my final essay of the fellowship, the beautiful, imperfect school I loved and helped build had closed. With the support of my fellowship editor, Cobretti Williams, I applied and was admitted to the Creative Writing Workshop at the University of New Orleans, where I am taking graduate classes and teaching a freshman English composition course.
In deciding what to write as a reflection on my time since the fellowship, I started three different essays and hated all of them. I did a lot of cursing, went on a couple of brooding walks and wondered why I agreed to write this in the first place. During the similarly maddening process of designing the syllabus for the first college course I taught, I took a break to write my students a letter. Here is an excerpt:
Before we start this course together, it’s important for me to name something foundational to how I approach teaching it: Writing is hard for everyone. I love writing and I believe that, if I keep practicing, I can become great at it… and I still hate doing it a lot of the time. This is why writing is so important. Almost everything we want is on the other side of making ourselves do things we don’t want to do. When we sit down to write, whether we want to or not, and we keep writing when we hit that initial point where we want to stop, and continue when those moments arise again and again like waves, we are getting vital practice. This skill, ignoring the complacent you, the you that would rather do the thing tomorrow, or tomorrow’s tomorrow, and doing the thing now instead is an act of becoming the you that has the things you want. Like anything else, this becomes easier the more you do it.
This excerpt reminds me that writing is much more difficult than most of the things we do in a world that commodifies ease and comfort, upholds them as desirable and makes us feel we are entitled to them, even as we become less and less able to tolerate their lack.
There is a common misconception that my students come to me with that manifests most often in the statement “I don’t know what to write.” They think this means they are not ready to begin, because they believe that writing is putting what you already know onto paper. I understand why this misconception exists. So often in life, we only see finished products. The published novel, the final cut, the social media post depicting the outcome and not the process and the struggle. It’s easy to think that everyone else has things figured out, that what you see is how something was from the beginning. This can trick us into believing that if something isn’t good right away, we should abandon it. Drafting insists that we try before we feel sure, finish something even if it is not yet “good.” Revision insists that what we have can be something different, something better, and teaches us to hold multiple things in our heads at the same time. Throughout this process, we gain clarity.
Each time we give or receive feedback and assess whether it moves us closer to or further from our vision, we get better at articulating what we want and closer to achieving it. When teachers and students do this work together and commit to improvement, even when we both have moments of uncertainty about what to do next, we are practicing true collaboration. We both grow. What a way to become more skillful at building the world we want.
It is a strange time to be devoting so much of my life to writing, to be telling students that they should care about writing too. Just this week, an article came out detailing pervasive, undisclosed AI use to grade and give feedback on student writing in some New Orleans schools. A study conducted in May of 2025 showed that 84 percent of high school students used generative AI to complete their school work. I understand intimately the overwhelm of educators and students, and the temporary relief that cognitive offloading with AI can provide.
However, what we lose in the long term by not engaging deeply in the writing process, the practice of giving and receiving feedback, of watching revision unfold, is so much greater than the gains we feel in accepting AI’s “help” in our moments of overwhelm. What world are we building when we delegate the human work of communication through writing to machines? We would do better to engage in a process of re-evaluating our priorities, taking on fewer assignments for longer and working collaboratively as educators and administrators to redesign curricula and systems so that teachers have the capacity to get to know their students through repeated contact with their written work.
Sometimes, it feels like we are already living in a completely different world from the one in which I grew up and was educated. Luckily, these times, despite how often folks like to say they are not, are precedented. In these times, I have been turning to Black women writers like Toni Morrison, Toni Cade Bambara, Audre Lorde and June Jordan for guidance, and they all insist writing only becomes more urgent the more dire the times. In facing what Toni Morrison described in 2004 as “a burgeoning ménage a trois of political interests, corporate interests and military interests” working to “literally annihilate an inhabitable, humane future,” I have been especially steeled by Audre Lorde’s words, “In this way alone we can survive, by taking part in a process of life that is creative and continuing, that is growth.”
In the face of a world that would automate us right out of existence, I intend for us to survive, and so I insist we write.
Let’s be honest, the modern web is… a mess. Pop-ups, autoplay videos, cookie banners, ads everywhere. In fact, sometimes it feels like actually reading something online is the hardest part. And that’s exactly where Textise comes in.
Think of it as a “strip everything away” button for the internet. Textise is a simple web tool that converts any webpage into a clean, text-only version, removing ads, images, scripts, and all the extra clutter. What you’re left with is just the content: no distractions, no loading bloat, no nonsense. It’s fast, lightweight, and honestly feels like going back to a simpler version of the web.
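The underlying idea is straightforward: parse the HTML, keep the visible text, and discard scripts, styles, and tags. A minimal sketch of that idea using Python's standard library (not Textise's actual implementation):

```python
# Minimal "text-only" converter in the spirit of Textise: strip tags and
# drop everything inside <script>/<style>/<noscript>.
from html.parser import HTMLParser

class TextOnly(HTMLParser):
    SKIP = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0   # >0 while inside a skipped element
        self.chunks = []       # recovered visible-text fragments

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())

page = "<html><script>track()</script><h1>Hello</h1><p>Clean text.</p></html>"
p = TextOnly()
p.feed(page)
print(" ".join(p.chunks))  # Hello Clean text.
```

A real service layers a lot on top of this (fetching pages, rewriting links so navigation stays text-only, handling broken markup), but the core transformation is this kind of filter.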
Why does it feel so refreshing?
Modern websites are built for engagement, not readability. That means heavy layouts, tracking scripts, and design choices that often get in the way of just consuming information. Tools like Textise flip that on its head by streamlining content into plain text, making it easier to read and more accessible. In fact, for long articles or research-heavy pieces, it can genuinely feel like a productivity boost: less scrolling, fewer distractions, and quicker load times.
You can even tweak how Textise looks and behaves, from fonts and text size to background colors and link styles. It’s a surprisingly flexible tool, letting you tailor the reading experience exactly the way you like it. Of course, you lose things along the way. Images, videos, interactive elements — all gone. But that’s also what makes it work. Textise isn’t trying to enhance the web; it’s trying to simplify it to the bare minimum. And weirdly enough, that’s exactly why it feels so useful in 2026.
So… who is this for? Well, pretty much anyone who reads a lot online. Whether it’s articles, blogs, or even cluttered news pages, Textise makes everything feel cleaner and easier to digest. It’s especially handy for people who just want to focus on content without distractions.
Same idea, very different vibe
If this all sounds familiar, that’s because most modern browsers already have a built-in Reader Mode, like the one in Safari or Chrome. These features clean up a webpage by removing ads, menus, and distractions, and reformat the article into a more readable layout with better fonts and spacing.
Textise (left) vs Chrome Reader Mode (right). Varun Mirchandani / Digital Trends
But here’s where Textise feels different. Reader modes are still design-aware, meaning they keep images and basic formatting and rely on the browser to figure out what the “main article” is. Textise, on the other hand, goes full savage mode. It strips everything down to raw text, no images, no styling, no fluff. In a way, Reader Mode is like switching to a clean reading theme… while Textise is like opening the internet in Notepad. And honestly, depending on the day (or how chaotic the webpage is), both have their moment.
And maybe that’s the best part about it. In a web that’s constantly trying to grab attention, Textise just quietly steps back and lets you focus. Sometimes, all it takes to make the internet better… is less internet.
Batteries are notoriously difficult pieces of technology to deal with reliably. They often need specific temperatures and charge rates, can’t tolerate physical shocks or damage, and can fail catastrophically if all of their finicky needs aren’t met. And, adding insult to injury, for many chemistries the voltage does not correlate to state of charge in any meaningful way. Battery testers go to great lengths to mitigate these challenges, but often miss the mark for those who need high fidelity in their measurements. For that reason, [LiamTronix] built his own.
The main problem with cheaper battery testers, at least for [LiamTronix]’s use cases, is that he has plenty of batteries that are too large to practically test on these low-current devices, or that have internal battery management systems (BMS) which can’t connect to them. The first circuit he built to solve these issues is based on a shunt resistor, which lets a small IC monitor a much larger current by measuring the voltage drop across a resistor with a very low resistance. A Raspberry Pi runs a Python script that monitors the current draw over the course of the test and outputs the result on a handy graph.
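The shunt arithmetic is just Ohm's law. A toy sketch (the shunt value and readings are illustrative assumptions, not values from [LiamTronix]'s build):

```python
# Toy shunt-resistor arithmetic: the ADC reads the small voltage across a
# known shunt, and Ohm's law recovers the much larger load current.
R_SHUNT = 0.01  # ohms; an assumed 10 milliohm low-side shunt

def current_from_shunt(v_drop: float, r_shunt: float = R_SHUNT) -> float:
    """I = V / R: a 50 mV drop across 10 mOhm implies about 5 A of discharge."""
    return v_drop / r_shunt

print(current_from_shunt(0.05))  # ~5.0 A
```

The appeal of the shunt approach is that the measuring electronics only ever see millivolts, so the same small ADC works whether the pack is sourcing one amp or fifty.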
This circuit worked well enough for smaller batteries, but for his larger batteries like the 72V one he built for his electric tractor, these methods could draw far too much power to be safe. So from there he built a much more robust circuit which uses four MOSFETs as part of four constant current sources to sink and measure the current from the battery. A Pi Zero monitors the voltage and current from the battery, and also turns on some fans pointed at the MOSFETs’ heat sink to keep them from overheating. The system can be configured to work for different batteries and different current draw rates, making it much more capable than anything off the shelf.
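The capacity measurement itself amounts to coulomb counting: integrating current over time until the pack reaches a cutoff voltage. A hypothetical sketch of that loop (function names and sample data are assumptions standing in for real ADC reads):

```python
# Hypothetical capacity-test loop: integrate current over time (coulomb
# counting) until the pack hits its cutoff voltage.
def capacity_ah(samples, dt_s: float, v_cutoff: float) -> float:
    """samples: (voltage_V, current_A) pairs taken every dt_s seconds."""
    total_amp_seconds = 0.0
    for volts, amps in samples:
        if volts <= v_cutoff:
            break  # stop once the pack reaches its safe floor
        total_amp_seconds += amps * dt_s
    return total_amp_seconds / 3600.0  # ampere-seconds to ampere-hours

# Ten minutes of a steady 5 A draw before cutoff: 5 A * 600 s = 0.833 Ah
samples = [(12.0, 5.0)] * 600 + [(10.5, 5.0)]
print(round(capacity_ah(samples, dt_s=1.0, v_cutoff=10.5), 3))  # 0.833
```

A constant-current sink makes this integration trivial; variable loads work too, as long as each current sample is multiplied by its own time slice.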
AI bias is usually talked about in terms of algorithms: skewed datasets, flawed outputs, and stereotypes baked into models. But new research suggests there’s another, more subtle problem about who gets to use AI in the first place. According to a recent report by Lean In, women are less likely than men to use AI tools at work, and even when they do, they’re less likely to get recognition or support for it.
The numbers paint a clear picture. Men are more likely to use AI regularly (33% vs 27%), more likely to have ever used it at work, and significantly more likely to be encouraged by managers to adopt it. And it’s not just about access, but also about perception. Women are more likely to worry about the risks of AI, question its accuracy, and even fear being judged for using it, including concerns that it might be seen as “cheating.”
Why this matters more than it seems
This gap could compound fast. AI is quickly becoming a core workplace skill, and early adoption often translates into better opportunities. If one group is consistently using it less, or getting less credit for it, that gap can grow into real career disadvantages over time. And this isn’t happening in isolation. Broader research already shows women are underrepresented in tech and AI roles, meaning they’re not just using these tools less but also less involved in building them.
What makes this interesting is how familiar it feels. This isn’t a new kind of bias; it’s an old one, just showing up in a new space. The same patterns seen in workplaces for decades, with less recognition, less encouragement, and more scrutiny, are now playing out in how AI is adopted and used.
Same bias, new tech?
As AI becomes a core workplace skill, even small gaps like this can snowball into missed opportunities, slower career growth, and less representation in shaping the tech itself. Because if the people using AI aren’t equally represented… the future it builds won’t be either.
If you’re online at all in 2026, you know it can feel like April Fools’ Day every day. You’ve almost certainly come across videos and content, often created with AI, and had to stop and ask yourself if what you’re looking at is true or made up.
Some are obvious. You mean, there aren’t really beds made of kittens, cotton candy and rubies? And I wasn’t really offered a job guarding a spooky funeral home where I might hear tapping coming from the morgue freezer at 3 a.m.? (Both of these are TikTok videos, and the AI is scarily good — and also just scary.)
As brands roll out their April Fools’ Day jokes for this year, I keep thinking that in an AI-heavy world, the jokes seem less surprising, the faked-up art less novel. Here are some highlights from this year’s list of April 1 corporate and tech jokes.
Fortnite: Big heads and llama riding
Here’s an April Fools’ prank that’s more than a joke: it’s real, but only temporary. Fortnite players can try out a 24-hour-only April Fools’ Day game update that throws some truly wacky changes into the popular game. Players get enormous heads, can ride on other players’ shoulders, can use finger guns that go “pew, pew,” and make a splat sound when landing after a fall. Perhaps best of all, rideable llamas have appeared.
Warhammer: The Musical
Hey, if Broadway can make a musical about Alexander Hamilton, or a bunch of cats, surely they can make one about the Warhammer universe? That’s the joke behind this trailer for The Emperor Protects: A Warhammer 40,000 Musical, the April 1 joke from Games Workshop, creator of the popular game world. The 2.5-minute trailer, with impressive costumes and music, really sells it.
Traeger: AI-powered grilling glasses
This April 1 joke seems like it could maybe be a practical, real thing. Traeger makes wood-pellet grills, and this year’s joke is their claim to offer AI-powered grilling eyeglasses. “With smart guidance, thermal imaging, night‑vision, and hands‑free photo and video capture, MEAT‑AI lets you command every cook like never before,” the site touts. Hmm, I wouldn’t actually mind a pair of glasses that could look down at my grill and tell me whether my steak is done or how much more time it needs to cook. Get on that, Traeger.
T-Mobile cologne
Can you smell me now? Wait, wrong cellphone company.
Want to smell like your cellphone? What does that even mean? Wireless tech giant T-Mobile’s prank is Metro by T-Mobile CALLoGNE, combining call, as in phone call, with cologne. The company touts its April 1 joke as “the world’s first luxury fragrance inspired by the unmistakable scent of a brand-new phone.” Metro is T-Mobile’s prepaid brand, formerly known as MetroPCS.
Timekettle British translation
They say the US and UK are two nations separated by a common language. You may already know some British phrases, including “boot” for what Americans call a car trunk, and “bonnet” for what we call the hood of a car. Timekettle makes AI-powered translation products, and its April 1 prank is a British-to-American language translation update for its translation devices. Cheerio, old chap.
Timekettle offers translation services, but the British English to American English version is a special April 1 joke.
Whisker cat hair clothing
From couture to cat hair, Whisker’s April 1 prank involves cat-hair clothing.
If you own a cat, cat hair is already on everything in your closet. So Cataire (like couture, I guess), a line of designer clothing made out of real cat hair, doesn’t seem that far off. Whisker, the company behind the Litter-Robot litter box, is taking this April 1 prank to the meowy max. They’ve actually used real cat hair from adoptable cats at a Michigan animal shelter to adorn three sweaters that will later be sold on eBay. Each eBay listing doubles as an adoption profile for a real shelter cat.
Yahoo’s Scrōll Stoppr
Doomscrolling isn’t even a possibility with Yahoo’s thumb guard, ScrōllStoppr.
Those who spend too much time on their phones might appreciate the idea behind Yahoo’s prank, Scrōll Stoppr. It’s described as “a delightfully absurd finger accessory that physically blocks your thumb from touching your phone screen.” I hate to break it to Yahoo, but I discovered this myself years ago when I cut my thumb slicing onions for Thanksgiving and had to wrap it in a Band-Aid. Yahoo says you can actually buy this — it will be available for $5 on Yahoo TikTok Shop on April 1 and will be delivered in a box that sounds off with the Yahoo signature yodel. If it sells out, just put on a Band-Aid for the same results. BYO yodel.
Omaha Steaks pocket steak
Stake out a spot in your shirt for this pocket steak.
Need a spot of protein on the go? Omaha Steaks is best known for sending giant crates of beef as gifts, but the company’s April 1 product is “the world’s first pocket-sized steak.” It gets beefier: The company jokes that the steak is cooked by motion-activated technology. A rare deal indeed, if well done.
Baskin-Robbins ice cream soup
Slurp up Baskin-Robbins April Fools’ Day joke, ice-cream soup.
Baskin-Robbins has always had creative ice cream flavors, but for April 1, the company is hyping… ice cream soup. Not real, of course, but they’re promoting the faux frozen dessert in hopes that people will be inspired to take advantage of a buy-one-get-one 50% off deal on pre-packed quarts April 1-2 for Baskin-Robbins Rewards Members. Slurp ’em if you got ’em.
Baby Bottle Pop, supplement style
Suck on this, say the makers of Baby Bottle Pop.
Grown-ups don’t get any of the fun kid candy, but instead are stuck taking vitamins and supplements. Baby Bottle Pop Candy, which is exactly what it sounds like, candy in a baby-bottle container, is pretending for April 1 that it now comes in adult flavors. Is protein a flavor? Is fiber? Salmon is, but candy salmon is too much, even for this Seattleite. Thankfully, it’s just for April Fools’ Day.
NASA’s Artemis II Space Launch System (SLS) rocket and Orion spacecraft and the launch gantry at the Kennedy Space Center in Florida on March 31, 2026.
Fifty-four years after the last Apollo mission to the moon, NASA is set to return with Artemis II. The Space Launch System rocket carrying the Orion spacecraft is scheduled to take off from the Kennedy Space Center in Florida on Wednesday afternoon. The four-person crew, made up of American and Canadian astronauts, will be 250,000 miles from Earth at the farthest point of its journey around the moon. This is everything you need to know about NASA’s mission, its dreams for a future lunar base and this new age of space exploration.
How to watch Artemis II moon launch
Takeoff is scheduled for Wednesday at 6:24 p.m. ET / 3:24 p.m. PT from NASA’s Kennedy Space Center in Cape Canaveral, Florida. Delays are common during launches, especially due to weather, so we’ll keep this story updated if the takeoff time changes.
Here are all the ways you can keep up with the Artemis II mission.
What to expect from this mission to the moon
The Artemis II mission is designed to orbit the moon on a 10-day trip. The astronauts will not be touching down on the moon’s surface this trip, but they will be testing the system’s life support systems for the first time, according to NASA. This mission also sets the stage for future Artemis missions, including Artemis IV, scheduled for 2028, which should put humans back on the moon.
We’ll keep this story up to date with all the latest Artemis II news, so check back here today and throughout the week for updates.
The giant funding round gives OpenAI a post-money valuation of $852bn.
Artificial intelligence company OpenAI has announced the close of its latest funding round at $122bn, exceeding the projected figure of $110bn.
The round was backed by strategic partners Amazon, Nvidia and SoftBank, with continued participation from OpenAI’s long-term partner Microsoft. SoftBank co-led the round alongside a16z, DE Shaw Ventures, MGX, TPG and accounts advised by T Rowe Price Associates. There was also participation from several global institutions.
For the first time, OpenAI extended participation to investors through banking channels, raising more than $3bn from individual investors. The funding round gives OpenAI a post-money valuation of $852bn, the company said.
In a post about the announcement, OpenAI said, “This is commercial scale and it is mission scale. The fastest way to widen the benefits of AI is to put useful intelligence in people’s hands early and let that access compound globally.
“AI is driving productivity gains, accelerating scientific discovery and expanding what people and organisations can build. This funding gives us the resources to continue to lead at the scale this moment demands.”
The announcement comes at a time when OpenAI is calling a halt to specific features and products, as it aims to better manage costs and reprioritise resources. For example, plans for an erotic ChatGPT were reportedly put on hold indefinitely, as OpenAI elected to carry out additional research and to address concerns from staff and investors.
Additionally, in late March, the platform revealed plans to shut down controversial AI video generator Sora just a few months after announcing a multi-year licensing deal with Disney. OpenAI explained that by ending the feature, the organisation can redirect its focus onto other projects.
OpenAI is facing significant challenges from rivals in the AI space and recently news surfaced indicating the company’s plans to combine its AI chatbot, coding tool and web browser into a desktop ‘superapp’.
Sources noted that the move is intended to counter harsh competition from the AI giant’s rivals, such as Anthropic.
Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.
Robinhood isn’t waiting to get sued in Washington state.
The financial services company filed a preemptive federal suit against Washington’s attorney general and gambling commission, arguing the state can’t use its gambling laws to shut down prediction market trading that it contends is authorized under federal commodities law.
The suit comes a few days after Washington Attorney General Nick Brown sued prediction market platform Kalshi in state court. The state takes the position that event contracts — which let users wager on the outcome of real-world activities ranging from NFL games to elections to the number of measles cases in a given year — amount to illegal gambling.
In its lawsuit, filed March 30 in U.S. District Court in Tacoma, Wash., Robinhood argues that federal law preempts Washington’s gambling statutes as applied to event contracts traded on exchanges regulated by the Commodity Futures Trading Commission.
Robinhood Markets, based in Menlo Park, Calif., is known for popularizing commission-free stock trading. The suit was filed by its Chicago-based subsidiary, Robinhood Derivatives.
The company, which is registered with the CFTC as a futures commission merchant, offers event contracts through the Kalshi and ForecastEx exchanges and says it plans to launch trading on a third exchange, Rothera, later this year, according to the complaint.
Pre-emptive move: The company points to the Kalshi suit and a December warning from the state Gambling Commission declaring prediction markets “unauthorized” as evidence that enforcement against the company is imminent.
The complaint was filed on behalf of Robinhood by the law firms Davis Wright Tremaine in Seattle and Cravath, Swaine & Moore in New York.
Robinhood’s suit cites Brown’s statement, at a press conference last week, that Kalshi is “just a bookie with a fancy name, and a huge amount of venture capital behind them.”
The suit says the company had “no choice but to file this lawsuit to protect its customers and its business.”
“[W]e believe in the power of prediction markets and the important role they play at the intersection of trading, news, economics, politics, culture, and sports,” a Robinhood spokesperson said via email, noting that the markets are federally regulated. “This step, consistent with our past actions in other jurisdictions, aims to preserve access for customers in Washington.”
GeekWire has reached out to the Washington AG’s office for comment.
Broader landscape: The case is part of a national wave of litigation over prediction markets. Kalshi is fighting more than 20 civil lawsuits, and Arizona’s AG filed criminal charges last month.
Courts are split on the issue. Federal judges in New Jersey and Tennessee, for example, have ruled that states cannot enforce their gambling laws against federally regulated prediction markets, while state courts in Massachusetts and Ohio have ruled that they can.
Washington state has staked out a broader position than other states in this fight, arguing that all event contracts — not just sports bets — are illegal under state law. Other states have focused their enforcement on sports-related contracts specifically.
A bipartisan bill introduced last week by Sens. Adam Schiff (D-Calif.) and John Curtis (R-Utah) would ban sports betting on prediction market platforms.
Leaf blowers are versatile tools for keeping your yard tidy, blowing snow off your car, or anything else that needs a healthy measure of forced air. Unfortunately, your average blower is pretty big and unwieldy. For people with space constraints, limited mobility or noise restrictions, a smaller, more compact leaf blower might be just the ticket.
Enter the mini leaf blower. We all know that a full-size leaf blower packs some power, but how does a tiny handheld version perform? Does the reduction in size make it less useful? In this video, we take a couple of mini leaf blowers purchased online, put them through the wringer, and see what each one is capable of.
Will the name brand come up on top, or will a lesser-known brand take the crown as the winner? More importantly, are mini leaf blowers even worth it compared to the full-size versions?