Cybersecurity researchers have uncovered a large-scale fraud operation that uses Telegram’s Mini App feature to run crypto scams, impersonate well-known brands, and distribute Android malware.
A new report by CTM360 dubs the platform FEMITBOT, after a string found in its API responses, and says it uses Telegram bots and embedded Mini Apps to create convincing, app-like experiences directly within the messaging platform.
Telegram Mini Apps are lightweight web applications that run inside Telegram’s built-in browser, enabling services such as payments, account access, and interactive tools without requiring users to leave the app.
Abusing Telegram mini apps
According to a CTM360 report shared with BleepingComputer, the FEMITBOT platform is used to conduct multiple types of scams, including fake cryptocurrency platforms, financial services, AI tools, and streaming sites.
In various campaigns, threat actors impersonated widely recognized brands to increase credibility and engagement, while using the same backend infrastructure with different domains and Telegram bots.
Some of the brands impersonated in this campaign include Apple, Coca-Cola, Disney, eBay, IBM, Moon Pay, NVIDIA, and YouKu.
Telegram Mini App impersonating NVIDIA Source: CTM360
Researchers say the activity relies on a shared backend: multiple phishing domains return the same API response, “Welcome to join the FEMITBOT platform,” indicating that they all run on the same infrastructure.
API response found in FEMITBOT campaigns Source: CTM360
The operation uses Telegram bots to display phishing sites directly within the messaging app. When a user interacts with a bot and clicks “Start,” the bot launches a Mini App that displays a phishing page in Telegram’s built-in WebView, making it appear as part of the app itself.
Once inside, victims are shown dashboards with fake balances or “earnings,” often paired with countdown timers or limited-time offers to create a sense of urgency.
When users attempt to withdraw funds, they are prompted to make a deposit or complete referral tasks, a common tactic in investment and advance-fee scams.
The researchers say the infrastructure is designed to be used across different campaigns, allowing attackers to easily switch branding, languages, and themes.
The campaigns also use tracking scripts, such as Meta and TikTok tracking pixels, to monitor user activity, measure conversions, and likely optimize campaign performance.
Some Mini Apps also attempted to distribute malware in the form of Android APKs that impersonated brands like the BBC, NVIDIA, CineTV, Coreweave, and Claro.
Some of the Android APKs pushed by FEMITBOT Source: CTM360
Users are prompted to download Android APK files, open links within the in-app browser, or install progressive web apps that mimic legitimate software.
“The APK filenames are carefully chosen to resemble legitimate applications or use random-looking names that don’t immediately trigger suspicion,” explains CTM360.
“The APKs are hosted on the same domain as the API, ensuring TLS certificate validity and avoiding mixed-content warnings in the browser.”
Users should be cautious when interacting with Telegram bots that promote crypto investments or prompt them to launch Mini Apps, especially if they are asked to deposit funds or download apps.
As a general rule, Android users should avoid sideloading APK files, which are commonly used to distribute malware outside the Google Play Store.
ASUS is clearly going all-in on Snapdragon-powered creator machines, and its latest launch might be one of the most interesting yet. The new ProArt PZ14 is here, and it’s not just another 2-in-1. It’s ASUS trying to blend AI, portability, and serious creator-grade hardware into one compact device.
What makes the new ProArt PZ14 stand out?
The new ProArt PZ14 is a 14-inch detachable 2-in-1 built around the latest Snapdragon X2 Elite (X2E-88-100) chip, and that alone sets the tone. This is the successor to the ProArt PZ13, and it isn’t your typical thin-and-light. The chip is an 18-core processor with up to 80 TOPS of AI performance, which means the machine is built for tasks like on-device AI editing, rendering, and multitasking without relying heavily on the cloud.
Then there’s the display, which honestly steals the show. ASUS has packed in a 14-inch Lumina OLED panel with a 144Hz refresh rate, 3K resolution, and excellent color accuracy aimed squarely at creators. The form factor is equally important here. It’s a detachable design with a stylus, keyboard, and stand, making it equally usable as a tablet or a laptop, depending on the workflow.
Is this the best creator laptop?
This device feels like ASUS positioning itself right in the middle of the AI PC transition. With Snapdragon chips gaining traction thanks to efficiency and AI capabilities, the ProArt PZ14 is clearly built to take advantage of that shift. It also checks all the boxes for creators on the move. It’s lightweight at around 0.79 kg, packs up to 32GB RAM and 1TB storage, and includes a fairly large 75Wh battery for a device this thin. The inclusion of Wi-Fi 7, USB4, and stylus support further reinforces that this is meant to be a flexible, all-in-one creative machine rather than just a secondary device.
Right now, the ProArt PZ14 has launched in China, with ASUS confirming that a global rollout is coming soon. While exact timelines vary, earlier announcements suggest broader availability could follow in the coming months as part of ASUS’s wider 2026 lineup.
It doesn’t look like GameStop’s wild ride is stopping anytime soon, after the Wall Street Journal reported that the company is about to make an offer to acquire eBay. While an official offer hasn’t been submitted yet, WSJ said that GameStop could make a buyout offer for eBay “as soon as later this month.”
The WSJ noted that GameStop’s market value sat at around $11 billion as of Friday’s market close, while eBay towered over it at $45 billion. The report didn’t include details on the potential offer, but WSJ said that GameStop CEO Ryan Cohen could take the offer directly to eBay’s shareholders if the company isn’t receptive.
It’s important to note that Cohen could receive $35 billion in stock if he meets certain criteria, including increasing GameStop’s market value to $100 billion. Acquiring eBay could also be part of Cohen’s plan to evolve GameStop beyond its reputation as a video games and collectibles retailer.
However, the company has experienced plenty of ups and downs in recent history. In 2022, GameStop attempted to build a marketplace for non-fungible tokens that ultimately shuttered a couple of years later. More recently, GameStop announced its plans to pivot towards retro gaming at select locations. While the company is still throwing ideas at the wall and seeing what sticks, it also closed down more than 400 retail locations across the US earlier this year.
Apple will reveal more Apple Intelligence features than ever before during WWDC, but they will continue to stay out of the user’s way. Those who don’t want AI can simply ignore it or turn it off.
We’re only a few weeks away from WWDC 2026, so the internal leaks have begun in earnest. While I’m sure Apple Intelligence and AI will play a major role at the event, I also expect Apple to respect its user base.
Unless something dramatic has changed at Apple, and no, I’m not talking about a CEO transition, I doubt Apple’s stance on AI has shifted. Ever since its first big AI event at WWDC 2024, Apple has made it clear that it views AI as a tool that should be in the background and on device.
Of course, Apple did want to emphasize that it had AI at all, so that’s where the rainbow Siri interface and various elements in features like Writing Tools came from. This approach is what earned Apple the “behind” label from pundits.
That label carries a lot of weight, especially when it isn’t properly defined. Apple definitely doesn’t have a competitive tool for image or video generation, nor does it have a chatbot. It also doesn’t have a can opener on the iPhone, so I could also say Apple is behind in that department as well.
The reality is that Apple’s hardware ecosystem is above and beyond what most other companies are offering in the AI space today. If rumors are correct about what is coming in iOS 27, Apple will be a powerhouse in the space that can’t be ignored.
But the AI itself? It can be ignored.
It’ll be there, but not in your face
AI is, and always should have been, treated as a background task that users have no business knowing about or interacting with directly. Imagine if the industry had the same reaction to the first successful machine learning decision tree.
Sure, the Photos editing tools will get some new AI features, like extending beyond the frame, changing the perspective of a Spatial Photo, or an AI-powered enhance tool. Most iPhone users don’t even open the edit pane or know that it’s there, but with iOS 27, some of those tools will have an AI backend.
Visual Intelligence is apparently moving to the Camera app as a toggle. I’m willing to bet that the toggle can be hidden, especially since the feature can still be launched by long-pressing Camera Control. Either way, don’t want it? Don’t use it.
Siri is being revamped with a new backend powered by Apple Foundation Models, but users don’t need to know that. They’ll still be able to play music, set timers, or make calls with the assistant as usual. Those who want to can go further by entering longer chatbot-like conversations, but it isn’t a requirement.
I could go on, but I believe I’ve made my point.
Apple is the only company doing AI right. It is a background tool that can do some very interesting things, but it isn’t meant to be the product itself.
Apple doesn’t need AI to succeed
What is most interesting about Apple’s place in the AI race is that it has proven it doesn’t need AI at all. The iPhone’s growing popularity is the key indicator. So, seeing Apple slowly grow its AI feature set even if it doesn’t really need it is very interesting.
If anything, AI needs Apple to succeed.
One of the more significant rumored features of iOS 27, which, yes, can still be ignored, will be the ability to call out to any third-party tool. For example, a user who wants queries to go through Claude could designate the Claude app as an endpoint, with Anthropic supporting the action through an API.
It means that Apple Foundation Models powering Apple Intelligence could drive on-device functions and Private Cloud Compute, but where needed, users could choose to target other models on their own. This would also mean not needing some kind of partnership with other companies like OpenAI to pull it off.
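None of this is a public API today, so the sketch below is purely hypothetical: the function names and the `endpoints` registry are my own invention, standing in for whatever mechanism Apple and third parties might actually ship. The point is only the shape of the routing idea, with an on-device default and user-designated third-party targets.

```python
# Hypothetical sketch of routing queries between an on-device model
# and user-designated third-party endpoints. All names are invented.

def on_device_model(query: str) -> str:
    # Stand-in for Apple Foundation Models running locally.
    return f"[on-device] {query}"

def claude_endpoint(query: str) -> str:
    # Stand-in for a third-party app registered as an endpoint via an API.
    return f"[claude] {query}"

# The user designates which tool handles which kind of query.
endpoints = {"default": on_device_model, "claude": claude_endpoint}

def route(query: str, target: str = "default") -> str:
    # Fall back to the on-device model if the target isn't registered.
    handler = endpoints.get(target, endpoints["default"])
    return handler(query)

print(route("Summarize my notes"))               # handled on device
print(route("Draft an essay", target="claude"))  # sent to the designated app
```

The design choice worth noticing is the fallback: queries always have somewhere to go on device, and third-party routing is strictly opt-in.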
While I wish we had some of these AI features to play with today, I’m excited for what the summer beta cycle might provide. WWDC 2026 is nearly a month away, so we don’t have long to wait.
Relive the magic of the 1980s by stepping inside a classic Japanese arcade and playing “Tetris” on the Apple Vision Pro.
Tetris may not be the first video game, but it’s hard to think of any other franchise that is as iconic. In fact, Tetris ranks number two on the best-selling video game franchise list, second only to everyone’s favorite plumber, Mario.
And now you can relive the magic of classic Tetris on the Apple Vision Pro, thanks to Retrocade.
This isn’t technically Tetris’ first appearance on Retrocade. Initially, the classic title featured as an Easter egg in the in-game back office.
But now it’s joined the growing list of classic titles showing up on Retrocade. Currently, Tetris is exclusive to Retrocade for the Apple Vision Pro, and comes with a brand new Japanese arcade environment.
Resolution Games’ Retrocade was added to Apple Arcade in February. As the name implies, it’s an app that aims to give the arcade experience to a modern audience by including a selection of classic titles to play.
Currently, the list of games available on Retrocade includes:
Asteroids
Bubble Bobble
Breakout
Centipede
Dig Dug
Frogger
Galaga
Haunted Castle
Pac-Man
Space Invaders
Tempest
Tetris
Track & Field
While most games are also available for iPhone and iPad, Tetris is exclusively available for the Apple Vision Pro. Retrocade is available via an Apple Arcade subscription, which costs $6.99 per month or $49.99 per year.
Apple Arcade can be shared with up to six family members. It is also included in every Apple One tier.
I’m a Star Wars fan who grew up with Lego as my favourite toy, so Lego Star Wars sets have become some of my aspirational purchases as an adult, partly because the larger sets are expensive on their own, and partly because I don’t have room in my small apartment to display every set on my wish list.
So it’s a good thing that Amazon has slashed prices on several Lego Star Wars sets in celebration of May the Fourth, including the ones on my wish list. There are also offers on the flashy new Smart Play sets that were unveiled at CES earlier this year; with Star Wars Day here, even those are now quite affordable.
There are also a lot of other sets across a variety of price ranges that have discounts too. So go on, indulge yourself and May the Fourth be with you.
For years, the smartphone chip conversation has been pretty straightforward. A phone with Snapdragon inside was almost always assumed to be the better option. If it had Exynos or MediaTek, the reaction was usually more doubtful. Qualcomm earned its reputation over time, but by 2026, that hierarchy no longer feels as solid.
MediaTek’s last couple of Dimensity 9000-series chips have been going neck and neck with Snapdragon 8-series SoCs, while Exynos has typically trailed behind both. Now, though, the race has become a lot more interesting.
My recent time with the Galaxy S26, powered by Exynos 2600, has already surprised me in terms of performance. And once you widen the lens to include the Snapdragon 8 Elite Gen 5 in the S26 Ultra and the Dimensity 9500 in devices like the Oppo Find X9, the whole “Snapdragon automatically equals better” idea starts showing some cracks.
| Benchmark | Galaxy S26 (Exynos 2600) | Galaxy S26 Ultra (Snapdragon 8 Elite Gen 5) | Oppo Find X9 (Dimensity 9500) |
|---|---|---|---|
| AnTuTu Total | 3,101,654 | 3,638,265 | 3,512,048 |
| Geekbench 6 Single-Core | 3,036 | 3,524 | 3,207 |
| Geekbench 6 Multi-Core | 10,534 | 10,823 | 9,345 |
| 3DMark Wild Life Extreme | 6,366 | 6,519 | 7,142 |
| 3DMark Wild Life Extreme Stress Test Stability (%) | 53.5 | 63.2 | 54.9 |
| Temperature After Stress Test (°C) | 40.2 | 38.7 | 39.2 |
Galaxy S26 was a pleasant surprise
The easiest surprise here is that the Exynos 2600 does not show up as some obvious weak link. In my testing, the base Galaxy S26 put up 3,036 single-core and 10,534 multi-core in Geekbench 6, plus an AnTuTu total north of three million. Historically, Samsung released its flagship devices in two variants: North America, China, and Japan got Snapdragon versions, while the rest of the world got Exynos processors. The company faced a lot of criticism for that split because older flagship models on Exynos chips often fell behind their Snapdragon-powered counterparts.
That, along with chip production yield issues, pushed Samsung to make a few generations of Galaxy S phones exclusively with Snapdragon processors. But it looks like Exynos is back. On 3DMark Wild Life Extreme, the Galaxy S26 scored 6,366, though the stress-test results are a little more mixed at 53.5% stability. These are healthy numbers for a smaller flagship, especially one many people were probably ready to dismiss the moment they saw “Exynos” on the spec sheet.
The S26 Ultra is faster, but not by much
The Galaxy S26 Ultra still has advantages, and that’s not really surprising. Its Wild Life Extreme Stress Test posted a best loop score of 6,519 and 63.2% stability, helped by its larger vapor chamber cooling setup. So yes, the overall thermal performance was better, but not by the kind of margin that completely changes the conversation when you compare it with the standard S26. In both AnTuTu and Geekbench, the Galaxy S26 Ultra led the pack. Exynos still lags a bit, but the gap is no longer the kind you would notice in ordinary day-to-day performance.
The S26 Ultra is clearly faster, but the difference is nowhere near as dramatic as older Snapdragon-versus-Exynos comparisons used to be. Especially when you compare the Geekbench multi-core scores, the performance is almost identical. Even without the upgraded cooling setup, the Galaxy S26 managed to stay surprisingly close to the S26 Ultra in the stress test. Where the Ultra pulls ahead more clearly is stability, which matters more once you start talking about sustained performance under load.
MediaTek is the part that makes the race fun
The Dimensity 9500 in the Oppo Find X9 is what really makes this conversation interesting. Its Geekbench 6 single-core score of 3,207 beats the base Galaxy S26, while its AnTuTu score of 3,512,048 edges ahead as well. On 3DMark Wild Life Extreme, it posted 7,142, which puts it above both the S26 and the S26 Ultra.
MediaTek is no longer showing up as the “other” flagship chip brand. It is putting up top-tier numbers and staying in the same conversation as Qualcomm and Samsung’s in-house silicon. For a long time, Dimensity chips were seen as the more budget-friendly alternative powering cheaper mid-range and entry-level phones. Results like these show how much ground MediaTek has made up at the high end. The one weak point is the 54.9% stress-test stability, which trails the S26 Ultra.
Snapdragon still makes excellent chips, and the S26 Ultra proves that easily. But reputation alone is no longer a substitute for looking at the actual results. The Exynos 2600 has enough performance to not fall behind anymore, and the Dimensity 9500 is close enough in raw horsepower to make the flagship chip race feel properly competitive again.
A new Quordle puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Sunday’s puzzle instead then click here: Quordle hints and answers for Sunday, May 3 (game #1560).
Quordle was one of the original Wordle alternatives and is still going strong now more than 1,500 games later. It offers a genuine challenge, though, so read on if you need some Quordle hints today – or scroll down further for the answers.
Enjoy playing word games? You can also check out my NYT Connections today and NYT Strands today pages for hints and answers for those puzzles, while Marc’s Wordle today column covers the original viral word game.
SPOILER WARNING: Information about Quordle today is below, so don’t read on if you don’t want to know the answers.
Quordle today (game #1561) – hint #1 – Vowels
How many different vowels are in Quordle today?
• The number of different vowels in Quordle today is 4*.
* Note that by vowel we mean the five standard vowels (A, E, I, O, U), not Y (which is sometimes counted as a vowel too).
Quordle today (game #1561) – hint #2 – repeated letters
Do any of today’s Quordle answers contain repeated letters?
• The number of Quordle answers containing a repeated letter today is 2.
Quordle today (game #1561) – hint #3 – uncommon letters
Do the letters Q, Z, X or J appear in Quordle today?
• No. None of Q, Z, X or J appear among today’s Quordle answers.
A friend asked ChatGPT for input on a professional matter and received a banal, lackluster response. I suggested she try a different approach: ask for 15 different ideas, scan them, pick the two that felt most promising, and then ask ChatGPT to refine. She came back overjoyed. ChatGPT had not gotten smarter, but she became better at prompting.
This is my favorite gambit: ask AI for many options, delve deeper into the promising ones, and most importantly, if at first you don’t succeed, prompt, prompt again!
What follows is practical advice on how to use AI as a power tool rather than a slot machine. For a simple request, it’s overkill, but if you’re serious about prompting, read on.
Anthropic’s own guidance for prompting Claude contains a helpful hint: treat the model as a brilliant but literal-minded new employee on their first day. They are capable. They are also new. They will do exactly what you ask, so you have to ask exactly what you want.
The Anthropic team’s golden rule is to show your prompt to a colleague with no context and ask whether they could follow it. If the answer is no, the model can’t either. This principle generates a handful of habits that lift output quality immediately, before any of the more advanced techniques come into play.
One caveat from me, though: don’t think of the model as a person. It’s not. The “brilliant new employee” framing is a useful starting point, but it’s a metaphor, not reality. A new hire asks follow-up questions, remembers what you said yesterday, and notices when an instruction is dumb. Claude does none of that by default. Lean on the metaphor to remember to be specific and provide context, but drop it the moment you start to expect human judgment that just isn’t there.
Here’s the playbook, organized as a list for easy reference and periodic review.
Be specific about format, length, audience, and constraints.
Vague prompts produce vague output. The fix is to say what you actually want.
Before: Write about marketing trends.
After: Analyze the three most significant B2B SaaS marketing trends from the past six months. For each, give one company example and a one-sentence assessment of whether the trend will accelerate or plateau. Write it as a 400-word brief for a non-technical board.
Improving prompt quality is often simply a matter of stating constraints. Vague prompts produce safe, hedged, encyclopedic answers because the model has no signal about what to optimize for and defaults to coverage. Specific prompts produce opinionated, useful answers because the constraints eliminate the safe-but-useless options. Asking for “three” instead of “some” forces ranking. Asking for “accelerate or plateau” forces a call. Asking for “a board brief” determines what gets cut. Each constraint you add is a decision the model no longer gets to dodge.
Provide a few examples.
This is the highest-leverage move in everyday prompting. Models pick up patterns from examples faster than from descriptions.
Before: Turn these meeting notes into action items.
After: Turn these meeting notes into action items. Match this format: Example 1: Note: “Sarah will look into the pricing question and get back to us next week.” Action item: Sarah → research pricing options → due next Friday. Example 2: Note: “We agreed to push the launch.” Action item: Team → revise launch timeline → due before Monday’s standup. Now do the same for these notes: [paste]
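Assembling a few-shot prompt like the one above is mechanical enough to script once you do it often. Here is a minimal sketch; the helper function and its layout are my own convention, not tied to any particular SDK.

```python
# Build a few-shot prompt from (input, output) example pairs.
def few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    parts = [task]
    for i, (note, action) in enumerate(examples, 1):
        # Each example shows the input pattern and the exact output format.
        parts.append(f'Example {i}:\nNote: "{note}"\nAction item: {action}')
    parts.append(f"Now do the same for these notes:\n{query}")
    return "\n\n".join(parts)

prompt = few_shot_prompt(
    "Turn these meeting notes into action items. Match this format:",
    [
        ("Sarah will look into the pricing question and get back to us next week.",
         "Sarah → research pricing options → due next Friday"),
        ("We agreed to push the launch.",
         "Team → revise launch timeline → due before Monday's standup"),
    ],
    "[paste meeting notes here]",
)
print(prompt)
```

Keeping the examples as data rather than prose makes it trivial to swap in new ones when the pattern drifts.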
Tell the model what to do, not what not to do.
Negative instructions are easier to violate than positive ones. Reframing in the affirmative gets you cleaner results.
Before: Don’t be too formal. Don’t use jargon. Don’t make it boring.
After: Write in a warm, conversational tone, the way a smart colleague would explain this over coffee. Use plain English and short sentences.
Match the style of your prompt to the style of the output you want.
This one surprises some people. If your prompt is full of bullets and bold text, the model will return bullets and bold text. If you want flowing prose, write in flowing prose.
These habits sound modest. But applied together, they take prompts from the level my friend was operating at, where ChatGPT seemed unhelpful, to a level where AI yields dividends left and right. The advanced techniques in the rest of this piece build on this foundation, but they won’t rescue a prompt that fails the basics.
Beyond the basics, here is a set of effective habits that show up in guidance from OpenAI, Google, working developers, and the people who build production AI systems for a living. These are not techniques so much as workflow disciplines.
Iterate; treat prompting as test-driven.
Your first prompt is a draft. The most experienced practitioners build small sets of test cases (the inputs they care about), run their prompt across them, and refine until the output is consistently good. Several open-source toolkits exist to formalize this loop.
Before: Write the prompt. Try it on one example. Looks good. Ship it.
After: Write the prompt. Pick five inputs, including the awkward edge cases. Run the prompt on all five. Where it fails, change one thing in the prompt and retest. Keep the version that works on the most cases.
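That loop can be automated in a handful of lines. In this sketch, `call_model` is a stub standing in for whichever LLM API you actually use, and the checks are simple string predicates; real suites would use richer assertions.

```python
# Run one prompt template across several test inputs and report failures.
# `call_model` is a placeholder; swap in a real LLM API call.

def call_model(prompt: str) -> str:
    # Stub that returns a canned answer so the harness is runnable.
    return "Root cause: None. Fix: check input. Reason: demo stub."

def run_suite(template: str, cases: list[str], checks) -> list[str]:
    failures = []
    for case in cases:
        output = call_model(template.format(input=case))
        # A case fails if any check predicate rejects the output.
        if not all(check(output) for check in checks):
            failures.append(case)
    return failures

template = "Help me debug this error. Include the root cause and a fix.\n{input}"
cases = ["TypeError: ...", "KeyError: ...", "empty input"]
checks = [lambda out: "cause" in out.lower(), lambda out: "fix" in out.lower()]

print(run_suite(template, cases, checks))  # list of cases that missed a check
```

The discipline is in keeping `cases` stable while you iterate: change one thing in the template, rerun, and keep whichever version fails the fewest cases.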
Specify a definition of done.
OpenAI’s own guidance for GPT-5 stresses telling the model what counts as a finished answer. Without that, the model decides for itself, often by stopping at the first plausible-looking response.
Before: Help me debug this Python error.
After: Help me debug this Python error. You are done when: (1) you have identified the root cause, (2) you have proposed a specific fix with the corrected code, and (3) you have explained why the original failed. If you are not confident on any of those three, say so explicitly rather than guessing.
Calibrate effort to the task.
Modern reasoning models have effort or thinking dials. Low effort for extraction and triage; high for synthesis and strategy. Most users leave them on default and pay for it on hard problems.
Before: Summarize this 80-page report.
After: Set thinking effort to high. Read the entire report. Identify the three most important findings, the two weakest claims, and the one question I should ask the authors. Cite page numbers.
Inject current or proprietary context directly.
Models don’t have access to your internal documents, so paste in the relevant material. Also be careful to avoid jargon and abbreviations the model may not know (instead of the acronym PMO, say “Project Management Office”).
Before: How should I structure a related work section comparing my framework to prior agent governance proposals?
After: Below is my current draft related work section, plus PDFs of the three papers I am positioning against (pasted). Based only on these sources, identify points of overlap I have not yet acknowledged and any claims in my draft that the cited papers would not actually support.
Build a personal prompt library.
This is a power move for a pro. The patterns that worked yesterday are likely to work tomorrow. Stop rewriting them from scratch. Save the prompts that consistently produce good results, organized by task type. Treat them as living documents, not one-off attempts.
Before: Open a new chat. Type out the framing, the constraints, the examples, and the question from memory. Watch yourself forget two of them.
After: Open your prompt library. Copy the “draft a memo for my manager” template. Paste in today’s specific topic and source material. Run.
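A prompt library can be as simple as a dictionary of templates in a file you version-control. This sketch is illustrative; the template names and wording are made up for the example.

```python
# A minimal personal prompt library: named templates with placeholders.
PROMPT_LIBRARY = {
    "manager_memo": (
        "Write a 300-word memo for my manager about {topic}. "
        "Lead with the recommendation, then give two supporting points. "
        "Source material:\n{source}"
    ),
    "meeting_actions": (
        "Turn these meeting notes into action items in the form "
        "'owner → task → deadline':\n{notes}"
    ),
}

def render(name: str, **fields: str) -> str:
    # Fill a saved template; raises KeyError if the name or a field is missing,
    # which is useful: a library entry should fail loudly, not silently.
    return PROMPT_LIBRARY[name].format(**fields)

print(render("manager_memo", topic="Q3 hiring plan", source="[paste here]"))
```

Because the templates fail loudly on a missing field, you can’t forget the framing or the constraints the way you would typing from memory.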
Here are some key don’ts:
Don’t tell reasoning models to “think step by step.”
Models like OpenAI’s o-series and GPT-5 thinking already do that internally. Adding the instruction can hurt rather than help. Save it for the everyday models.
Don’t lean on “do not” or “never” instructions for everything.
Models, especially Gemini, can over-index on broad negative constraints and degrade on basic reasoning. Prefer positive framing: tell the model what to do.
Don’t trust polished prose as evidence of correctness.
Hallucinations are most dangerous when they are well-written. As I pointed out in How to Read with AI, you have to carefully verify AI output.
Don’t use aggressive language (“CRITICAL: You MUST…”).
Modern models are highly responsive to ordinary instructions. Aggressive phrasing can produce overcautious output and trigger refusals. Use normal language.
Don’t include undefined acronyms in your prompt.
They measurably degrade output. For research on the impact of prompt changes see this recent paper on Brittlebench.
Don’t change three things at once when iterating.
When a prompt isn’t working, change one variable, test, then change the next. Otherwise you don’t know what helped.
Don’t assume that the same prompt works across models.
Different model families need different prompting. The same instruction can help one and hurt another. The temperature and effort settings that work for GPT are not the ones that work for Claude or Gemini.
Don’t treat the first answer as the final one.
Failing to iterate is a common failure mode in everyday AI use. Here’s a trick for making AI better at multi-step tasks: after each attempt, have the AI write a short critique of what went wrong and tuck that note into its memory for the next try. No fancy mechanics, just the model “talking to itself” in plain English. On the next attempt, it reads its own past reflections and adjusts. This loop can produce meaningful gains over one-shot prompts.
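The self-critique loop described above takes only a few lines to sketch. Here, `call_model` is again a stub for a real API; the point is the shape of the loop, where each attempt’s critique is tucked into the context for the next try.

```python
# Reflection loop: attempt, critique, retry with past critiques in context.
# `call_model` is a stub; swap in a real LLM API call.

def call_model(prompt: str) -> str:
    # Placeholder that labels its input so the loop is runnable end to end.
    return f"answer given: {prompt[:40]}..."

def solve_with_reflection(task: str, rounds: int = 3) -> str:
    notes: list[str] = []  # the model's own past critiques, in plain English
    answer = ""
    for _ in range(rounds):
        context = "\n".join(notes)
        answer = call_model(f"{task}\nPast critiques:\n{context}")
        critique = call_model(f"Critique this attempt briefly:\n{answer}")
        notes.append(critique)  # remember the critique for the next attempt
    return answer

print(solve_with_reflection("Plan a three-step data migration"))
```

With a real model behind `call_model`, each pass reads its own earlier critiques and adjusts, which is exactly the “talking to itself” behavior described above.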
The people who get the most out of AI aren’t the ones with the best prompt templates. They’re the ones who treat the model as a powerful tool for advancing their work. You don’t need to show up with perfect clarity about what you want. A good dialog can get you there, surfacing options and questions you’d have missed on your own. What it can’t do is recognize the right answer when it appears. That part is still on you.
Editor’s note: GeekWire publishes guest opinions to foster informed discussion and highlight a diversity of perspectives on issues shaping the tech and startup community. If you’re interested in submitting a guest column, email us at tips@geekwire.com. Submissions are reviewed by our editorial team for relevance and editorial standards.
Welcome back to TechCrunch Mobility — your central hub for news and insights on the future of transportation. To get this in your inbox, sign up here for free — just click TechCrunch Mobility!
We’re going to do a bit of a deep dive today, which may make this newsletter look a little different than normal. There is a reason!
This newsletter is not region-specific, but sometimes there are policies at the state level that have widespread implications for tech companies and startups alike. Which brings me to California and the new autonomous vehicle testing and deployment rules issued this week by the state’s Department of Motor Vehicles.
There are two new sets of rules — collectively 100 pages long — that cover requirements for the testing and deployment of AVs. I spent the past few days speaking to engineers and policy folks working at AV companies and discovered that they have strong opinions and few want to speak publicly about it. But thanks to the public commentary period on these regulations, we have some insight into what the industry supported and what it did not.
The regulations include new, more robust requirements for data collection and sharing, training, and operations. Here are a few items that stuck out and what insiders told me.
How do you ticket a robotaxi? Under these new rules, law enforcement can cite AV companies for traffic violations committed by their vehicles. The rule, called “Notice of Autonomous Vehicle Noncompliance,” requires the manufacturer (meaning the robotaxi company) to report the violation to the DMV within 72 hours of receiving it from law enforcement.
I’ve heard a number of interpretations of this rule and how it will be implemented, but it appears there is not a monetary fine attached to these violations. Instead, these violations are another piece of data that the DMV can use to identify problems and take action if needed.
Insiders told me that the data is actionable and more important than a monetary fine. My question: Why not both?
The good news for industry: The DMV will now allow heavy-duty vehicles equipped with autonomous vehicle tech to test and eventually deploy on public roads. Self-driving truck companies are happy with this outcome. Daniel Goff, VP of external affairs at Kodiak, told me the company is already working on the required documentation to apply for a permit.
The burden for the industry: The word that came up in every conversation I had with someone in the AV industry was “burdensome.” And it was always used in reaction to the new data collection and sharing regulations.
Goodbye, disengagement reports; hello, malfunctions: Others were happy to see annual disengagement reporting disappear. Disengagement reports, which detailed instances when human drivers had to take over control due to technology failures or safety concerns, have been controversial because companies use varying standards. This has made it impossible to compare the results or rate the proficiency of autonomous vehicle technology.
That entire section has been removed and replaced with a requirement to report “dynamic driving task performance relevant system failure.” This may seem like semantics — trading one jargony phrase for another. Insiders tell me that while it is not a perfect metric, it is clearer than its predecessor. That doesn’t mean it is beloved either.
There is a lot more in these documents, including a requirement to provide annual updates to first responder interaction plans, access to manual vehicle override systems, two-way communication links with 30-second response times, and updated training requirements to ensure safe and timely interactions with first responders.
My question for you, reader: do these rules go too far, or are they appropriate, providing the kind of reporting and data collection needed to hold these companies accountable? Sign up for the Mobility newsletter to vote in our polls!
A little bird
We had a lot of little birds talk to us about the new California AV rules, so nothing new to add here. But remember, you can always send us tips. Here’s how.
BMW i Ventures launched a new $300 million fund with a timely thesis: AI will reshape how the automotive industry operates. The fund will invest in early-stage through Series B startups in North America and Europe that are working on agentic AI and physical AI as well as industrial software, advanced materials, and manufacturing and supply-chain technologies. This third fund brings the firm’s total capital under management to $1.1 billion.
Other deals that got my attention …
Sereact, a German robotics startup, raised $110 million in a Series B funding round led by VC Headline. Other investors include Bullhound Capital, Felix Capital, Daphni, Air Street Capital, Creandum, and Point Nine.
Spirit Airlines is preparing to shut down after failing to secure a $500 million lifeline from the government, the WSJ reports. The company is expected to cease operations around 3 a.m. ET Saturday.
Notable reads and other tidbits
China suspended issuing new licenses for autonomous vehicles after dozens of Baidu’s Apollo Go robotaxis suddenly stopped last month, Bloomberg reported.
Faraday Future paid around $7.5 million to a company controlled by its founder, Jia Yueting, in 2025, senior reporter Sean O’Kane discovered in a recent SEC filing.
Rivian reported earnings this week and one item that stood out to us — and to many others — was the downsizing of its DOE loan from $6.6 billion to $4.5 billion. That loan restructuring comes with changes to its Georgia factory. Instead of two 200,000-vehicle capacity structures on the Georgia site, Rivian will now build a 300,000-vehicle capacity factory and leave the adjacent “pad” untouched and ready for future development. Analysts didn’t necessarily view this as negative but did position this as rightsizing. Barclays, for instance, views the modification as Rivian adjusting to the current EV environment, according to a research note published Friday. Barclays also stated it didn’t believe Rivian currently plans to build the second plant at Georgia, “at least not until early/mid next decade.”
Tesla launched a Semi-Charging for Business program, which includes a new product called the Basecharger that is designed for depot and overnight use.
Uber has tapped Hertz to clean, charge, and fix its Lucid Motors robotaxis. This announcement left us with a cheeky question: How many companies does it take to launch a robotaxi service?
Uber customers in the United States can now book hotels directly through the app, one of several new features announced this week that pushes far beyond the company’s original ride-hailing purpose and even deeper into its users’ lives. At launch, Uber customers will have access to more than 700,000 hotels worldwide through a partnership with Expedia Group, the travel company that Uber CEO Dara Khosrowshahi led for 12 years.
Vay, a remote driving tech startup, says it has grown its fleet to 175 vehicles on the road and has surpassed 60,000 rides.
Saveitforparts picked up a heavy Sigma 400mm XQ lens and its matching 2x teleconverter at a local thrift store for $14.99. He carried the whole assembly home, attached a simple adapter, and mounted it on a decade-old Sony NEX-3 digital camera. The goal looked simple on paper: point the bargain rig skyward when the International Space Station passed overhead and record whatever showed up in the frame.
He began by checking pass predictions on n2yo.com and NASA's Spot the Station app to determine when and where the ISS would cross the sky above his location. Timing was critical: the ISS moves so quickly that being one second late leaves you with nothing but empty blue sky. He set the lens on its built-in tripod foot, roughly aimed it along the predicted path, and waited for the bright dot of sunlight reflecting off the station's solar panels.
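As a rough back-of-the-envelope check on why the timing is so unforgiving, the sketch below estimates how long the ISS stays inside the frame of an 800mm setup. The ISS altitude, orbital speed, and sensor width are assumed typical values, not figures from the article:

```python
import math

# Assumed values (not from the article): typical ISS orbit and an APS-C sensor.
ISS_ALTITUDE_KM = 420.0   # mean ISS altitude
ISS_SPEED_KMS = 7.66      # ISS orbital speed
SENSOR_WIDTH_MM = 23.5    # APS-C sensor width (e.g., a Sony NEX-3)
FOCAL_LENGTH_MM = 800.0   # 400mm lens plus 2x teleconverter

# Horizontal field of view of the lens/sensor combination, in degrees.
fov_deg = math.degrees(2 * math.atan(SENSOR_WIDTH_MM / (2 * FOCAL_LENGTH_MM)))

# Apparent angular speed of the ISS for an overhead pass (rad/s -> deg/s).
ang_speed_deg = math.degrees(ISS_SPEED_KMS / ISS_ALTITUDE_KM)

crossing_s = fov_deg / ang_speed_deg
print(f"FOV: {fov_deg:.2f} deg, ISS: {ang_speed_deg:.2f} deg/s, "
      f"time in frame: {crossing_s:.1f} s")
```

For an overhead pass the station sweeps roughly a degree of sky per second, so a field of view under two degrees gives well under two seconds in frame if the camera isn't panning with it.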
Getting a clear focus was difficult from the start: dirt had accumulated inside the old lens over the years, scattering light and washing out fine detail. So he practiced on the moon first, adjusting the focus until the craters looked sharp in the viewfinder. Shutter speed and exposure required a delicate balance; too slow and the station blurred into a streak, too fast and the small dot vanished against the sky.
The first few passes produced exactly what he expected: a small white speck dead center in each frame. He switched to a Canon HF G70 camcorder with a 2.2x telephoto converter, which produced video rather than stills, but the results were still unimpressive. He recaptured the station as a moving blob, occasionally catching a glimpse of its central body and extended solar arrays when alignment and lighting were exactly right.
Undeterred, he turned his attention to solar transits. These events last less than four seconds and occur low on the horizon, with the station well over a thousand kilometers away. He put a pair of solar viewing glasses over the camcorder lens to cut the sun's brightness, programmed the camera to fire a rapid burst of photos at 1/250th of a second, and waited for the predicted moment. Morning atmospheric haze added another layer of distortion, but he still managed to capture a few frames showing the station as a clear black speck crossing the sun's disk. Stacking those frames in software confirmed that the timing matched the transit calculation exactly.
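A quick sanity check on that sub-four-second window: the transit duration is roughly the sun's angular diameter divided by the station's apparent angular speed at the slant range of the pass. The 1,500 km slant range below is an assumed illustrative figure for a low-elevation pass, and treating all orbital motion as transverse makes this a lower bound on the real duration:

```python
import math

# Assumed illustrative values, not figures from the article.
SUN_ANGULAR_DIAMETER_DEG = 0.53   # apparent size of the sun's disk
ISS_SPEED_KMS = 7.66              # ISS orbital speed
SLANT_RANGE_KM = 1500.0           # assumed observer-to-station distance, low pass

# Apparent angular speed, treating all motion as transverse (an upper bound,
# so the resulting duration is a lower bound).
ang_speed_deg = math.degrees(ISS_SPEED_KMS / SLANT_RANGE_KM)

transit_s = SUN_ANGULAR_DIAMETER_DEG / ang_speed_deg
print(f"Approximate transit duration: {transit_s:.1f} s")
```

At these ranges the crossing comes out around two seconds, which is consistent with the "less than four seconds" window and explains the need for a pre-programmed burst rather than a reaction-time shutter press.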
Every attempt ran into the same limitations: even with a total focal length of 800mm on the still camera and more on the camcorder, the station was simply too far away to resolve full structural detail. Hand-tracking was a pain; with no motorized mount to speak of, any tiny movement during the brief window could push the target straight out of frame. A full moon transit was cut short by a bank of clouds and a run of bad luck. [Source]
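To see why structural detail stays out of reach, compare the station's angular size with the pixel scale of the rig. The ISS span, the range, and the pixel pitch below are assumed typical values, not measurements from the video:

```python
import math

ARCSEC_PER_RAD = 206265.0

# Assumed values (not from the article).
ISS_SPAN_M = 109.0       # solar-array tip-to-tip width of the ISS
RANGE_M = 420_000.0      # distance for an overhead pass
PIXEL_PITCH_M = 5.1e-6   # approx. pixel pitch of a 14 MP APS-C sensor
FOCAL_LENGTH_M = 0.8     # 400mm lens plus 2x teleconverter

# Angular size of the station, in arcseconds.
ang_size_arcsec = math.atan(ISS_SPAN_M / RANGE_M) * ARCSEC_PER_RAD

# Plate scale: the patch of sky covered by a single pixel.
pixel_scale_arcsec = (PIXEL_PITCH_M / FOCAL_LENGTH_M) * ARCSEC_PER_RAD

span_px = ang_size_arcsec / pixel_scale_arcsec
print(f"ISS spans ~{ang_size_arcsec:.0f} arcsec, ~{span_px:.0f} px across")
```

Roughly 40 pixels sounds workable on paper, but atmospheric seeing of a few arcseconds already smears several pixels at this plate scale, and hand-tracking jitter eats most of the rest — which is why the station reads as a blob with hints of solar arrays rather than crisp structure.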