While Elon Musk faces off against his former colleague and OpenAI co-founder Sam Altman in court, Musk’s rival firm xAI, founded to take on OpenAI, isn’t slowing down on launching competitive new products and services.
The new products arrive after months of tumult at xAI that saw all 10 of Musk's original co-founders of the lab, along with dozens more researchers, exit the firm, while Grok was eclipsed on performance by newer competing LLMs from the likes of OpenAI, Anthropic, Google, and Chinese firms DeepSeek, Moonshot (Kimi), Alibaba (Qwen), z.ai, and others.
While Grok 4.3 does mark a significant leap in performance on third-party benchmarks over its direct predecessor Grok 4.2, according to the independent AI model evaluation firm Artificial Analysis, it still remains below the state-of-the-art set by OpenAI and Anthropic’s latest models.
But the marquee feature of the Grok brand, aside from Musk's stated opposition to "wokeness" and its more freewheeling personality and image generation policies, has increasingly been its low price point when accessed by developers and users via the xAI application programming interface (API). Grok 4.3 furthers that trend: it costs $1.25 per million input tokens and $2.50 per million output tokens (up to 200,000 input tokens, at which point costs double, a common pricing strategy among leading AI labs), compared to its direct predecessor Grok 4.2's initial API pricing of $2/$6 per million input/output tokens.
Grok 4.3 API pricing screenshot. Credit: VentureBeat
According to xAI's release notes, Grok 4.3 began beta testing in April for subscribers to xAI's SuperGrok plan ($30 monthly) and to its sibling social network X's Premium+ plan ($40 monthly, with 50% off for the first two months). Now it's available to all through the xAI API and through partner OpenRouter.
Reasoning baked-in and agentic tool-use capabilities
At the core of Grok 4.3 is a fundamental shift in how the model processes information. Unlike previous iterations where “chain-of-thought” or reasoning could often be toggled or configured by effort levels, Grok 4.3 is built with reasoning as an active, permanent state.
This means the model is designed to “think” before it speaks for every query, a strategy intended to maximize factual accuracy and the handling of complex, multi-step instructions.
The model’s memory is equally expansive, featuring a 1 million-token context window. To put this in perspective, a million tokens is roughly equivalent to several thick novels or the entire codebase of a mid-sized application.
This allows Grok 4.3 to maintain coherence over massive datasets, though xAI has implemented a “Higher context pricing” structure for requests that exceed the 200,000-token threshold.
This tiering suggests that while the "long-term memory" is available, the computational cost of managing that much information remains a significant overhead. Technically, the model accepts both text and image inputs, outputting text.
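To make the tiered pricing concrete, here is a minimal cost-estimation sketch using only the rates quoted above. The assumption that only the portion of input beyond 200,000 tokens is billed at the doubled rate, and that output tokens stay at the standard rate, is ours; check xAI's pricing page before budgeting against it.

```python
# Rough cost estimator for a single Grok 4.3 request.
# Assumes the article's quoted rates, and that only the input tokens beyond
# 200,000 are billed at the doubled "higher context" rate -- verify both
# assumptions against xAI's published pricing.

INPUT_RATE = 1.25 / 1_000_000     # USD per input token (standard tier)
OUTPUT_RATE = 2.50 / 1_000_000    # USD per output token (standard tier)
HIGH_CONTEXT_THRESHOLD = 200_000  # input tokens before higher-context pricing
HIGH_CONTEXT_MULTIPLIER = 2.0     # assumed doubling above the threshold

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return an estimated USD cost for one request."""
    standard_in = min(input_tokens, HIGH_CONTEXT_THRESHOLD)
    overflow_in = max(input_tokens - HIGH_CONTEXT_THRESHOLD, 0)
    cost = standard_in * INPUT_RATE
    cost += overflow_in * INPUT_RATE * HIGH_CONTEXT_MULTIPLIER
    cost += output_tokens * OUTPUT_RATE  # output kept at the standard rate here
    return cost

if __name__ == "__main__":
    # e.g. ~800K tokens of legal documents in, a 5K-token summary out
    print(f"${estimate_cost(800_000, 5_000):.2f}")
```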
It is specifically optimized for agentic workflows—scenarios where an AI is not just answering a question but acting as an autonomous agent to complete a task.
For the first time, Grok has access to the same tools and environments a human professional would use. Evidence of this shift is visible in early user interactions:
Spreadsheet Engineering: In one instance, the model spent 6 minutes and 22 seconds in a “thought” phase to build a comprehensive OSRS Sailing Combat DPS analyzer. The resulting .xlsx file wasn’t a simple table but a multi-sheet dashboard including a “Reference_Data” set and a complex “DPS_Calculator” with formulaic auto-calculations.
Professional Documentation: Grok now generates formatted PDFs, such as 12-page reports on SpaceX products. These documents incorporate branding, logos, hero images, and structured tables, moving well beyond the markdown blocks of previous iterations.
Visual Presentations: The model can design 9-slide PowerPoint decks, utilizing a “Sandwich Structure” (dark titles/conclusions with light content) and integrating data-driven decision matrices and humor.
However, its knowledge of the world is not infinite; the release notes list a knowledge cut-off date of December 2025. Yet, thanks to built-in web search, Grok can reference and use up-to-date information.
In fact, Grok 4.3 arrives with an enhanced ecosystem of tools designed to make it a functional digital employee. The xAI platform now offers a robust set of server-side tools that the model can invoke autonomously based on the complexity of the query, as the brief example after the list below illustrates.
Web and X Search: These tools allow Grok to bypass its knowledge cutoff by browsing the live internet or searching X (formerly Twitter) posts, user profiles, and threads.
Code Execution: The model can run Python code in a sandboxed environment to solve mathematical problems or process data.
File and Collections Search: A built-in Retrieval-Augmented Generation (RAG) system allows users to query uploaded document collections or search through specific file attachments.
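For developers who want to see what this looks like in practice, the sketch below sends a single chat request to xAI's OpenAI-compatible endpoint. The endpoint path follows xAI's existing Grok API conventions and the `grok-4.3` model name comes from the release notes; the exact request options for the server-side tools are not reproduced here, so treat the details as illustrative rather than authoritative.

```python
# Minimal sketch of calling Grok 4.3 via xAI's OpenAI-compatible API.
# Endpoint shape follows xAI's existing Grok API conventions; confirm the
# path, model id, and any tool-related options against current docs.
import os
import requests

API_KEY = os.environ["XAI_API_KEY"]

resp = requests.post(
    "https://api.x.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "grok-4.3",  # model id as described in the release notes
        "messages": [
            {"role": "user",
             "content": "Summarize this week's X posts about lithium plasma thrusters."},
        ],
        # Per xAI's description, server-side tools (web/X search, code
        # execution, file search) are invoked autonomously by the model,
        # so no extra client-side orchestration is shown here.
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```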
xAI’s Custom Voices let you clone your voice at high quality in a minute or two
Beyond text, xAI has introduced Custom Voices, a sophisticated voice-cloning API and web-based creation suite.
This product allows developers to clone a voice from a reference audio clip as short as 120 seconds. Once cloned, the “voice ID” can be used across xAI’s Text-to-Speech (TTS) and Voice Agent APIs.
xAI’s documentation emphasizes that this is not merely about timbre; the model is designed to pick up delivery patterns.
If a user records a reference clip in a “customer support” style, the resulting AI voice will mimic that helpful, professional inflection.
Despite the creative potential, xAI has placed strict geographic limits on this feature, making it available only in the United States, with the notable exception of Illinois due to that state's biometric and privacy regulations.
While the console playground is open for general use, programmatic access via the POST /v1/custom-voices endpoint is currently gated to teams on an Enterprise plan.
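For teams that do have Enterprise access, a cloning call would presumably look something like the sketch below. Only the endpoint path is taken from xAI's documentation as cited above; the request fields are hypothetical placeholders, not confirmed parameters.

```python
# Hypothetical sketch of creating a custom voice via the Enterprise-gated
# endpoint mentioned in xAI's docs. Field names are placeholders; only the
# path (POST /v1/custom-voices) comes from the documentation cited above.
import os
import requests

API_KEY = os.environ["XAI_API_KEY"]

with open("reference_clip.wav", "rb") as f:
    resp = requests.post(
        "https://api.x.ai/v1/custom-voices",
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"reference_audio": f},          # >= ~120 seconds of speech
        data={"name": "support-agent-voice"},  # hypothetical field name
        timeout=300,
    )

resp.raise_for_status()
voice_id = resp.json().get("voice_id")  # reused later across TTS / Voice Agent APIs
print("Created voice:", voice_id)
```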
I tried it myself, and after moving through the requisite voice-sampling screens on the web (the tool asks you to read aloud several passages of unrelated dialog), I indeed had a copy of my voice that sounded eerily identical to mine and accurately pronounced new words the same way I would when reading aloud from a new script it was given.
You can delete your custom voices in one click on xAI’s Custom Voices web application and create up to 30 new ones at a time.
In terms of licensing, the Custom Voices feature is strictly “scoped to your team” and is never made available to other users, ensuring a private, commercial license for corporate assets.
Access to the new Voice Agent API (grok-voice-think-fast-1.0) is billed at a flat rate of $3.00 per hour ($0.05 per minute) for speech-to-speech interactions. That places it at the low-to-mid end of the cost range among competing voice services, according to my research:
| Service | Price per 1K characters | Estimated cost per minute | Estimated cost per hour |
| --- | --- | --- | --- |
| OpenAI TTS (Standard) | $0.015 | ~$0.015 | ~$0.90 |
| OpenAI TTS (HD) | $0.030 | ~$0.030 | ~$1.80 |
| Grok Voice Agent | N/A | $0.05 | $3.00 |
| ElevenLabs (Starter) | ~$0.30 | ~$0.30 | ~$18.00 |
| ElevenLabs (Pro) | ~$0.18 | ~$0.18 | ~$10.80 |
| Play.ht | ~$0.20 | ~$0.20 | ~$12.00 |
| Azure/Google Cloud | $0.016 – $0.024 | ~$0.02 | ~$1.00 – $1.50 |
Complementing this is the standalone Text-to-Speech (TTS) service, which offers five distinct voices (Eve, Ara, Rex, Sal, and Leo) and is priced at $4.20 per 1 million characters.
For transcription needs, the Speech-to-Text (STT) API provides real-time streaming at $0.20 per hour, while batch processing is available at a discounted rate of $0.10 per hour.
To ensure security for client-side applications, xAI utilizes Ephemeral Tokens, allowing for secure WebSocket connections without exposing primary API keys.
Once created, these voices are private to the user’s team and can be used across all voice APIs by referencing a unique 8-character alphanumeric voice_id.
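The flow xAI describes, in which a server mints a short-lived token and the client uses it to open the realtime voice connection, is a common pattern across voice APIs. The sketch below illustrates that pattern only; the token endpoint, the WebSocket URL, and the way the token is presented are all assumptions for illustration, not documented xAI paths.

```python
# Illustrative client flow: exchange a primary API key (held server-side)
# for an ephemeral token, then open a realtime voice WebSocket with it.
# Both URLs below are assumed for illustration -- check xAI's voice docs.
import asyncio
import os

import requests
import websockets

API_KEY = os.environ["XAI_API_KEY"]

def mint_ephemeral_token() -> str:
    # Done on your own server so the primary key never reaches the client.
    resp = requests.post(
        "https://api.x.ai/v1/ephemeral-tokens",  # assumed path
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["token"]

async def talk(voice_id: str) -> None:
    token = mint_ephemeral_token()
    # Passing the token as a query parameter is an assumption; the real API
    # may expect a header or an auth message after connecting.
    url = f"wss://api.x.ai/v1/voice-agent?voice_id={voice_id}&token={token}"
    async with websockets.connect(url) as ws:
        await ws.send(b"...audio frame bytes...")
        reply = await ws.recv()
        print("received a reply of length", len(reply))

asyncio.run(talk("a1b2c3d4"))  # example 8-character voice_id
```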
For highly regulated sectors, xAI maintains production-ready standards, including SOC 2 Type II auditing, HIPAA eligibility for healthcare workloads, and GDPR compliance.
Aggressively low API pricing as a differentiator
The most aggressive aspect of the Grok 4.3 announcement is its pricing structure. Bindu Reddy, CEO of enterprise assistant startup Abacus AI, noted on X that the model is "as smart as Sonnet 4.6 and 5x cheaper and faster."
The standard API rates are set at $1.25 per million input tokens and $2.50 per million output tokens. This reflects a significant reduction in cost compared to its predecessor, Grok 4.2, with Artificial Analysis reporting an approximately 40% lower input price and 60% lower output price.
According to our calculations at VentureBeat, that places Grok 4.3 firmly in the lowest-cost half of all major foundation models, far closer to Chinese open-source offerings than to its U.S. proprietary rivals.
However, the “reasoning” nature of the model introduces a new billing category: Reasoning tokens.
These are tokens generated during the model's internal thinking process and are billed at the same rate as standard completion tokens. Effectively, users pay for the AI to "think" before it provides the final answer. xAI has also introduced several unique fee structures, sketched in rough form after the list below:
Prompt Caching: Repeated prompts are significantly cheaper, at $0.20 per million tokens, incentivizing developers to reuse context.
Tool Invocations: While token usage for tools is billed at standard rates, the act of invoking a tool carries a flat fee—$5.00 per 1,000 calls for Web Search or Code Execution, and $10.00 for File Attachments.
Usage Guideline Violation Fee: In a move that may set a new industry precedent, xAI charges a $0.05 fee for requests that are blocked by its safety filters before generation even begins.
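Put together, a single agentic request can touch several of these meters at once. Here is a rough sketch of how the fees compose, using only the rates quoted in this article; how cached tokens are counted and whether reasoning tokens bill exactly like completion tokens in every case are assumptions to verify against xAI's documentation.

```python
# Rough per-request billing sketch using the rates quoted in this article.
# How cached tokens are counted and how reasoning tokens are metered are
# assumptions here -- confirm against xAI's pricing documentation.

RATES = {
    "input": 1.25 / 1e6,         # USD per uncached input token
    "cached_input": 0.20 / 1e6,  # USD per cached (repeated) prompt token
    "output": 2.50 / 1e6,        # USD per completion token
    "reasoning": 2.50 / 1e6,     # billed like completion tokens, per the article
    "search_call": 5.00 / 1000,  # flat fee per web-search / code-execution call
    "file_call": 10.00 / 1000,   # flat fee per file-attachment call
}

def request_cost(input_tok, cached_tok, output_tok, reasoning_tok,
                 search_calls=0, file_calls=0):
    return (input_tok * RATES["input"]
            + cached_tok * RATES["cached_input"]
            + output_tok * RATES["output"]
            + reasoning_tok * RATES["reasoning"]
            + search_calls * RATES["search_call"]
            + file_calls * RATES["file_call"])

# Example: an agentic run reusing a 50K-token cached prompt, adding 20K of
# fresh context, "thinking" heavily, and invoking a handful of tools.
print(f"${request_cost(20_000, 50_000, 4_000, 30_000, search_calls=6, file_calls=1):.3f}")
```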
The model itself remains accessible via a standard commercial API, with xAI recommending that all developers migrate to grok-4.3 as their “most intelligent and fastest model”.
Third-party benchmark evaluations and analysis
The reception of Grok 4.3 has been polarized, depending largely on the specific use case. Professional benchmarkers and developers have highlighted a “stark gap” between the model’s domain-specific strengths and its general reasoning consistency.
According to independent AI evaluation firm Vals AI, Grok 4.3 has taken the top spot on several specialized indices. It currently ranks #1 on CaseLaw v2 (79.3% accuracy) and #1 on CorpFin.
This 25-point jump in legal reasoning over Grok 4.2 suggests that the "always-on reasoning" architecture is particularly well-suited for the dense, logical structures of law and finance.
Artificial Analysis corroborated this performance, noting a massive improvement in agentic tasks: the model scored an Elo of 1500 on the GDPval-AA benchmark, surpassing competitors like Gemini 3.1 Pro and GPT-5.4 mini.
Conversely, users focused on general-purpose agents and coding have highlighted deficiencies. They colorfully described the model as having "narcolepsy problems": in agentic simulations, it preferred to remain inactive for multiple simulated days rather than take the required actions.
The sentiment was echoed by Vals AI, which noted that while the model improved in some coding areas, it remains weak on general coding tasks and “struggles with difficult math problems,” scoring only 11% on ProofBench.
Should your enterprise use Grok 4.3?
The launch of Grok 4.3 represents a calculated bet by xAI that the market wants specialized brilliance and extreme cost efficiency over a perfectly balanced generalist.
By achieving a score of 53 on the Artificial Analysis Intelligence Index while remaining on the “Pareto frontier” of cost-per-intelligence, xAI is positioning itself as the “value” leader for enterprise applications in legal and financial tech.
The “always-on reasoning” is a double-edged sword. While it provides the depth needed to navigate complex case law, the community reports of “narcolepsy” suggest that a model that is always “thinking” may occasionally think itself into a state of paralysis, or at least a state of excessive caution that inhibits agentic action.
For developers, the decision to adopt Grok 4.3 will likely come down to the nature of their data. For those needing to process a million tokens of legal documents at a fraction of the cost of Claude 4.6 or GPT-5.5, Grok 4.3 is a clear front-runner.
For those building high-frequency autonomous agents or complex math solvers, the “narcolepsy” and coding regressions suggest that xAI’s latest model may still need a few more “tuning passes”.
As OpenRouter noted on X upon making the model live, the “large jump in agentic performance” at a lower price point is an undeniable milestone. Whether that performance can be sustained across all domains remains the primary question for the summer of 2026.
Engineers at NASA's Jet Propulsion Laboratory gathered around a special 26-foot vacuum chamber in February of last year to witness a prototype engine fire five times in a row. The temperature within the device skyrocketed past 5,000 degrees Fahrenheit, with a central tungsten electrode burning a brilliant white and an outer nozzle spewing an astounding crimson stream of lithium plasma into the vacuum of the chamber.
Those short bursts of fire boosted the engine to a clean 120 kilowatts, a level not even the most powerful US-built electric propulsion system had ever achieved. High-voltage electric currents rip through lithium vapor inside the engine and interact with a magnetic field, and the plasma blasts out of the nozzle at a breakneck pace. When everything works together, the result is a consistent thrust without any flames or explosions, just a smooth, continuous push that accumulates over months or even years in space. Lithium works so well for the job because it ionizes cleanly, can run at lower voltages than other choices, and allows the system to pack a big punch into a small space once everything is scaled up.
Senior research scientist James Polk of JPL is the driving force behind the operation, with assistance from colleagues at Princeton University and NASA's Glenn Research Center. They spent two and a half years planning and building the thruster with funds from the space nuclear propulsion program. Polk described the test as a "big stride forward" because the team not only demonstrated that the engine works but also hit its target power exactly as expected. The data acquired across those five test cycles is now being used to plan the next round of testing.
NASA has already flown electric propulsion on missions with much lower power outputs, such as the Psyche spacecraft. This design can deliver 25+ times more power than those units while using only a fraction of the propellant required for a chemical rocket. In general, these techniques can reduce fuel requirements by up to 90%, allowing spacecraft to launch much lighter and carry far more cargo or crew supplies.
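The fuel-savings claim follows directly from the rocket equation: the higher the exhaust velocity, the smaller the share of a vehicle's mass that has to be propellant. The figures below (a 6 km/s maneuver, a 450-second chemical engine, a 5,000-second-class plasma thruster) are illustrative assumptions rather than numbers from JPL, but they show how a reduction in the 85 to 90% range falls out of the math.

```python
# Back-of-envelope comparison of propellant fractions via the Tsiolkovsky
# rocket equation. Specific-impulse values are illustrative assumptions,
# not JPL's published figures.
import math

G0 = 9.81          # m/s^2
DELTA_V = 6_000.0  # m/s, a representative deep-space maneuver

def propellant_fraction(isp_seconds: float) -> float:
    """Fraction of initial vehicle mass that must be propellant."""
    exhaust_velocity = isp_seconds * G0
    return 1.0 - math.exp(-DELTA_V / exhaust_velocity)

chemical = propellant_fraction(450)    # roughly a chemical upper stage
plasma = propellant_fraction(5_000)    # roughly a lithium plasma thruster class
print(f"chemical: {chemical:.0%} of vehicle mass is propellant")
print(f"plasma:   {plasma:.0%} of vehicle mass is propellant")
print(f"reduction in propellant fraction: {(1 - plasma / chemical):.0%}")
```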
Eventually, nuclear reactors will provide enough electricity to keep the thrusters going over extended distances. A mission to Mars, for example, will most certainly require 2-4 megawatts to get the entire crew there. Once this configuration is in place, the transits will be much shorter because the spaceship will be able to keep chugging along steadily for the entire journey rather than coasting for the majority of it.
Of course, before any of this can happen, the team must first conquer a number of hurdles. The components must be able to sustain heat for thousands of hours without failing, and the engines must be capable of producing at least 500 kilowatts, if not more. The testbed they’ve set up in the vacuum chamber at JPL provides a strong foundation for them to address those difficulties one by one.
Apple has quietly raised the Mac mini's starting price to $799 after demand from developers building local AI tools cleared its shelves. Tim Cook says it could take months to catch up.
For five years, the Mac mini has been the cheapest way into Apple’s desktop ecosystem. Since the M4 refresh in late 2024, that price has been $599, an unusually aggressive figure for Cupertino, and one that turned the small aluminium box into a sleeper hit.
It became the recommended starter Mac, the home-server-of-choice for tinkerers, and, increasingly, the go-to local machine for developers running AI models on their own hardware.
As of Friday, the $599 Mac mini no longer exists.
Apple has discontinued the 256GB configuration of the M4 Mac mini and made the 512GB model, which sells for $799, the new starting point. Bloomberg reported the change first, citing Apple’s own product pages, with confirmations following from MacRumors, 9to5Mac, Macworld, and AppleInsider. The pricing on each specification has not changed; the entry rung has simply been removed.
In effect, the Mac mini is $200 more expensive to buy in its base form than it was a day earlier.
Apple’s reasoning was made unusually explicit on this week’s Q2 earnings call. Tim Cook, the company’s chief executive, attributed shortages of both the Mac mini and the more powerful Mac Studio to demand that had outpaced internal forecasts, and tied that demand directly to AI workloads.
Both machines, he said, are “amazing platforms for AI and agentic tools, and the customer recognition of that is happening faster than what we had predicted.”
That recognition has a specific shape. The Mac mini and Mac Studio share a feature that has, in 2026, become unexpectedly valuable: large amounts of unified memory directly accessible to the GPU and Neural Engine on Apple’s M-series chips.
For developers running local large language models, agentic tools that orchestrate multi-step tasks on a single machine, or compact research setups that would otherwise require cloud GPUs, that memory architecture is a meaningful advantage. A 64GB Mac Studio costs less than the cheapest Nvidia H100, runs quietly on a desk, and does not bill by the hour.
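As a rough illustration of why that unified memory matters, the back-of-envelope sketch below estimates how much RAM a model's weights occupy at different quantization levels. The 20% overhead allowance for the KV cache and runtime is a loose assumption, not a figure from Apple or any model vendor.

```python
# Back-of-envelope: memory needed to hold an LLM's weights locally.
# The 20% overhead allowance for KV cache / runtime is a loose assumption.

def weights_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    bytes_total = params_billion * 1e9 * (bits_per_weight / 8)
    return bytes_total * overhead / 1e9

for params in (8, 70):
    for bits in (16, 8, 4):
        print(f"{params:>3}B @ {bits:>2}-bit: ~{weights_gb(params, bits):.0f} GB")

# A 70B model at 4-bit (~42 GB) fits within a 64GB Mac Studio's unified
# memory; the same model at 16-bit (~168 GB) does not.
```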
The result has been the kind of run on inventory that hardware companies usually associate with launches, not nine-month-old products. Many higher-RAM configurations on Apple’s online store are listed as currently unavailable. The 16GB, 512GB Mac mini, the new entry-level model, is, by some retail accounts, backordered into June.
Behind the consumer-facing story is a less visible one about supply. The same advanced memory chips that ship in Mac minis and Mac Studios are also a critical input for the AI server farms being built by hyperscalers, and the imbalance between data-centre demand and global memory production has been intensifying for more than a year.
DRAM prices have risen sharply, and analysts have begun warning that consumer electronics manufacturers will increasingly find themselves second in line behind cloud providers willing to pay above market.
Cook acknowledged the constraint on the call, telling investors that supply-demand balance for both machines is "several months" away. He stopped short of predicting further price changes, but Notebookcheck and others have noted that the pattern (AI demand absorbs memory, memory becomes scarcer, consumer prices rise) is unlikely to be unique to Apple.
There is also a US manufacturing dimension to the story. The M4 Mac mini is one of the products Apple has begun assembling in part within the United States, and analysts at Technetbook and elsewhere have argued that some of the cost pressure on the entry tier reflects that shift rather than chip availability alone. Apple has not commented publicly on the relative weights of the two factors.
For most consumers, the change is a soft price rise dressed up as a product simplification. The 512GB Mac mini that used to be a $200 upgrade is now the floor. Anyone who would have bought the 256GB version (students, second-machine buyers, light office users) now pays more for storage they may not need.
For the segment Apple appears to be courting, the developer running Claude- or Llama-class models locally, the new entry tier is closer to a sensible minimum. 256GB of storage was always cramped for that workflow, and 512GB combined with 16GB of unified memory is a more honest starting point.
Either way, the broader signal is harder to miss. Apple, a company that historically holds prices steady through chip cycles, has just lifted a starting price by a third in response to AI-driven demand. The Mac mini is no longer a sleeper. It is, briefly and inconveniently, an AI workstation.
After the Foucault pendulum at the Houston Museum of Natural Science stopped working a while back after maintenance on the building, workers set out to determine what was wrong with the mechanism that normally keeps it in motion. Fortunately, it turned out that all they had to do was fiddle with some knobs to get everything dialed back in proper-like.
When we previously covered this dire event, it was claimed that this was a one-off system, hacked together by some random bloke. But as can be seen in the video, and as detailed further in its comments, the reality is far more interesting.
This particular Foucault pendulum is one of many that were created by the California Academy of Sciences, with hundreds of them installed throughout the US and possibly elsewhere. That said, since a pendulum of any description will never be a perpetual motion device, the electromagnet installed near the top of the installation has to carefully add back the kinetic energy lost to friction as the pendulum swings around.
Sadly the video doesn't go into much detail on what exactly was misconfigured with this particular pendulum. Keeping a weight at the end of a long cable moving at a set velocity is a tricky business, so it's little wonder that getting some parameters wrong would engage and disengage the electromagnet at the wrong times and make the pendulum stop swinging.
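For the curious, the control problem is easy to caricature: a damped pendulum loses a little energy every swing, and the drive coil has to put just enough back, at the right phase, to hold the amplitude steady. The toy simulation below is nothing like the museum installation's real controller; it's only a sketch of why a mis-timed pull brakes the swing instead of sustaining it.

```python
# Toy 1D caricature of a magnetically driven pendulum: a coil at the center
# gives the bob a small pull whenever it passes nearby. Correctly phased
# (pull while the bob approaches), the kick replaces friction losses;
# mis-phased (pull while it recedes), it actively brakes the swing.
# A sketch only -- not the museum installation's actual controller.
import math

def final_amplitude(fire_while_approaching: bool, steps=300_000, dt=0.001):
    length, g, damping, pull = 10.0, 9.81, 0.02, 0.05
    theta, omega, amplitude = 0.1, 0.0, 0.0
    for i in range(steps):
        alpha = -(g / length) * math.sin(theta) - damping * omega
        approaching = theta * omega < 0                  # moving toward the coil
        if abs(theta) < 0.03 and approaching == fire_while_approaching:
            alpha -= pull * math.copysign(1.0, theta)    # magnet pulls toward center
        omega += alpha * dt
        theta += omega * dt
        if i > steps - 20_000:                           # track late-time swing size
            amplitude = max(amplitude, abs(theta))
    return amplitude

print("well-timed drive, final amplitude:", round(final_amplitude(True), 3))
print("mis-timed drive,  final amplitude:", round(final_amplitude(False), 3))
```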
Coatue, one of the biggest names in venture capital and hedge funds, has a new plan to generate bigger returns on AI beyond its sizable stakes in Anthropic, OpenAI, xAI, and data center companies like Singapore’s DayOne and CoreWeave.
It has launched a venture called Next Frontier to buy up land near large power sources with the goal of turning those parcels into data centers, the Wall Street Journal reports. Sources tell the WSJ that Next Frontier has already signed a joint venture with Fluidstack, a cloud infrastructure startup that penned a $50 billion deal to build data centers for Anthropic. (Coatue did not respond to a request for comment.)
Although the U.S. already has 3,000 data centers, more than 1,500 new ones are in various stages of being built, according to Pew Research, most of them in rural areas. The frenzy is enticing land speculation and data center financing projects from lots of players, ranging from Blackstone to Kevin O’Leary from “Shark Tank.”
Meta has acquired humanoid robotics startup Assured Robot Intelligence (ARI) for an undisclosed sum, the social media giant said.
“We acquired Assured Robot Intelligence, a company at the frontier of robotic intelligence designed to enable robots to understand, predict, and adapt to human behaviors in complex and dynamic environments,” a Meta spokesperson told TechCrunch in an emailed statement.
ARI’s team, including its co-founders, will join Meta’s AI unit, the Superintelligence Labs research division. ARI had raised an undisclosed seed round from AI seed firm AIX Ventures.
The startup was building foundation models for humanoid robots to perform all types of physical labor such as household chores. Co-founder Xiaolong Wang was previously a researcher at Nvidia, and an associate professor at UC San Diego, with a list of prestigious awards to his name. Co-founder Lerrel Pinto, who previously taught at NYU and co-founded the kid-size humanoid startup Fauna Robotics before Amazon snapped it up last month, has also won a string of prestigious awards.
ARI will help Meta with its humanoid ambitions. "This team, led by Lerrel Pinto and Xiaolong Wang, will bring a deep expertise in how we can design our models and frontier capabilities for robot control and self-learning to whole-body humanoid control," the spokesperson added.
Even if Meta never releases a consumer humanoid product, many AI experts these days believe that the path to artificial general intelligence (AGI) — the theoretical point at which AI reaches or surpasses human-level intelligence across all domains — will require training AI models in the physical world, where robots learn through direct interaction rather than data alone.
The ARI and Fauna deals reflect a broader industry sprint — one where forecasts for the humanoid robotics market vary wildly, from Goldman Sachs' projection of $38 billion by 2035 to Morgan Stanley's estimate of $5 trillion by 2050 — a spread that captures both the enormous potential and the uncertainty around tech that's still finding its footing.
Apple’s earnings call revealed a few things that make it easy to see what products we can and can’t expect between now and September. The “not coming” list is much longer than the “is probably coming” one.
The Mac is supply-constrained, the iPad isn’t being updated, and iPhones don’t release again until the fall. So, there’s not much left that could arrive in the intervening months.
The Mac mini, Mac Studio, and iMac are all awaiting their M5 upgrades, but Apple's supply chain is already backed up quite a bit. You can't purchase an M4 Mac mini right now even if you want to.
Memory prices and scarce parts could mean a longer-than-usual wait for new Macs. It’s pretty safe to say based on Tim Cook’s remarks during earnings that there won’t be any through the summer.
The iPad is a gimme because Apple all but said a new one isn't coming, without directly saying so. During the earnings call, Apple made it clear that the iPad faces a tough year-over-year compare, since the iPad with A16 was released a year ago.
So if you’re holding your breath for that new budget iPad with A19 and Apple Intelligence, you’ll be waiting a little while longer.
We’ve already got iPhone 17e, so there won’t be any new iPhones until September. Also, Apple Watch won’t get touched until then either.
iPhone, Apple Watch, and AirPods are done with updates for now
AirPods and AirPods Pro tend to be announced alongside iPhone too. AirPods Pro were just upgraded in September 2025, but if AirPods 5 are ready, those likely won’t be announced until the iPhone event.
Apple Vision Pro just got the M5 chip in October 2025 after about 20 months on the market, so that won’t be touched anytime soon. And no, that product line hasn’t been abandoned even if rumors attempted to say as much.
There is one product category Apple could still update in the meantime, thanks to its unpredictable release cycle.
Apple Home products are always possible
The Apple TV 4K is still rocking the A15 processor that first debuted in the iPhone 13 in 2021. It is still supported by Apple’s modern operating systems, but at nearly 5 years old, it’s time for an update.
It’s time for Apple to update the HomePods
Since Apple TV 4K is the brains of an Apple Home, it might make sense to make that product capable of Apple Intelligence. I know I’d appreciate the upgrade to my new smart home.
It might not be entirely relevant, but watchOS doesn’t even support the S5 chipset anymore. While HomePods run a version of tvOS, that does indicate exactly how old these chipsets are.
It might be time for Apple to do a basic chipset upgrade of the HomePod and HomePod mini. While they likely won’t support Apple Intelligence natively, it would do them good to have modern networking standards for use in Apple Home.
Those are the only Apple Home products Apple offers today, but there are some rumored products too.
Home Hub and cameras are unlikely
Apple is expected to debut what we’ve been calling the Home Hub tablet at some point in 2026. There are also Apple security cameras in the pipeline, or at least a doorbell, but that release window isn’t known.
Apple security cameras, doorbell, and Home Hub are all waiting on AI
WWDC 2026 is expected to be filled with announcements regarding Apple Intelligence. One of the biggest announcements will be about Siri and its new Apple Foundation Model backend.
That Siri upgrade is what the Home Hub has been waiting for. However, while Apple could show off the Home Hub during WWDC to demonstrate AI advancements, it is unlikely to put it on sale until later.
Since the Home Hub product and Apple doorbell don't have existing Apple equivalents, the company can safely pre-announce them at any point. I believe WWDC would be the best place to demonstrate the Home Hub, but the already-packed event may not have room for it.
Likely nothing until the fall
Since Apple has a bundle of smart home products waiting in the wings, it is safe to assume there might be an Apple Home-focused event in the future. So, even if Apple TV and the HomePods are ready to go, Apple might hold off on them for now.
If you’ve been keeping count, that means we should all have little to no expectations for hardware before the iPhone event in September. While many are likely waiting for their pet product to get an update, they’ll just have to make do with WWDC instead.
The OS 27 cycle will be an important one for Apple. It will be among the first things released to the public under the new CEO John Ternus.
An odd rumor led to premature calls of Apple Vision Pro’s death, rumors of AI and Home Hubs abound, and Apple’s App Store troubles continue on the AppleInsider Podcast.
AppleInsider Managing Editor Mike Wuerthele joins host Wesley Hilliard as a guest this week to catch up on CEO transition news. It’s clear that the silly coverage surrounding the upcoming transition is already becoming exhausting.
The Apple vs Epic trial continues to be an ongoing event that seems to have no end. This time, Apple has to go to the Supreme Court and Circuit Courts at once.
Your hosts dive into the odd Apple Vision Pro rumor that said Apple had given up on the product. They discuss why this likely isn’t the case and how the Vision product line will continue forward.
There’s also a lot to discuss around incoming products like the Home Hub and security cameras. Wes asks Mike if Apple makes too many products.
The show concludes with a discussion around WWDC and Apple’s AI efforts.
BONUS: Subscribe via Patreon or Apple Podcasts to hear AppleInsider+, the extended edition. This week, Wes and Mike discuss their work at AppleInsider and some odds and ends surrounding that.
Links from the show:
More AppleInsider podcasts
Tune in to our Smart Home Insider podcast covering the latest news, products, apps and everything HomeKit related. Subscribe in Apple Podcasts, Overcast, or just search for HomeKit Insider wherever you get your podcasts.
Those interested in sponsoring the show can reach out to us at: [email protected].
Keep up with everything Apple in the weekly AppleInsider Podcast. Just say, “Hey, Siri,” to your HomePod mini and ask for these podcasts, and our latest HomeKit Insider episode too. If you want an ad-free main AppleInsider Podcast experience, you can support the AppleInsider podcast by subscribing for $5 per month through Apple’s Podcasts app, or via Patreon if you prefer any other podcast player.
For years, California’s streets have hosted a quiet double standard: a human driver caught making an illegal U-turn got a ticket, but a driverless car doing the same thing got away with it, with perhaps a call to the manufacturer. That changes now.
The California DMV has announced what it calls the most important autonomous vehicle regulations in the United States. For the first time, self-driving cars can now be formally cited for breaking traffic laws (via Futurism).
What exactly can authorities do now?
Quite a lot, actually. Under the new rules, authorities can issue a “Notice of AV Noncompliance” directly to manufacturers whenever their autonomous vehicle (AV) commits a moving violation. All the notices add up as a formal paper trail that feeds into the DMV’s permit review process.
Beyond traffic citations, AV companies are required to respond to first-responder calls within 30 seconds, provide access to manual override systems, and comply with emergency geofencing directives (clearing restricted zones within two minutes of being notified).
If self-driving carmakers fail to comply, they risk suspension of permits, fleet size restrictions, speed caps, and geographic operation limits, all of which could have a negative effect on the companies’ operations and revenue.
Does this affect self-driving trucks, too?
The same set of regulations also opens California roads to heavy-duty self-driving vehicles for the first time, with new permits now available for trucks weighing over 10,000 pounds. Aurora, which has been operating autonomous freight trucks in Texas, has welcomed the development.
What's good is that AV companies have until summer 2026 to comply with the new requirements, after which the DMV's enforcement kicks in. Given that robotaxi services in America are scaling quickly, establishing a citation system tied directly to operating permits could keep things in check.
The regulations, in totality, were partly inspired by a September 2025 incident in San Bruno, where police were powerless in front of a Waymo that had allegedly made an illegal U-turn, and by repeated cases of robotaxis clogging emergency response routes across San Francisco.