Trump FTC Threatens Apple With A Fake Investigation Into Its Nonexistent ‘Liberal News Bias’

from the fake-investigations,-real-harm dept

Here we go again.

The Trump FTC has threatened Apple and CEO Tim Cook with a fake investigation claiming that Apple News doesn’t do a good enough job coddling right wing, Trump-friendly ideology.

The announcement and associated letter pretend that Apple is violating Section 5 of the FTC Act (which “prohibits unfair or deceptive acts or practices”) because it’s not giving right wing propaganda outlets the same visibility as other media in the Apple News feed (which the letter falsely claims are “left wing”):

“Recently, there have been reports that Apple News has systematically promoted news articles from left-wing news outlets and suppressed news articles from more conservative publications. Indeed, multiple studies have found that in recent months Apple News has chosen not to feature a single article from an American conservative-leaning news source, while simultaneously promoting hundreds of articles from liberal publications.”

This is all gibberish and bullshit. Their primary evidence is a shitty article from Rupert Murdoch’s right wing rag The New York Post, which in turn leans on a laughable study by the right wing Media Research Center. That “study” looked at a small sample size of 620 articles promoted by Apple News, randomly and arbitrarily declared 440 of them as having a “liberal bias,” and then concluded Apple was up to no good.

Among the outlets derided as “liberal” sit papers like the Washington Post, which has been tripping over itself to appease Trump and has become, very obviously, more right wing and corporatist than ever under owner Jeff Bezos, who recently vastly overpaid Donald Trump’s wife to make a “documentary” about her.

The FTC’s fake investigation obviously violates the First Amendment. Even if it were true that Apple was biased in what sources it featured in Apple News (which the evidence doesn’t actually support), that’s… still legal, thanks to Apple’s First Amendment rights. If the Biden FTC had gone after Fox News for “anti-liberal bias,” everyone (including many Democrats) would call out the obvious First Amendment problem. But even ignoring the First Amendment problems with all of this, claiming that any of it is covered by Section 5 is laughable. I’ve watched for years as the FTC has struggled to legally defend genuine investigations into obvious instances of corporate fraud and still come out on the losing end due to the murky construction of the law.

This inquiry has no legal legs to stand on.

I suspect FTC boss Andrew Ferguson is leaving soon and wanted an opportunity to put his name in lights across the right wing propaganda echoplex as somebody who is “doing something to combat the wokes” with a phony investigation, much like the FCC’s Brendan Carr does. It’s likely this is mostly being driven by partisan ambition.

There doesn’t need to be any evidence that would hold up legally (or, hell, even an actual investigation); the point is to have the growing parade of right-wing friendly media make it appear as if key MAGA zealots are doing useful things in service of the cause. And to threaten companies with costly and pointless headaches if they don’t pathetically bend the knee to Trumpism (which Cook has been very good at so far).

So while the “investigation” may be completely bogus, the threat of it still has a dangerous impact on free expression in a country staring down the barrel of authoritarianism. Somewhere, Tim Cook is shopping around for another shiny bauble to throw at the feet of our mad, idiot king.

Here’s where I’ll mention that if you ask an actual, objective media scholar here on planet Earth, they’ll be quick to inform you that U.S. media and journalism pretty consistently has a center-right, corporatist bias.

As the ad-driven U.S. media consolidates under corporate control, it largely functions less and less as a venue for real journalism and informed democratic consensus, and more as either an infotainment distraction mechanism to keep the plebs busy, or as a purveyor of corporate-friendly agitprop that coddles the narratives surrounding unchecked wealth accumulation by the extraction class.

From the Washington Post to CBS, from Twitter to TikTok, to consolidation among local right wing broadcasters, the U.S. right wing is very clearly buying up U.S. media in the pursuit of the same sort of autocratic state television we’ve seen arise in countries like Russia and Hungary.

This effort is propped up by an endless barrage of claims that the already corporatist, center-right U.S. press is secretly left wing, and that the only solution is to shift the editorial Overton window even further to the right. These folks genuinely will not be satisfied until the entirety of U.S. media resembles the sort of fawning, mindless agitprop we see in countries like North Korea.

This is not hyperbole. They’re building it right in front of your noses. It’s yet to be seen if fans of free speech, democratic norms, and objective reality can muster any sort of useful resistance.

Filed Under: andrew ferguson, apple, bias, first amendment, free speech, ftc, journalism, media, propaganda, section 5, tim cook

Companies: apple

Data Center Sustainability Metrics: Hidden Emissions

In 2024, Google claimed that its data centers are 1.5x more energy efficient than the industry average. In 2025, Microsoft committed billions to nuclear power for AI workloads. The data center industry tracks power usage effectiveness to three decimal places and optimizes water usage intensity with machine precision. We report direct emissions and purchased-energy emissions with religious fervor.

These are laudable advances, but such metrics account for only 30 percent of total emissions from the IT sector. The majority of emissions come not directly from data centers or the energy they use, but from the end-user devices that actually access those data centers, from manufacturing the hardware, and from software inefficiencies. We are frantically optimizing less than a third of the IT sector’s environmental impact, while the bulk of the problem goes unmeasured.

Incomplete regulatory frameworks are part of the problem. In Europe, the Corporate Sustainability Reporting Directive (CSRD) now requires 11,700 companies to report emissions using these incomplete frameworks. The next phase of the directive, covering 40,000+ additional companies, was originally scheduled for 2026 (but is likely delayed to 2028). Meanwhile, the international standards body responsible for IT sustainability metrics (ISO/IEC JTC 1/SC 39) is conducting an active revision of its standards through 2026, with a key plenary meeting in May 2026.

The time to act is now. If we don’t fix the measurement frameworks, we risk locking in incomplete data collection and optimizing a fraction of what matters for the next 5 to 10 years, before the next major standards revision.

The limited metrics

Walk into any modern data center and you’ll see sustainability instrumentation everywhere. Power usage effectiveness (PUE) monitors track every watt. Water usage effectiveness (WUE) systems measure water consumption down to the gallon. Sophisticated monitoring captures everything from server utilization to cooling efficiency to renewable energy percentages.

But here’s what those measurements miss: End-user devices globally emit 1.5 to 2 times more carbon than all data centers combined, according to McKinsey’s 2022 report. The smartphones, laptops, and tablets we use to access those ultra-efficient data centers are the bigger problem.

On the conservative end of the range from McKinsey’s report, devices emit 1.5 times as much as data centers. That means that data centers make up 40 percent of total IT emissions, while devices make up 60 percent.

On top of that, approximately 75 percent of device emissions occur not during use, but during manufacturing—this is so-called embodied carbon. For data centers, only 40 percent is embodied carbon, and 60 percent comes from operations (as measured by PUE).

Putting this together, data center operations, as measured by PUE, account for only 24 percent of the total emissions. Data center embodied carbon is 16 percent, device embodied carbon is 45 percent, and device operation is 15 percent.
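To make the arithmetic behind these shares explicit, here is a minimal Python sketch that reproduces the split from the figures above (devices emitting 1.5 times as much as data centers, 75 percent of device emissions embodied, 40 percent of data center emissions embodied). The inputs are the estimates quoted in this article, not measured data.

```python
# Reproduce the emissions split described above.
# Inputs are the article's estimates, normalized so total IT emissions = 1.0.

DEVICE_TO_DC_RATIO = 1.5      # devices emit 1.5x what data centers do (conservative end)
DEVICE_EMBODIED_SHARE = 0.75  # 75% of device emissions come from manufacturing
DC_EMBODIED_SHARE = 0.40      # 40% of data center emissions are embodied carbon

dc_total = 1.0 / (1.0 + DEVICE_TO_DC_RATIO)    # 0.40 of total IT emissions
device_total = DEVICE_TO_DC_RATIO * dc_total   # 0.60 of total IT emissions

breakdown = {
    "data center operations (PUE-tracked)": dc_total * (1.0 - DC_EMBODIED_SHARE),         # 0.24
    "data center embodied carbon":          dc_total * DC_EMBODIED_SHARE,                 # 0.16
    "device embodied carbon":               device_total * DEVICE_EMBODIED_SHARE,         # 0.45
    "device operation":                     device_total * (1.0 - DEVICE_EMBODIED_SHARE), # 0.15
}

for category, share in breakdown.items():
    print(f"{category}: {share:.0%}")
```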

Under the EU’s current CSRD framework, companies must report their emissions in three categories: direct emissions from owned sources, indirect emissions from purchased energy, and a third category for everything else.

This “everything else” category does include device emissions and embodied carbon. However, those emissions are reported as aggregate totals broken down by accounting category—Capital Goods, Purchased Goods and Services, Use of Sold Products—but not by product type. How much comes from end-user devices versus data center infrastructure, or employee laptops versus network equipment, remains murky and therefore unoptimized.

Embodied carbon and hardware reuse

Manufacturing a single smartphone generates approximately 50 kg CO2 equivalent (CO2e). For a laptop, it’s 200 kg CO2e. With 1 billion smartphones replaced annually, that’s 50 million tonnes of CO2e per year just from smartphone manufacturing, before anyone even turns them on. On average, smartphones are replaced every 2 years, laptops every 3 to 4 years, and printers every 5 years. Data center servers are replaced approximately every 5 years.

There are programs geared towards reusing old components that are still functional and integrating them into new servers. GreenSKUs and similar initiatives show 8 percent reductions in embodied carbon are achievable. But these remain pilot programs, not systematic approaches. And critically, they’re measured only in data center context, not across the entire IT stack.

Imagine applying the same circular economy principles to devices. With over 2 billion laptops in existence globally and 2-3-year replacement cycles, even modest lifespan extensions create massive emission reductions. Extending smartphone lifecycles to 3 years instead of 2 would reduce annual manufacturing emissions by 33 percent. At scale, this dwarfs data center optimization gains.
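A quick back-of-the-envelope sketch, using only the figures quoted above (roughly 50 kg CO2e per smartphone, about 1 billion replacements per year, and a 2-year replacement cycle), shows where the 50 million tonnes and the 33 percent reduction come from. These are illustrative estimates, not a lifecycle assessment.

```python
# Back-of-the-envelope: annual manufacturing emissions from smartphone replacement,
# and the effect of stretching the replacement cycle from 2 years to 3.

KG_CO2E_PER_PHONE = 50          # embodied carbon per smartphone (article's estimate)
PHONES_REPLACED_PER_YEAR = 1e9  # ~1 billion replacements annually at a 2-year cycle

annual_emissions_t = KG_CO2E_PER_PHONE * PHONES_REPLACED_PER_YEAR / 1000
print(f"Current: {annual_emissions_t / 1e6:.0f} Mt CO2e per year")  # ~50 Mt

# Stretching the cycle from 2 to 3 years means only 2/3 as many phones are
# manufactured per year, so annual embodied emissions fall by one third.
extended_emissions_t = annual_emissions_t * (2 / 3)
reduction = 1 - extended_emissions_t / annual_emissions_t
print(f"With a 3-year cycle: {extended_emissions_t / 1e6:.0f} Mt CO2e per year "
      f"({reduction:.0%} reduction)")
```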

Yet data center reuse gets measured, reported, and optimized. Device reuse doesn’t, because the frameworks don’t require it.

The invisible role of software

Leading load balancer infrastructure across IBM Cloud, I see how software architecture decisions ripple through energy consumption. Inefficient code doesn’t just slow things down—it drives up both data center power consumption and device battery drain.

For example, University of Waterloo researchers showed that they could reduce data center energy use by 30 percent by changing just 30 lines of code. From my perspective, this result is not an anomaly—it’s typical. Bad software architecture forces unnecessary data transfers, redundant computations, and excessive resource use. But unlike data center efficiency, there’s no commonly accepted metric for software efficiency.

This matters more now than ever. With AI workloads driving massive data center expansion—projected to consume 6.7-12 percent of total U.S. electricity by 2028, according to Lawrence Berkeley National Laboratory—software efficiency becomes critical.

What needs to change

The solution isn’t to stop measuring data center efficiency. It’s to measure device sustainability with the same rigor. Specifically, standards bodies (particularly ISO/IEC JTC 1/SC 39 WG4: Holistic Sustainability Metrics) should extend frameworks to include device lifecycle tracking, software efficiency metrics, and hardware reuse standards.

To track device lifecycles, we need standardized reporting of device embodied carbon, broken out separately by device. One aggregate number in an “everything else” category is insufficient. We need specific device categories with manufacturing emissions and replacement cycles visible.

To include software efficiency, I advocate developing a PUE-equivalent for software, such as energy per transaction, per API call, or per user session. This needs to be a reportable metric under sustainability frameworks so companies can demonstrate software optimization gains.
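As one illustration of what a PUE-equivalent for software could look like, the sketch below computes energy per API call from measured server power and a request count over the same window. The function, inputs, and joules-per-call unit are my own assumptions for illustration; no such metric is standardized today.

```python
# Hypothetical "software efficiency" metric: energy per API call.
# Inputs would come from power monitoring and request logs over the same window.

def energy_per_request(avg_power_watts: float, window_seconds: float,
                       requests_served: int) -> float:
    """Return joules consumed per request over a measurement window."""
    total_joules = avg_power_watts * window_seconds
    return total_joules / requests_served

# Example: a service drawing an average of 450 W over one hour while serving
# 1.2 million API calls.
j_per_call = energy_per_request(avg_power_watts=450,
                                window_seconds=3600,
                                requests_served=1_200_000)
print(f"{j_per_call:.2f} J per API call")  # ~1.35 J/call in this example
```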

To encourage hardware reuse, we need to systematize reuse metrics across the full IT stack—servers and devices. This includes tracking repair rates, developing large-scale refurbishment programs, and tracking component reuse with the same detail currently applied to data center hardware.

To put it all together, we need a unified IT emission-tracking dashboard. CSRD reporting should show device embodied carbon alongside data center operational emissions, making the full IT sustainability picture visible at a glance.

These aren’t radical changes—they’re extensions of measurement principles already proven in data center context. The first step is acknowledging what we’re not measuring. The second is building the frameworks to measure it. And the third is demanding that companies report the complete picture—data centers and devices, servers and smartphones, infrastructure and software.

Because you can’t fix what you can’t see. And right now, we’re not seeing 70 percent of the problem.

Anthropic’s Sonnet 4.6 matches flagship AI performance at one-fifth the cost, accelerating enterprise adoption

Anthropic on Tuesday released Claude Sonnet 4.6, a model that amounts to a seismic repricing event for the AI industry. It delivers near-flagship intelligence at mid-tier cost, and it lands squarely in the middle of an unprecedented corporate rush to deploy AI agents and automated coding tools.

The model is a full upgrade across coding, computer use, long-context reasoning, agent planning, knowledge work, and design. It features a 1M token context window in beta. It is now the default model in claude.ai and Claude Cowork, and pricing holds steady at $3/$15 per million tokens — the same as its predecessor, Sonnet 4.5.

That pricing detail is the headline that matters most. Anthropic’s flagship Opus models cost $15/$75 per million tokens — five times the Sonnet price. Yet performance that would have previously required reaching for an Opus-class model — including on real-world, economically valuable office tasks — is now available with Sonnet 4.6. For the thousands of enterprises now deploying AI agents that make millions of API calls per day, that math changes everything.

Anthropic’s computer use scores have nearly quintupled in 16 months. The company’s latest model, Sonnet 4.6, scored 72.5 percent on the OSWorld-Verified benchmark, up from 14.9 percent when the capability first launched in October 2024. (Source: Anthropic)

Why the cost of running AI agents at scale just dropped dramatically

To understand the significance of this release, you need to understand the moment it arrives in. The past year has been dominated by the twin phenomena of “vibe coding” and agentic AI. Claude Code — Anthropic’s developer-facing terminal tool — has become a cultural force in Silicon Valley, with engineers building entire applications through natural-language conversation. The New York Times profiled its meteoric rise in January. The Verge recently declared that Claude Code is having a genuine “moment.” OpenAI, meanwhile, has been waging its own offensive with Codex desktop applications and faster inference chips.

The result is an industry where AI models are no longer evaluated in isolation. They are evaluated as the engines inside autonomous agents — systems that run for hours, make thousands of tool calls, write and execute code, navigate browsers, and interact with enterprise software. Every dollar spent per million tokens gets multiplied across those thousands of calls. At scale, the difference between $15 and $3 per million input tokens is not incremental. It is transformational.

The benchmark table Anthropic released paints a striking picture. On SWE-bench Verified, the industry-standard test for real-world software coding, Sonnet 4.6 scored 79.6% — nearly matching Opus 4.6’s 80.8%. On agentic computer use (OSWorld-Verified), Sonnet 4.6 scored 72.5%, essentially tied with Opus 4.6’s 72.7%. On office tasks (GDPval-AA Elo), Sonnet 4.6 actually scored 1633, surpassing Opus 4.6’s 1606. On agentic financial analysis, Sonnet 4.6 hit 63.3%, beating every model in the comparison, including Opus 4.6 at 60.1%.

These are not marginal differences. In many of the categories enterprises care about most, Sonnet 4.6 matches or beats models that cost five times as much to run. An enterprise running an AI agent that processes 10 million tokens per day was previously forced to choose between inferior results at lower cost or superior results at rapidly scaling expense. Sonnet 4.6 largely eliminates that trade-off.
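To see how that trade-off plays out, here is a rough cost sketch for an agent processing 10 million tokens per day at the published per-million-token prices ($3 input / $15 output for Sonnet, $15 / $75 for Opus). The 80/20 input/output split is my own assumption for illustration.

```python
# Rough daily/annual cost comparison for an agent that processes 10M tokens/day.
# Prices are per million tokens as quoted in the article; the input/output split
# is an assumed 80/20 for illustration.

DAILY_TOKENS = 10_000_000
INPUT_SHARE = 0.8   # assumption: 80% of tokens are input (prompts, context)
OUTPUT_SHARE = 0.2

PRICES = {                    # (input $/M tokens, output $/M tokens)
    "Sonnet 4.6": (3, 15),
    "Opus 4.6":   (15, 75),
}

for model, (in_price, out_price) in PRICES.items():
    daily = (DAILY_TOKENS * INPUT_SHARE / 1e6) * in_price \
          + (DAILY_TOKENS * OUTPUT_SHARE / 1e6) * out_price
    print(f"{model}: ${daily:,.0f}/day, ${daily * 365:,.0f}/year")

# Under these assumptions: Sonnet ~$54/day (~$19.7K/year); Opus ~$270/day (~$98.6K/year).
```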

In Claude Code, early testing found that users preferred Sonnet 4.6 over Sonnet 4.5 roughly 70% of the time. Users even preferred Sonnet 4.6 to Opus 4.5, Anthropic’s frontier model from November, 59% of the time. They rated Sonnet 4.6 as significantly less prone to over-engineering and “laziness,” and meaningfully better at instruction following. They reported fewer false claims of success, fewer hallucinations, and more consistent follow-through on multi-step tasks.

Anthropic’s Sonnet 4.6, a mid-tier model, matches or approaches the performance of the company’s flagship Opus line across most benchmark categories — and frequently outperforms rival models from Google and OpenAI. (Source: Anthropic)

How Claude’s computer use abilities went from ‘experimental’ to near-human in 16 months

One of the most dramatic storylines in the release is Anthropic’s progress on computer use — the ability of an AI to operate a computer the way a human does, clicking a mouse, typing on a keyboard, and navigating software that lacks modern APIs.

When Anthropic first introduced this capability in October 2024, the company acknowledged it was “still experimental — at times cumbersome and error-prone.” The numbers since then tell a remarkable story: on OSWorld, Claude Sonnet 3.5 scored 14.9% in October 2024. Sonnet 3.7 reached 28.0% in February 2025. Sonnet 4 hit 42.2% by June. Sonnet 4.5 climbed to 61.4% in October. Now Sonnet 4.6 has reached 72.5% — nearly a fivefold improvement in 16 months.

This matters because computer use is the capability that unlocks the broadest set of enterprise applications for AI agents. Almost every organization has legacy software — insurance portals, government databases, ERP systems, hospital scheduling tools — that was built before APIs existed. A model that can simply look at a screen and interact with it opens all of these to automation without building bespoke connectors.

Jamie Cuffe, CEO of Pace, said Sonnet 4.6 hit 94% on their complex insurance computer use benchmark, the highest of any Claude model tested. “It reasons through failures and self-corrects in ways we haven’t seen before,” Cuffe said in a statement sent to VentureBeat. Will Harvey, co-founder of Convey, called it “a clear improvement over anything else we’ve tested in our evals.”

The safety dimension of computer use also got attention. Anthropic noted that computer use poses prompt injection risks — malicious actors hiding instructions on websites to hijack the model — and said its evaluations show Sonnet 4.6 is a major improvement over Sonnet 4.5 in resisting such attacks. For enterprises deploying agents that browse the web and interact with external systems, that hardening is not optional.

Enterprise customers say the model closes the gap between Sonnet and Opus pricing tiers

The customer reaction has been unusually specific about cost-performance dynamics. Multiple early testers explicitly described Sonnet 4.6 as eliminating the need to reach for the more expensive Opus tier.

Caitlin Colgrove, CTO of Hex Technologies, said the company is moving the majority of its traffic to Sonnet 4.6, noting that with adaptive thinking and high effort, “we see Opus-level performance on all but our hardest analytical tasks with a more efficient and flexible profile. At Sonnet pricing, it’s an easy call for our workloads.”

Ben Kus, CTO of Box, said the model outperformed Sonnet 4.5 in heavy reasoning Q&A by 15 percentage points across real enterprise documents. Michele Catasta, President of Replit, called the performance-to-cost ratio “extraordinary.” Ryan Wiggins of Mercury Banking put it more bluntly: “Claude Sonnet 4.6 is faster, cheaper, and more likely to nail things on the first try. That combination was a surprising combination of improvements, and we didn’t expect to see it at this price point.”

The coding improvements resonate particularly given Claude Code’s dominance in the developer tools market. David Loker, VP of AI at CodeRabbit, said the model “punches way above its weight class for the vast majority of real-world PRs.” Leo Tchourakov of Factory AI said the team is “transitioning our Sonnet traffic over to this model.” GitHub’s VP of Product, Joe Binder, confirmed the model is “already excelling at complex code fixes, especially when searching across large codebases is essential.”

Brendan Falk, Founder and CEO of Hercules, went further: “Claude Sonnet 4.6 is the best model we have seen to date. It has Opus 4.6 level accuracy, instruction following, and UI, all for a meaningfully lower cost.”

In a simulated business environment, Sonnet 4.6 nearly tripled the earnings of its predecessor over the course of a year, suggesting sharply improved decision-making in complex, long-horizon tasks. (Source: Anthropic, Vending-Bench Arena)

A simulated business competition reveals how AI agents plan over months, not minutes

Buried in the technical details is a capability that hints at where autonomous AI agents are heading. Sonnet 4.6’s 1M token context window can hold entire codebases, lengthy contracts, or dozens of research papers in a single request. Anthropic says the model reasons effectively across all that context — a claim the company demonstrated through an unusual evaluation.

The Vending-Bench Arena tests how well a model can run a simulated business over time, with different AI models competing against each other for the biggest profits. Without human prompting, Sonnet 4.6 developed a novel strategy: it invested heavily in capacity for the first ten simulated months, spending significantly more than its competitors, and then pivoted sharply to focus on profitability in the final stretch. The model ended its 365-day simulation at approximately $5,700 in balance, compared to Sonnet 4.5’s roughly $2,100.

This kind of multi-month strategic planning, executed autonomously, represents a qualitatively different capability than answering questions or generating code snippets. It is the type of long-horizon reasoning that makes AI agents viable for real business operations — and it helps explain why Anthropic is positioning Sonnet 4.6 not just as a chatbot upgrade, but as the engine for a new generation of autonomous systems.

Anthropic’s Sonnet 4.6 arrives as the company expands into enterprise markets and defense

This release does not arrive in a vacuum. Anthropic is in the middle of the most consequential stretch in its history, and the competitive landscape is intensifying on every front.

On the same day as this launch, TechCrunch reported that Indian IT giant Infosys announced a partnership with Anthropic to build enterprise-grade AI agents, integrating Claude models into Infosys’s Topaz AI platform for banking, telecoms, and manufacturing. Anthropic CEO Dario Amodei told TechCrunch there is “a big gap between an AI model that works in a demo and one that works in a regulated industry,” and that Infosys helps bridge it. TechCrunch also reported that Anthropic opened its first India office in Bengaluru, and that India now accounts for about 6% of global Claude usage, second only to the U.S. The company, which CNBC reported is valued at $183 billion, has been expanding its enterprise footprint rapidly.

Meanwhile, Anthropic president Daniela Amodei told ABC News last week that AI would make humanities majors “more important than ever,” arguing that critical thinking skills would become more valuable as large language models master technical work. It is the kind of statement a company makes when it believes its technology is about to reshape entire categories of white-collar employment.

The competitive picture for Sonnet 4.6 is also notable. The model outperforms Google’s Gemini 3 Pro and OpenAI’s GPT-5.2 on multiple benchmarks. GPT-5.2 trails on agentic computer use (38.2% vs. 72.5%), agentic search (77.9% vs. 74.7% for Sonnet 4.6’s non-Pro score), and agentic financial analysis (59.0% vs. 63.3%). Gemini 3 Pro shows competitive performance on visual reasoning and multilingual benchmarks, but falls behind on the agentic categories where enterprise investment is surging.

The broader takeaway may not be about any single model. It is about what happens when Opus-class intelligence becomes available for a few dollars per million tokens rather than a few tens of dollars. Companies that were cautiously piloting AI agents with small deployments now face a fundamentally different cost calculus. The agents that were too expensive to run continuously in January are suddenly affordable in February.

Claude Sonnet 4.6 is available now on all Claude plans, Claude Cowork, Claude Code, the API, and all major cloud platforms. Anthropic has also upgraded its free tier to Sonnet 4.6 by default. Developers can access it immediately using claude-sonnet-4-6 via the Claude API.
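For developers who want to try it, a minimal call through Anthropic’s Python SDK might look like the sketch below, using the claude-sonnet-4-6 model ID mentioned above. The prompt and token limit are placeholders; consult Anthropic’s documentation for current SDK details.

```python
# Minimal sketch of calling the model through Anthropic's Python SDK
# (pip install anthropic; ANTHROPIC_API_KEY set in the environment).
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-sonnet-4-6",   # model ID cited in the article
    max_tokens=1024,             # placeholder output budget
    messages=[
        {"role": "user", "content": "Summarize this pull request diff: ..."},
    ],
)

# The response is a list of content blocks; print the text of the first one.
print(message.content[0].text)
```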

A YouTuber’s $3M Movie Nearly Beat Disney’s $40M Thriller at the Box Office

Mark Fischbach, the YouTube creator known as Markiplier who has spent nearly 15 years building an audience of more than 38 million subscribers by playing indie-horror video games on camera, has pulled off something that most independent filmmakers never manage — a self-financed, self-distributed debut feature that has grossed more than $30 million domestically against a $3 million budget.

Iron Lung, a 127-minute sci-fi adaptation of a video game Fischbach wrote, directed, starred in, and edited himself, opened to $18.3 million in its first weekend and has since doubled that figure worldwide in just two weeks, nearly matching the $19.1 million debut of Send Help, a $40 million thriller from Disney-owned 20th Century Studios. Fischbach declined deals from traditional distributors and instead spent months booking theaters privately, encouraging fans to reserve tickets online; when prospective viewers found the film wasn’t screening in their city, they called local cinemas to request it, eventually landing Iron Lung on more than 3,000 screens across North America — all without a single paid media campaign.

How the uninvestable is becoming investable

Venture capital has long avoided ‘hard’ sectors such as government, defence, energy, manufacturing, and hardware, viewing them as uninvestable because startups have limited scope to challenge incumbents. Instead, investors have prioritised fast-moving and lightly regulated software markets with lower barriers to entry.

End users in these hard industries have paid the price, as a lack of innovation funding has left them stuck with incumbent providers that continue to deliver clunky, unintuitive solutions that are difficult to migrate from.

However, that perception is now shifting. Investors are responding to new market dynamics within complex industries and the evolving methods startups are using to outperform incumbents. 

Government technology spending more than doubled between 2021 and 2025, while defence technology more than doubled in 2025 alone, and we see similar trends in robotics, industrial technology, and healthcare. This signals a clear change in investor mindset as AI-first approaches are changing adoption cycles.

Shifting investor priorities

Historically, those sectors had been viewed as incompatible with venture capital norms: slow procurement cycles, strict regulation, heavy operational and capital requirements, as well as deep integration into physical systems.

For example, on a global level, public procurement can take years, constrained by budgeting cycles, legislation and accountability frameworks. Energy projects must comply with regulatory regimes and national permitting structures, and infrastructure and hardware deployments require extensive certification and long engineering cycles.

Government and public-sector buyers also tend to prioritise reliability, compliance, track record, and legacy relationships over speed, meaning major contracts often go to incumbents rather than startups. Similar dynamics exist in construction, mining, logistics, and manufacturing, all sectors that are still dominated by legacy vendors, complex supply chains, and thin operational margins.

That view is now changing. Capital is increasingly flowing into sectors once seen as too bureaucratic or operationally complex. Beyond those headline sectors, construction tech, industrial automation, logistics software, healthcare, and public-sector tooling are attracting record levels of early-stage and growth capital.

The key question is: what is driving this shift in investor thinking?

What’s causing the change

The shift in investor priorities is partly driven by macro and geopolitical pressures. Supply-chain disruption, energy insecurity and infrastructure fragility have elevated industrial resilience to a national priority. Governments are investing heavily in grid modernisation, logistics networks, and critical infrastructure, while public institutions face mounting pressure to digitise procurement, compliance, and workflow systems.

What were once considered slow, bureaucratic markets are now seen as structurally supported by policy and long-term demand.

 AI is the driving force and is reshaping traditionally hard industries. By lowering the cost of building sophisticated software and enabling immediate performance gains with shorter adoption cycles, AI allows startups to compete with incumbents in sectors such as construction, mining, manufacturing, logistics, and public services from day one.

As software becomes easier to replicate, defensibility is shifting toward operational depth, substantial UI/UX improvements, speed to market, and more seamless integration into complex real-world systems.

Finally, saturation in horizontal SaaS has pushed investors to look elsewhere for differentiated returns. Crowded software categories offer diminishing breakout potential and are often threatened by OpenAI and Anthropic’s fast pace of innovation, whereas regulated and infrastructure-heavy sectors provide less competition, stronger pricing power, higher switching costs, and gigantic TAMs.

SAP, with a market cap of around $200 billion, is just one example, and the same goes for Caterpillar, Siemens, Big Utilities, Big Pharma, and many others.

Regulation, once viewed as a deterrent, is increasingly understood as a moat. Startups that successfully navigate procurement frameworks, compliance regimes, and industry standards build advantages that are difficult for new entrants to replicate and cannot be vibe-coded.

Founders leading the way

Legacy players, while trying to adopt new AI tooling as quickly as they can, still struggle to adapt their workflows and scale innovation as quickly as younger companies can. Their dominance has relied on the high cost of switching away from their solutions, but as attention and investment shift toward hard sectors, incumbents can no longer rely solely on their brand reputation.

Even industry leaders like Salesforce are relying more on acquisitions to keep up, showing how new technology and easier alternatives are lowering switching costs and making it harder for established companies to hold onto customers.

Startups are also increasingly being built by innovative industry specialists, who aren’t confined by the same limitations as legacy players. Many startup founders in defence, energy, healthcare, and government procurement come directly from these industries or have unique insights into their inner weaknesses.

Startups moving the narrative

The next wave of disruption will hit legacy companies hard, as startups prove they can innovate with the speed, flexibility and focus that incumbents often lack. In sectors long protected by regulation or procurement friction, younger companies are demonstrating that modern software, AI, and new business models can unlock performance improvements that established players struggle to match. Investor playbooks are already evolving, and that shift will likely accelerate in the year ahead.

At the same time, the total addressable market ceiling has been lifted. By moving beyond narrow software categories and into the physical economy, startups are targeting markets measured not in billions, but in trillions. As a result, we should expect more $100 billion companies to be built in this cycle. It’s no longer just about building better software.

It’s about rebuilding foundational sectors of the global economy.

Garmin Partners With Giant Bicycles India to Bring Cycling Tech to Retail Stores

To strengthen its appeal to fitness enthusiasts, Garmin has announced a new retail partnership with Giant Bicycles India, bringing its cycling-focused wearables and performance tech to select Giant stores across the country. As part of the first phase, Garmin products are now available at select Giant retail stores in Mumbai, Pune, and Jaipur. Customers visiting these outlets can explore Garmin’s cycling ecosystem alongside Giant’s bicycle range.

Garmin’s Cycling Ecosystem Comes to Giant Stores

Garmin’s portfolio spans a wide range of cycling-focused products, including advanced bike computers with detailed ride tracking and mapping, TACX indoor trainers, GPS smartwatches, power meters, and power pedals. The company also offers smart bike lights with rear-view radar alerts, performance trackers, and 4K ride-recording cameras designed for documenting rides.

In particular, the TACX indoor trainers allow cyclists to replicate real-world terrain indoors using their existing Giant bicycles. Meanwhile, Garmin’s cycling computers and performance tools provide detailed insights into speed, cadence, heart rate, endurance, and navigation. All these features are increasingly sought after by serious riders and training-focused enthusiasts.

Deepak Raina, Director at AMIT GPS & Navigation LLP, said, “Data has become central to how cyclists understand and improve their performance. Access to accurate ride metrics, training insights, and navigation information helps riders make better decisions and measure real progress over time without compromising on the safety on roads by increasing situational awareness. Through our partnership with Giant Bicycles India, we are bringing reliable, data-driven technology closer to cyclists at the point where they invest in their riding journey.”

Adding to this, Varun Bagadiya, Director at Giant Bicycles India, said, “We are pleased to partner with Garmin, a brand synonymous with precision and performance. Offering Garmin’s full product portfolio at our stores allows us to deliver a more comprehensive experience to cyclists, supporting them not just with world-class bicycles but also with advanced performance technology.”

Anthropic releases Sonnet 4.6 | TechCrunch

Anthropic has released a new version of its mid-size Sonnet model, keeping pace with the company’s four-month update cycle. In a post announcing the new model, Anthropic emphasized improvements in coding, instruction-following, and computer use.

Sonnet 4.6 will be the default model for Free and Pro plan users.

The beta release of Sonnet 4.6 will include a context window of 1 million tokens, twice the size of the largest window previously available for Sonnet. Anthropic described the new context window as “enough to hold entire codebases, lengthy contracts, or dozens of research papers in a single request.”

The release comes just two weeks after the launch of Opus 4.6, with an updated Haiku model likely to follow in the coming weeks.

The launch comes with a new set of record benchmark scores, including OSWorld for computer use and SWE-bench for software engineering. But perhaps the most impressive is its 60.4% score on ARC-AGI-2, meant to measure skills specific to human intelligence. The score puts Sonnet 4.6 above most comparable models, although it still trails models like Opus 4.6, Gemini 3 Deep Think, and one refined version of GPT 5.2.

SurrealDB 3.0 wants to replace your five-database RAG stack with one

Building retrieval-augmented generation (RAG) systems for AI agents often involves using multiple layers and technologies for structured data, vectors and graph information. In recent months it has also become increasingly clear that agentic AI systems need memory, sometimes referred to as contextual memory, to operate effectively.

The complexity and synchronization of having different data layers to enable context can lead to performance and accuracy issues. It's a challenge that SurrealDB is looking to solve.

SurrealDB on Tuesday launched version 3.0 of its namesake database alongside a $23 million Series A extension, bringing total funding to $44 million. The company has taken a different architectural approach from relational databases like PostgreSQL, native vector databases like Pinecone, or graph databases like Neo4j. The OpenAI engineering team recently detailed how it scaled Postgres to 800 million users using read replicas — an approach that works for read-heavy workloads. SurrealDB takes a different approach: store agent memory, business logic, and multi-modal data directly inside the database. Instead of synchronizing across multiple systems, vector search, graph traversal, and relational queries all run transactionally in a single Rust-native engine that maintains consistency.

"People are running DuckDB, Postgres, Snowflake, Neo4j, Quadrant or Pinecone all together, and then they're wondering why they can't get good accuracy in their agents," CEO and co-founder Tobie Morgan Hitchcock told VentureBeat. "It's  because they're having to send five different queries to five different databases which only have the knowledge or the context that they deal with."

The architecture has resonated with developers, with 2.3 million downloads and 31,000 GitHub stars to date for the database. Existing deployments span edge devices in cars and defense systems, product recommendation engines for major New York retailers, and Android ad serving technologies, according to Hitchcock.

Agentic AI memory baked into the database

SurrealDB stores agent memory as graph relationships and semantic metadata directly in the database, not in application code or external caching layers. 

The Surrealism plugin system in SurrealDB 3.0 lets developers define how agents build and query this memory; the logic runs inside the database with transactional guarantees rather than in middleware.

Here's what that means in practice: When an agent interacts with data, it creates context graphs that link entities, decisions and domain knowledge as database records. These relationships are queryable through the same SurrealQL interface used for vector search and structured data. An agent asking about a customer issue can traverse graph connections to related past incidents, pull vector embeddings of similar cases, and join with structured customer data — all in one transactional query.
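As a rough illustration of that pattern, the sketch below shows what such a combined query might look like in SurrealQL, held in a Python string that could be submitted through any SurrealDB client. The table names, fields, and graph edge (customer_issue, reported, incident, embedding) are hypothetical, and the exact function and operator syntax should be checked against SurrealDB's documentation; the point is that graph traversal, vector similarity, and a structured filter sit in one statement instead of three systems.

```python
# Illustrative only: a single SurrealQL statement combining graph traversal,
# vector similarity, and structured filtering. Table/field names are hypothetical
# and the syntax should be verified against SurrealDB's docs before use.

UNIFIED_QUERY = """
SELECT
    id,
    summary,
    ->reported->incident.* AS related_incidents,                 -- graph traversal
    vector::similarity::cosine(embedding, $query_vec) AS score   -- vector search
FROM customer_issue
WHERE customer = $customer_id                                    -- structured filter
ORDER BY score DESC
LIMIT 5;
"""

params = {
    "customer_id": "customer:acme",
    "query_vec": [0.12, -0.08, 0.33],  # embedding of the agent's question (truncated)
}

# A SurrealDB client would submit the statement and parameters in one round trip,
# e.g. results = db.query(UNIFIED_QUERY, params)
```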

"People don't want to store just the latest data anymore," Hitchcock said. "They want to store all that data. They want to analyze and have the AI understand and run through all the data of an organization over the last year or two, because that informs their model, their AI agent about context, about history, and that can therefore deliver better results."

How SurrealDB's architecture differs from traditional RAG stacks

Traditional RAG systems query a separate database for each data type. Developers write separate queries for vector similarity search, graph traversal, and relational joins, then merge results in application code. This creates synchronization delays as queries round-trip between systems.

In contrast, Hitchcock explained that SurrealDB stores data as binary-encoded documents with graph relationships embedded directly alongside them. A single query through SurrealQL can traverse graph relationships, perform vector similarity searches, and join structured records without leaving the database.

That architecture also affects how consistency works at scale: Every node maintains transactional consistency, even at 50+ node scale, Hitchcock said. When an agent writes new context to node A, a query on node B immediately sees that update. No caching, no read replicas.

"A lot of our use cases, a lot of our deployments are where data is constantly updated and the relationships, the context, the semantic understanding, or the graph connections between that data needs to be constantly refreshed," he said. "So no caching. There's no read replicas. In SurrealDB, every single thing is transactional."

What this means for enterprise IT

"It's important to say SurrealDB is not the best database for every task. I'd love to say we are, but it's not. And you can't be," Hitchcock said. "If you only need analysis over petabytes of data and you're never really updating that data, then you're going to be best going with object storage or a columnar database. If you're just dealing with vector search, then you can go with a vector database like Quadrant or Pinecone, and that's going to suffice."

The inflection point comes when you need multiple data types together. The practical benefit shows up in development timelines. What used to take months to build with multi-database orchestration can now launch in days, Hitchcock said.

Zillow teams up with ‘World of Warcraft’ to exhibit virtual homes inside popular game

Seattle-based real estate company Zillow has partnered with the company behind the long-running online game World of Warcraft in order to showcase players’ creativity by exhibiting their virtual homes.

A new microsite, “Zillow for Warcraft,” allows users to explore an assortment of designs for in-game housing, both those made by players and by members of WoW’s development team. Some of these homes will be presented using the same methods as Zillow’s real-world virtual tours, such as with 3D modeling and SkyTour visuals, so users can poke around a fantasy kitchen just as if it was real.

World of Warcraft, published and developed by Microsoft subsidiary Blizzard Entertainment, recently unveiled player housing as a feature of its newest paid expansion, Midnight. Owners of the expansion can opt to take their characters to an in-game island, where they’re given a plot of land and a small house to customize and decorate however they wish.

(Midnight also involves a life-and-death struggle against a shadow-wielding antagonist who plans to seize and corrupt the very heart of the game’s world, but in much of its marketing so far, Blizzard has chosen to emphasize the new home-building feature. Go figure.)

“Player housing is a milestone moment for the World of Warcraft community, and we wanted to honor it in a way that felt authentic and unexpected,” Beverly W. Jackson, vice president of brand and product marketing at Zillow, said in a press release.

Jackson continued: “Zillow exists at the center of how people think and talk about home, and gaming has become another powerful expression of that. This collaboration brings two worlds together, celebrating home as both a place to belong and a place to escape into something that feels honest and personal.”

As part of the Zillow collaboration, WoW players will be able to unlock a number of decorative items for their in-game homes, such as a unique doormat.

Notably, Zillow for Warcraft features no transactions at all. You will not be able to exchange real or virtual money for anything seen on the website. It’s simply a free virtual tour of what players have been able to accomplish with WoW’s new housing system.

The deal with Zillow is one of several bizarre new brand deals that Blizzard has made for Midnight, including a collaboration with Pinterest that can unlock an in-game camera. While the primary driver of World of Warcraft is still widespread armed conflict in an increasingly vast fantasy universe, the introduction of housing seems to have spurred Blizzard into also pitching it to a new audience as a cozy house-building simulator. It’s simply that to get new furniture, you occasionally may have to go to another dimension and beat it out of a dragon.

World of Warcraft celebrated its 21st anniversary in November. Midnight, its 11th expansion, is planned to go live on March 2.

The Complicated Legacy Of Mind Controlled Toys

Imagine a line of affordable toys controlled by the player’s brainwaves. By interpreting biosignals picked up by the dry electroencephalogram (EEG) electrodes in an included headset, the game could infer the wearer’s level of concentration, through which it would be possible to move physical objects or interact with virtual characters. You might naturally assume such devices would be on the cutting-edge of modern technology, perhaps even a spin-off from one of the startups currently investigating brain-computer interfaces (BCIs).

But the toys in question weren’t the talk of 2025’s Consumer Electronics Show, nor 2024, or even 2020. In actual fact, the earliest model is now nearly as old as the original iPhone. Such is the fascinating story of a line of high-tech toys based on the neural sensor technology developed by a company called NeuroSky, the first of which was released all the way back in 2009.

Yet despite considerable interest leading up to their release — fueled at least in part by the fact that one of the models featured Star Wars branding and gave players the illusion of Force powers — the devices failed to make any lasting impact, and have today largely fallen into obscurity. The last toy based on NeuroSky’s technology was released in 2015, and disappeared from the market only a few years later.

I had all but forgotten about them myself, until I recently came across a complete Mattel Mindflex at a thrift store for $8.99. It seemed a perfect opportunity to not only examine the nearly 20-year-old toy, but to take a look at the origins of the product, and find out what ultimately became of NeuroSky’s EEG technology. Was the concept simply ahead of its time? In an era when most people still had flip phones, perhaps consumers simply weren’t ready for this type of BCI. Or was the real problem that the technology simply didn’t work as advertised?

Shall We Play a Game?

NeuroSky was founded in 1999 to explore commercial applications for BCIs, and as such, they identified two key areas where they thought they could improve upon hardware that was already on the market: cost, and ease of use.

Cost is an easy enough metric to understand and optimize for in this context — if you’re trying to incorporate your technology into games and consumer gadgets, cheaper is better. To reduce costs, their hardware wasn’t as sensitive or as capable as what was available in the medical and research fields, but that wasn’t necessarily a problem for the sort of applications they had in mind.

Of course, it doesn’t matter how cheap you make the hardware if manufacturers can’t figure out how to integrate it into their products, or users can’t make any sense of the information. The average person certainly wouldn’t be able to make heads or tails of the raw data coming from electroencephalography or electromyography sensors, and the engineers looking to graft BCI features into their consumer products weren’t likely to do much better.

NeuroSky engineer Horance Ko demonstrates a prototype headset in 2007.

To address this, NeuroSky’s technology presented the user with simple 0 to 100 values for more easily conceptualized parameters like concentration and anxiety based on their alpha and beta brainwaves. This made integration into consumer devices far simpler, albeit at the expense of accuracy and flexibility. The user could easily see when values were going up and down, but whether or not those values actually corresponded with a given mental state was entirely up to the interpretation being done inside the hardware.

These values were easy to work with, and with some practice, NeuroSky claimed the user could manipulate them by simply focusing their thoughts. So in theory, a home automation system could watch one of these mental parameters and switch on the lights when the value hit a certain threshold. But the NeuroSky BCI could never actually sense what the user was thinking — at best, it could potentially determine how hard an individual was concentrating on a specific thought. Although in the end, even that was debatable.
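As a toy illustration of how a product might consume those simplified 0 to 100 values, here is a sketch of the light-switch idea from the paragraph above. The read_attention() function stands in for whatever the NeuroSky hardware actually reported and is purely hypothetical.

```python
# Toy sketch: switch the lights based on a 0-100 "attention" value, the kind of
# simplified output NeuroSky exposed. read_attention() is a hypothetical stand-in.
import random
import time

ATTENTION_THRESHOLD = 70  # arbitrary cutoff for "concentrating hard enough"

def read_attention() -> int:
    """Stand-in for the headset's 0-100 attention value (hypothetical)."""
    return random.randint(0, 100)

def poll_and_switch_lights(samples: int = 20) -> None:
    lights_on = False
    for _ in range(samples):
        attention = read_attention()
        if attention >= ATTENTION_THRESHOLD and not lights_on:
            lights_on = True
            print(f"attention={attention}: lights ON")
        elif attention < ATTENTION_THRESHOLD and lights_on:
            lights_on = False
            print(f"attention={attention}: lights OFF")
        time.sleep(0.5)

if __name__ == "__main__":
    poll_and_switch_lights()
```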

The Force Awakens

After a few attempted partnerships that never went anywhere, NeuroSky finally got Mattel interested in 2009. The result was the Mindflex, which tasked the player with maneuvering a floating ball through different openings. The height of the ball was set by the speed of the blower motor in the base of the unit, which in turn was controlled by the output of the NeuroSky headset. Trying to get two actionable data points out of the hardware was asking a bit much, so moving the ball left and right had to be done by hand with a knob.

But while the Mindflex was first, the better known application for NeuroSky’s hardware in the entertainment space is certainly the Star Wars Jedi Force Trainer released by Uncle Milton a few months later. Fundamentally, the game worked the same way as the Mindflex, with the user again tasked with controlling the speed of a blower motor that would raise and lower a ball.

But this time, the obstacles were gone, as was the need for a physical control. It was a simpler game in all respects. Even the ball was constrained in a clear plastic tube, rather than being held in place by the Coandă effect as in the Mindflex. In theory, this made for a less distracting experience, allowing the user to more fully focus on trying to control the height of the ball with their mental state.

But the real hook, of course, was Star Wars. Uncle Milton cleverly wrapped the whole experience around the lore from the films, putting the player in the role of a young Jedi Padawan that’s using the Force Trainer to develop their telekinetic abilities. As the player attempted to accurately control the movement of the ball, voice clips of Yoda would play to encourage them to concentrate harder and focus their minds on the task at hand. Even the ball itself was modeled after the floating “Training Remote” that Luke uses to practice his lightsaber skills in the original film.

The Force Trainer enjoyed enough commercial success that Uncle Milton produced the Force Trainer II in 2015. This version used a newer NeuroSky headset which featured Bluetooth capability, and paired it with an application running on a user-supplied Android or Apple tablet. The tablet was inserted into a base unit which was able to display “holograms” using the classic Pepper’s Ghost illusion. Rather than simply moving a ball up and down, the young Jedi in training would have to focus their thoughts to virtually lift a 3D model of an X-Wing out of the muck or knock over groups of battle droids.

Unfortunately, Force Trainer II didn’t end up being as successful as its predecessor, and was discontinued a few years later. Even though the core technology was the same as in 2009, the reviews I can still find online for this version of the game are scathing. It seems like most of the technical problems came from the fact that users had to connect the headset to their own device, which introduced all manner of compatibility issues. Others claimed that the game doesn’t actually read the player’s mental state at all, and that the challenges can be beaten even if you don’t wear the headset.

Headset Hacking

The headsets for both the Mindflex and the original Force Trainer use the same core hardware, and NeuroSky even released their own “developer version” of the headset not long after the games hit the market which could connect to the computer and offered a free SDK.

Over the years, there have been hacks to use the cheaper Mindflex and Force Trainer headsets in place of NeuroSky’s developer version, some of which have graced these very pages. But somehow we missed what seems to be the best source of information: How to Hack Toy EEGs. This page not only features a teardown of the Mindflex headset, but shows how it can be interfaced with an Arduino so brainwave data can be read and processed on a computer.

I haven’t gone too far down this particular rabbit hole, but I did connect the headset up to my trusty Bus Pirate 5 and could indeed see it spewing out serial data. Paired with a modern wireless microcontroller, the Mindflex could still be an interesting device for BCI experimentation all these years later. Though if you can pick up the Bluetooth Force Trainer II headset for cheap on eBay, it sounds like it would save you the trouble of having to hack it yourself.
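If you’d rather skip the Bus Pirate, a quick way to peek at that serial stream from a computer is pyserial, as in the sketch below. The port name and the 9600 baud rate are assumptions based on what the toy-EEG hacking write-ups report, so adjust them for your own setup.

```python
# Quick look at the raw serial stream coming off a hacked Mindflex headset.
# Assumes the headset's serial line is wired to a USB-serial adapter and that
# it runs at 9600 baud, as the toy-EEG hacking guides report. Adjust as needed.
import serial  # pip install pyserial

PORT = "/dev/ttyUSB0"   # placeholder: whatever port your adapter shows up as
BAUD = 9600             # assumed rate for the headset's serial output

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    for _ in range(10):
        chunk = ser.read(32)      # grab 32 bytes at a time
        print(chunk.hex(" "))     # dump as hex to eyeball the packet structure
```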

My Mind to Your Mind

So the big question: does the Mindflex, and by extension NeuroSky’s 2009-era BCI technology, actually work?

Before writing this article, I spent the better part of an hour wearing the Mindflex headset and trying to control the LEDs on the front of the device that are supposed to indicate your focus level. I can confidently say that it’s doing something, but it’s hard to say what. I found that getting the focus indicator to drop down to zero was relatively easy (story of my life) and nearly 100% repeatable, but getting it to go in the other direction was not as consistent. Sometimes I could make the top LEDs blink on and off several times in a row, but then seconds later I would lose it and struggle to light up even half of them.

Some critics have said that the NeuroSky is really just detecting muscle movement in the face — picking up not the wearer’s focus level so much as a twitch of the eye or a furrowed brow which makes it seem like the device is responding to mental effort. For what it’s worth, the manual specifically says to try and keep your face as still as possible, and I couldn’t seem to influence the focus indicator by blinking or making different facial expressions. Although if it actually was just detecting the movement of facial muscles, that would still be a neat trick that offered plenty of potential applications.

I also think that a lot of the bad experiences people have reported with the technology are probably rooted in their own unrealistic expectations. If you tell a child that a toy can read their mind and that they can move an object just by thinking about it, they’re going to take that literally. So when they put on the headset and the game doesn’t respond to their mental image of the ball moving or the LEDs lighting up, it’s only natural they would get frustrated.

So what about the claims that the Force Trainer II could be played without even wearing the headset? If I had to guess, I would say that if there’s any fakery going on, it’s in the game itself and not the actual NeuroSky hardware. Perhaps somebody was worried the experience would be too frustrating for kids, and goosed the numbers so the game could be beaten no matter what.

As for NeuroSky, they’re still making BCI headsets and offer a free SDK for them. You can buy their MindWave Mobile 2 on Amazon right now for $130, though the reviews aren’t exactly stellar. They also continue to offer a single-chip EEG sensor (datasheet, PDF) that you can integrate into your own projects, the daughterboard for which looks remarkably similar to what’s in the Mindflex headset. Despite the shaky response to the devices that have hit the market so far, it seems NeuroSky hasn’t given up on the dream of bringing affordable brain-computer interfaces to the masses.

Tech

5 Classic Cars From The ’60s That Nobody Talks About Today

Published

on

Many remember the 1960s only for its fast muscle cars, but the decade was easily one of the most significant for the automotive world more broadly. It was at this time that we got the Corvette Stingray, Jaguar E-Type, Ford Mustang, and Ferrari 250 GTO, among many other legends. And that’s precisely the problem: those cars were so good that today they inevitably dominate auction headlines and take up all the available oxygen in conversations about the ’60s auto world.

Those cars’ fame is deserved, as they are remarkable machines, but they overshadow other remarkable vehicles that merely lacked the right combination of marketing, racing success, or cultural timing. Every car on this list has a story just as interesting as those of its more famous contemporaries.

One car combined European styling with the great hulking V8 of a muscle car. Another was simply too odd to win mainstream acceptance despite being an engineering marvel. Still another was a bold experiment that failed commercially but succeeded artistically. Tragically, history has largely forgotten these cars, so we’re going to give them their due.

Facel Vega HK500

The French are no strangers to big, luxurious grand touring cars, and the Facel Vega HK500 was one of the best ever made. The company was originally called simply Facel, and it built a model called the Vega, aimed primarily at affluent buyers. The Vega became so popular and so synonymous with the brand that the company name was later changed to Facel Vega. It would have been a bit like Ford renaming itself Ford F-150: an odd move, but it was the 1960s, and far weirder things were happening.

The company made fewer than 500 HK500s, and they sold for a whopping $9,795 when new. The engine in this gorgeous car was actually American, supplied to Facel by Chrysler. The “Typhoon” V8 made either 350 hp or 385 hp, depending on how it was set up: models with the three-speed automatic gearbox used a single four-barrel carburetor and were rated at 350 hp, while the four-speed manual cars ran dual four-barrel carburetors and were rated at 385 hp; at least, that was the claim.

Most sources, however, quote the car at 360 hp and 460 lb-ft of torque. A 1960 HK500 failed to sell at RM Sotheby’s 2023 Paris sale, one of the premier automotive auctions in the world, which should tell you everything about how criminally under-appreciated the HK500 is.

Sunbeam Tiger

Everything about the Sunbeam Tiger screams 1960s, from the long flowing lines and chromed windscreen surround to the tubular bumper running across the grille opening. The Tiger was essentially a British-made Sunbeam Alpine with a Ford 260 cubic-inch V8 stuffed into it, by Carroll Shelby no less. As for engine choices, cars from the first half of the 1960s used Ford’s 260 cubic-inch unit making 164 hp and 258 lb-ft of torque, while those from 1966 got an upgraded 289 cubic-inch V8 making 210 hp.

For those keeping track of the numbers, the Mk I Sunbeam Tiger carried about 200 pounds more heft than the standard Alpine. However, it also had roughly double the power, which balanced things out nicely. Further down the line there was even a Sunbeam Tiger with a Ford big block shoved under the bonnet, though that one never saw mass production.

Sunbeam Tigers with the 260 managed a 0-60 mph time of 8 seconds, while the ones with the bigger 289 fared slightly worse at 8.9 seconds, and the car cost $3,500 when new, which was actually quite reasonable for a luxury sports cabriolet of the time. Additionally, one example ended up selling for $43,680, which shows that even some of these obscure 1960s classics can hold real resale value.

Iso Grifo GL

At first glance, the Iso Grifo GL looks like a Ferrari 250 2+2 wearing the front end of a Dodge something-or-other. In other words, it looks positively brilliant. Its sloping rear end, mega-long bonnet, and set-back driving position represent the epitome of 1960s styling. And that Ferrari-reminiscent design is far from a coincidence: the creator of the Iso Grifo GL was a former Ferrari employee. The GL in the name stands for “Gran Lusso,” which translates to “great luxury” in English.

Iso is better known for its famous Isetta bubble car, which people either love or hate. Later it made a coupe called the Rivolta, followed a few years afterward by the Grifo GL. When it launched, the Grifo GL looked very different from anything Iso had attempted before, and it was eye-catching to say the least.

Power came from a Chevy small-block 327 cubic-inch engine (the same one used in the Rivolta we mentioned earlier), at least at first. In initial models the 327 made 300 hp, though it was later up-rated to 340-350 hp. A short while after that, the engineers decided to shove in a Chevy 427 cubic-inch engine, which required some modifications to the structure of the car; these were well worth it, as power output now stood at 435 hp.

Gordon Keeble

The strange thing about the Gordon Keeble (yes, that’s its actual name) was that there was nothing strange about it. The two-door came in an array of colors and had funky, slightly off-angle headlamps that sat above the indicator lights up front. The only tell that gave away the Keeble’s defining feature was the rather subtle air scoop on the hood. That scoop fed fresh air to a 4.7-liter V8 engine from GM that produced 300 hp along with 360 lb-ft of torque, all available from a respectable 5,000 RPM.

Yes, it’s another car pairing European styling with American power, just like the Iso Grifo GL and the Sunbeam Tiger. The Keeble had an uber-impressive (for the 1960s, anyway) 0-60 mph time of 6 seconds and ran the quarter mile in 15.3 seconds, at the end of which the speedo would read somewhere near 98 mph. Impressiveness aside, all those numbers carry a touch of irony, because the badge on the hood was a tortoise, not exactly an animal known for its speediness. Perhaps an eagle or condor would have been a better bet, but we highly doubt this trivial detail matters to the owners of the 99 Gordon Keeble units that were made.

Maserati Sebring

While it may currently be one of the worst-depreciating car brands on the market, Maserati has always known how to make a good-looking car. Back in 1962, the company came out with one of its early road cars, the Sebring. It shares its name with the famous American racetrack down south, and like so many other cars of this decade it has a lookalike, though it’s tough to say whether it inspired the Lamborghini 400GT or the other way around.

The engine in the Maserati Sebring was a 3.5-liter inline-six that made an impressive 235 hp and 261 lb-ft of torque. Notable features on the initial Mk I (or Series I, as they are called) Sebrings included disc brakes at all four corners and air conditioning. It’s also worth noting that this wasn’t the only engine offered on the Sebring: a slightly larger 3.7-liter was added down the line, with a 4.0-liter option arriving as well.

Reaching 60 mph took about 8.5 seconds, and it could run the quarter-mile in a stellar 15.6 seconds. The Sebring name was adopted because, a few years prior, a Maserati driver had won an important race for the brand at that very track. When they come up for auction (only a few hundred were ever made), these cars fetch astronomical sums, often well into the several-hundred-thousand-dollar range.
