
Tech

Building a Dependency-Free GPT on a Custom OS

The construction of a large language model (LLM) depends on many things: banks of GPUs, vast reams of training data, massive amounts of power, and matrix manipulation libraries like NumPy. For models with lower requirements, though, it’s possible to do away with all of that, including the software dependencies. As someone who’d already built a full operating system as a C learning project, [Ethan Zhang] was no stranger to intimidating projects, and as an exercise in minimalism, he decided to build a generative pre-trained transformer (GPT) model in the kernel space of his operating system.

As with a number of other small demonstration LLMs, this was inspired by [Andrej Karpathy]’s MicroGPT, specifically by its lack of external dependencies. The first step was to strip away every unnecessary element from MooseOS, the operating system [Ethan] had previously written, including the GUI, most drivers, and the filesystem. All that’s left is the kernel, and KernelGPT runs on this. To get around the lack of a filesystem, the training data was converted into a header to keep it in memory — at only 32,000 words, this was no problem. Like the original MicroGPT, this is trained on a list of names, and predicts new names. Due to some hardware issues, [Ethan] hasn’t yet been able to test this on a physical computer, but it does work in QEMU.
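Baking data into a header is a standard trick for binaries that can't read files. As a rough sketch of the general idea — not [Ethan]'s actual tooling, and with invented array and macro names — a small generator script can turn a word list into a C header that the kernel simply includes:

```python
# Sketch of the "training data as a header" trick: emit a C header holding
# a word list as a static string array, so a kernel with no filesystem can
# carry its training data inside the binary. Names here are hypothetical,
# not KernelGPT's actual identifiers.

def words_to_c_header(words, array_name="training_words"):
    """Emit C source for a static string array holding the words."""
    lines = [
        "#ifndef TRAINING_DATA_H",
        "#define TRAINING_DATA_H",
        "",
        f"static const char *{array_name}[] = {{",
    ]
    for word in words:
        escaped = word.replace("\\", "\\\\").replace('"', '\\"')
        lines.append(f'    "{escaped}",')
    lines += [
        "};",
        f"static const unsigned {array_name}_count = {len(words)};",
        "",
        "#endif /* TRAINING_DATA_H */",
    ]
    return "\n".join(lines)

header = words_to_c_header(["emma", "olivia", "noah"])
print(header)
```

The kernel-side code then just indexes the generated array at compile-time-known addresses — no file I/O, no filesystem driver, no loader.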

It’s quite impressive to see such a complex piece of software written solely in C, running directly on hardware; for a project which takes the same starting point and goes in the opposite direction, check out this browser-based implementation of MicroGPT. For more on the math behind GPTs, check out this visualization.


Robots beat human records at Beijing half-marathon

The winning runner at a Beijing half-marathon for humanoid robots finished the race today in 50 minutes and 26 seconds — significantly faster than the human world record of 57 minutes recently set by Jacob Kiplimo.

Comparing human and robot running times may seem unfair; one social media user observed, “my car can outrun a cheetah too.” Still, the winning time is a massive improvement over last year’s race, when the fastest robot finished in two hours and 40 minutes. (Back then, I scoffed that this “would not be an impressive time for a human.”)

The Associated Press reports that this year’s winner was built by Chinese smartphone maker Honor. It seems the winning robot wasn’t actually the fastest, as a different Honor robot finished in 48 minutes and 19 seconds. But that one was remote controlled — the 50:26 robot was autonomous and won due to weighted scoring.

About 40% of participating robots competed autonomously, while the remaining 60% were remote controlled, according to Beijing’s E-Town tech hub. Not all of them did as well as Honor’s robots, with one robot falling at the starting line and another hitting a barrier.


‘Euphoria’ Season 3 Release Schedule: When Does Episode 2 Come Out?

The HBO drama Euphoria is back with new episodes. It may be hard to believe, but the previous season wrapped up in 2022. On my TikTok “For You” page, I still see 4-year-old clips on the regular.

Season 3 takes place five years after season 2 (see our finale recap here), well after high school. The new season once again stars Zendaya, Hunter Schafer, Jacob Elordi, Sydney Sweeney, Alexa Demie, Maude Apatow, Colman Domingo and Eric Dane. It adds new guest stars such as Sharon Stone, Rosalía, Danielle Deadwyler, Natasha Lyonne and Trisha Paytas. According to an official synopsis, season 3 sees “a group of childhood friends wrestle with the virtue of faith, the possibility of redemption and the problem of evil.”

The service has swapped from HBO Max to Max and back to HBO Max again in the time it’s taken for Euphoria to return to TV, but you’ll be able to tune into the HBO streaming service for new episodes each week. Here’s a release schedule for Euphoria season 3.

When to watch Euphoria season 3 on HBO Max

In the US? You can stream episode 2 of Euphoria season 3 on HBO Max on Sunday, April 19, at 9 p.m. ET (6 p.m. PT). It’ll also air on HBO at 9 p.m. ET and PT. Subsequent installments will debut on Sundays through May 31.

  • Episode 2, America My Dream: April 19
  • Episode 3, The Ballad of Paladin: April 26
  • Episode 4, Kitty Likes to Dance: May 3
  • Episode 5, This Little Piggy: May 10
  • Episode 6, Stand Still and See: May 17
  • Episode 7, Rain or Shine: May 24
  • Episode 8, In God We Trust: May 31

HBO Max last increased its plan prices in October, raising the ad-supported tier to $11 per month, the ad-free Standard tier to $18.50 per month and the ad-free Premium tier to $23 per month.

You might be able to save money by paying upfront for 12 months of HBO Max, which costs less than paying month-by-month for a year. In addition to HBO Max’s standalone plans, you can bundle it with Disney Plus and Hulu, either with ads for all three services or without.


Google in talks with Marvell Technology to build new AI inference chips alongside Broadcom TPU programme

Summary: Google is in talks with Marvell Technology to develop two new AI chips – a memory processing unit and an inference-optimised TPU – adding a third design partner alongside Broadcom and MediaTek in its custom silicon supply chain. The discussions, which have not yet produced a signed contract, came days after Broadcom locked in a through-2031 TPU agreement and reflect Google’s shift toward inference as the dominant compute cost, as the custom ASIC market is projected to grow 45% in 2026 and reach $118 billion by 2033.

Google is in talks with Marvell Technology to develop two new chips for running AI models, according to The Information. One is a memory processing unit designed to work alongside Google’s existing Tensor Processing Units. The other is a new TPU built specifically for inference, the phase of AI where models serve users rather than learn from data. Marvell would act in a design-services role, similar to MediaTek’s involvement on Google’s latest Ironwood TPU. The discussions have not yet produced a signed contract.

The talks came days after Broadcom, Google’s primary custom chip partner, announced a long-term agreement to design and supply TPUs and networking components through 2031. The timing suggests Google is not replacing Broadcom but adding a third design partner to a supply chain that already includes Broadcom for high-performance chip variants, MediaTek for cost-optimised “e” variants at 20 to 30% lower cost, and TSMC for fabrication. The strategy is diversification, not substitution.

Why inference matters now

Google’s seventh-generation TPU, Ironwood, debuted this month as what the company calls “the first Google TPU for the age of inference.” It delivers ten times the peak performance of the TPU v5p and scales to 9,216 liquid-cooled chips in a superpod spanning roughly 10 megawatts, producing 42.5 FP8 exaflops. Google plans to build millions of Ironwood units this year. The Marvell-designed chips would supplement rather than replace Ironwood, potentially targeting different workload profiles or cost points for the growing share of Google’s compute that goes to serving AI models rather than training them.
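The quoted pod-level figures imply a per-chip number worth a quick sanity check — this is just arithmetic on the public specs above, not an official per-chip rating:

```python
# Back-of-envelope check on the quoted Ironwood superpod figures:
# 42.5 FP8 exaflops spread across 9,216 liquid-cooled chips.
chips = 9216
pod_flops_fp8 = 42.5e18  # 42.5 exaflops

per_chip_pflops = pod_flops_fp8 / chips / 1e15
print(f"~{per_chip_pflops:.2f} FP8 petaflops per chip")  # ~4.61
```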

The shift from training to inference as the primary demand driver is reshaping the chip market. Training a frontier model is a one-time event that requires enormous compute for weeks or months. Inference runs continuously, serving every query from every user, and its costs scale with demand rather than capability. As AI products reach hundreds of millions of users, inference becomes the dominant expense, and purpose-built inference silicon becomes a competitive advantage that general-purpose GPUs cannot match on cost or efficiency.
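That cost asymmetry can be made concrete with a toy model. Every figure below is invented purely to illustrate the shape of the curve, not Google's actual spend:

```python
# Toy cost model: training is a one-off expense, inference scales with
# query volume. All numbers are hypothetical placeholders.

def cumulative_cost(training_cost, cost_per_query, queries_per_day, days):
    """Total spend after `days` of serving, including the one-off training run."""
    return training_cost + cost_per_query * queries_per_day * days

TRAINING = 100e6   # $100M one-off training run (hypothetical)
PER_QUERY = 0.001  # $0.001 per served query (hypothetical)
QPD = 1e9          # 1 billion queries per day (hypothetical)

for day in (30, 180, 365):
    total = cumulative_cost(TRAINING, PER_QUERY, QPD, day)
    print(f"day {day:3}: ${total / 1e6:,.0f}M cumulative")
```

Under these made-up numbers, inference spend passes the entire training bill in about three months and keeps growing with usage forever — which is why per-query efficiency, not peak training throughput, drives the silicon strategy.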

The backstory

The Google-Marvell relationship has a longer history than this week’s report suggests. The Information reported in 2023 that Google had been working since 2022 on a chip codenamed “Granite Redux” that would use Marvell instead of Broadcom, with Google expecting to save billions of dollars annually. At the time, Google’s spokesperson called Broadcom “an excellent partner” and said the company was “productively engaged with Broadcom and multiple other suppliers for the long term.”

What changed between 2023 and now is that Google appears to have abandoned the idea of dropping Broadcom entirely. The through-2031 agreement locked in that relationship. Instead, Google is building a multi-supplier architecture in which Broadcom, MediaTek, and potentially Marvell each handle different parts of the TPU programme, competing on specific segments rather than for the entire contract. The approach mirrors how automotive companies manage component suppliers: no single vendor gets enough leverage to dictate terms.

What Marvell brings

Marvell’s data centre revenue reached a record $6.1 billion in its fiscal year ending February 2026, with total revenue of $8.2 billion, up 42% year over year. The company runs a custom silicon business with a $1.5 billion annual run rate across 18 cloud-provider design wins, building chips for Amazon (Trainium processors), Microsoft (Maia AI accelerator), and Meta (a new data processing unit), in addition to its existing work with Google on the Axion ARM CPU.

Nvidia invested $2 billion in Marvell at the end of March, partnering through NVLink Fusion to integrate Marvell’s custom chips and networking with Nvidia’s interconnect fabric. The deal positions Marvell at the intersection of both the GPU and ASIC ecosystems. In December 2025, Marvell acquired Celestial AI for up to $5.5 billion, gaining photonic interconnect technology that CEO Matt Murphy said would deliver “the industry’s most complete connectivity platform for AI and cloud customers.” Murphy is targeting 20% market share in custom AI chips and expects roughly 30% year-over-year revenue growth in fiscal 2027.

Marvell’s stock has rallied approximately 50% year to date, with a 30% gain in April alone following the Nvidia partnership and the Google talks. Barclays analyst Tom O’Malley upgraded the stock to overweight and raised his price target from $105 to $150.

Broadcom’s position

The Marvell talks do not appear to have weakened Broadcom’s position. Broadcom commands more than 70% market share in custom AI accelerators. Its AI revenue hit $8.4 billion in its most recent quarter, up 106% year over year, with guidance of $10.7 billion for the following quarter. The company is targeting $100 billion in AI chip revenue by 2027. Broadcom’s shares rose more than 6% on the day it announced the Google extension, and Mizuho analysts estimated the company would record $21 billion in AI revenue attributable to its Google and Anthropic relationships in 2026, rising to $42 billion in 2027. Anthropic will access approximately 3.5 gigawatts of next-generation TPU-based compute starting in 2027.

The broader ASIC market is growing faster than the GPU market. TrendForce projects custom chip sales will increase 45% in 2026, compared with 16% growth in GPU shipments. Counterpoint Research projects Broadcom will hold roughly 60% of the custom AI accelerator market by 2027, with Marvell at approximately 25%. The market itself is expected to reach $118 billion by 2033.

What this means for Google

Google’s chip strategy now involves four partners (Broadcom, MediaTek, Marvell, and TSMC), its own in-house design team, and a product line that spans training, inference, and general-purpose cloud compute. The complexity is deliberate. Every hyperscaler that depends on a single chip supplier, whether Nvidia or anyone else, faces pricing risk, supply risk, and the strategic vulnerability of building a business on someone else’s silicon.

The inference focus of the Marvell discussions reflects a shift in where the money goes. Nvidia’s latest chips remain dominant in training workloads, but inference is where the volume is, and volume is where custom silicon’s cost advantages compound. Google serves billions of AI-augmented search queries, Gemini conversations, and Cloud AI API calls every day. Shaving even a small percentage off the cost per inference across that scale translates into billions of dollars annually, which is precisely what the 2023 “Granite Redux” discussions were about.
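To see why "a small percentage" lands in the billions, a quick back-of-envelope calculation helps — every number here is a guess chosen for illustration, not a real figure:

```python
# Back-of-envelope: small per-inference savings compound at planetary scale.
# All figures are hypothetical placeholders, not Google's real costs.
queries_per_day = 10e9    # calls across Search, Gemini, Cloud APIs (guess)
cost_per_query = 0.002    # dollars per served inference (guess)
savings_fraction = 0.10   # 10% cheaper on purpose-built silicon (guess)

annual_spend = queries_per_day * cost_per_query * 365
annual_savings = annual_spend * savings_fraction
print(f"annual inference spend: ${annual_spend / 1e9:.1f}B")
print(f"saved at 10% per query: ${annual_savings / 1e9:.2f}B")
```

Even at a modest 10% improvement, the hypothetical savings approach a billion dollars a year; push either the volume or the savings fraction higher and the figure clears several billion.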

The talks with Marvell are not yet a deal, and chip development timelines mean any resulting product is likely years from production. But the direction is clear. Google is building a chip supply chain designed to support the most demanding AI inference workloads in the world, and it intends to have more than one partner capable of building the silicon that runs them. For Marvell, a Google inference TPU contract would validate its position as the second-most important custom AI chip designer in the world. For Google, it would mean one more supplier in a market where no company can afford to depend on just one.


Francis Bacon and the Scientific Method

In 1627, a year after the death of the philosopher and statesman Francis Bacon, a short, evocative tale of his was published. The New Atlantis describes how a ship blown off course arrives at an unknown island called Bensalem. At its heart stands Salomon’s House, an institution devoted to “the knowledge of causes, and secret motions of things” and to “the effecting of all things possible.” The novel captured Bacon’s vision of a science built on skepticism and empiricism and his belief that understanding and creating were one and the same pursuit.

No mere scholar’s study filled with curiosities, Salomon’s House had deep-sunk caves for refrigeration, towering structures for astronomy, sound-houses for acoustics, engine-houses, and optical perspective-houses. Its inhabitants bore titles that still sound futuristic: Merchants of Light, Pioneers, Compilers, and Interpreters of Nature.

Engraved title page of The Advancement and Proficience of Learning, with ship and globes. [Public Domain]

Bacon didn’t conjure his story from nothing. Engineers he likely had met or observed firsthand gave him reason to believe such an institution could actually exist. Two in particular stand out: the Dutch engineer Cornelis Drebbel and the French engineer Salomon de Caus. Their bold creations suggested that disciplined making and testing could transform what we know.

Engineers show the way

Drebbel came to England around 1604 at the invitation of King James I. His audacious inventions quickly drew notice. By the early 1620s, he unveiled a contraption that bordered on fantasy: a boat that could dive beneath the Thames and resurface hours later, ferrying passengers from Westminster to Greenwich. Contemporary descriptions mention tubes reaching the surface to supply air, while later accounts claim Drebbel had found chemical means to replenish it. He refined the underwater craft through iterative builds, each informed by test dives and adjustments. His other creations included a perpetual-motion device driven by heat and air-pressure changes, a mercury regulator for egg incubation, and advanced microscopes.

De Caus, who arrived in England around 1611, created ingenious fountains that transformed royal gardens into animated spectacles. Visitors marveled as statues moved and birds sang in water-driven automatons, while hidden pipes and pumps powered elaborate fountains and mythic scenes. In 1615, de Caus published The Reasons for Moving Forces, an illustrated manual on water- and air-driven devices like spouts, hydraulic organs, and mechanical figures. What set him apart was scale and spectacle: He pressed ancient physical principles into the service of courtly theater.

Drebbel’s airtight submersibles and methodical trials echo in the motion studies and environmental chambers of Salomon’s House. De Caus’s melodic fountains and hidden mechanisms parallel its acoustic trials and optical illusions. From such hands-on workshops, Bacon drew the lesson that trustworthy knowledge comes from working within material constraints, through gritty making and testing. On the island of Bensalem, he imagines an entire society organized around it.

Beyond inspiring Bacon’s fiction, figures like Drebbel and de Caus honed his emerging philosophy. In 1620, Bacon published Novum Organum, which critiqued traditional philosophical methods and advocated a fresh way to investigate nature. He pointed to printing, gunpowder, and the compass as practical inventions that had transformed the world far more than abstract debates ever could. Nature reveals its secrets, Bacon argued, when probed through ingenious tools and stringent tests. Novum Organum laid out the rationale, while New Atlantis gave it a vivid setting.

A final legacy to science

Engraved title page of Bacon’s Novum Organum, with ships between two pillars. [Public Domain]

That devotion to inquiry followed Bacon to the roadside one day in March 1626. In a biting late-winter chill, he halted his carriage for an impromptu trial. He bought a hen and helped pack its gutted body with fresh snow to test whether freezing alone could prevent decay. Unfortunately, the cold seeped through Bacon’s own body, and within weeks pneumonia claimed him. Bacon’s life ended with an experiment—and set in motion a larger one. In 1660, a group of London thinkers hailed Bacon as their inspiration in founding the Royal Society. Their motto, Nullius in verba (“take no one’s word for it”), committed them to evidence over authority, and their ambition was nothing less than to create a Salomon’s House for England.

The Royal Society and its successors realized fragments of Bacon’s dream, institutionalizing experimental inquiry. Over the following centuries, though, a distorting story took root: Scientists discover nature’s truths, and the rest is just engineering. Nineteenth-century “men of science” pressed for greater recognition and invented the title of “scientist,” creating a new professional hierarchy. Across the Atlantic, U.S. engineers adopted the rigorous science-based curricula of French and German technical schools and recast engineering as “applied science” to gain institutional legitimacy.

We still call engineering “applied science,” a label that retrofits and reverses history. Alongside it stands “technology,” a catchall word that obscures as much as it describes. And we speak of “development” as if ideas cascade neatly from theory to practice. But creation and comprehension have been partners from the start. Yes, theory does equip engineers with tools to push for further insights. But knowing often follows making, arising from things that someone made work.

Bacon’s imaginary academy offered only fleeting glimpses of its inventions and methods. Yet he had seen the real thing: engineers like Drebbel and de Caus who tested, erred, iterated, and pushed their contraptions past the edge of known theory. From his observations of those muddy, noisy endeavors, Bacon forged his blueprint for organized inquiry. Later generations of scientists would reduce Bacon’s ideas to the clean, orderly “scientific method.” But in the process, they lost sight of its inventive roots.


Trump’s campaign to preempt state AI regulation faces resistance from states and Congress alike

In short: The Trump administration is waging a multi-front campaign to prevent states from regulating AI, using a DOJ litigation task force, Commerce Department evaluations of “burdensome” state laws, and a legislative framework urging Congress to preempt state-level regulation with a “minimally burdensome national standard.” But states have accelerated in the opposite direction – 1,208 AI bills introduced in 2025, 145 enacted – and Congress has rejected preemption twice, including a 99-1 Senate vote to strip an AI moratorium from the One Big Beautiful Bill Act.

Doug Fiefia is a first-term Republican state representative from Herriman, Utah, and a former Google salesperson who managed a team working on the company’s early AI model implementation. Earlier this year, he introduced House Bill 286, the Artificial Intelligence Transparency Act, which would have required frontier AI companies to publish safety and child-protection plans and included whistleblower protections for employees who report safety concerns. It passed a House committee unanimously. Then the White House killed it.

On 12 February, the White House Office of Intergovernmental Affairs sent a letter to Utah Senate Majority Leader Kirk Cullimore Jr. stating: “We are categorically opposed to Utah HB 286 and view it as an unfixable bill that goes against the Administration’s AI Agenda.” Officials held several conversations with Fiefia over the preceding two weeks urging him not to move the bill forward. They did not offer specific changes that could make it acceptable. The bill died in the Senate.

Fiefia’s response was pointed. He said it was especially important to stand up for states’ rights when a fellow Republican was in power, to demonstrate that the principle was not partisan. His bill targeted only “frontier developers,” companies using at least 10^26 floating-point operations to train a model, and carried a $1 million penalty cap. It was, by the standards of AI legislation, modest. The White House treated it as existential.

The federal architecture

The Trump administration’s campaign against state AI regulation has three components, each building on the last.

The first was Executive Order 14365, signed on 11 December 2025, titled “Ensuring a National Policy Framework for Artificial Intelligence.” It created an AI Litigation Task Force within the Department of Justice, operational from 10 January 2026, to challenge state AI laws in federal court on grounds of unconstitutional burden on interstate commerce or federal preemption. It directed the Secretary of Commerce to publish by 11 March a comprehensive evaluation of state AI laws identifying “burdensome” ones, and instructed the FTC to issue a policy statement on when state laws are preempted by the FTC Act. It conditioned access to federal broadband funding on states’ willingness to avoid enacting what the administration considers onerous AI laws. The executive order carved out child safety protections, data centre zoning authority, and state government procurement from preemption.

The second was the Commerce Department’s evaluation, published on the March deadline, which flagged laws in Colorado, California, and New York for particular scrutiny. The evaluation feeds into the DOJ task force, which is expected to begin filing federal legal challenges by summer 2026. Cases are projected to take two to three years to resolve.

The third was a National Policy Framework for AI released on 20 March, containing legislative recommendations organised around seven pillars: child protection, AI infrastructure, intellectual property, censorship and free speech, innovation, workforce preparation, and preemption of state AI laws. The framework states that “Congress should preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations, not fifty discordant ones.” The administration’s position on copyright is that training AI models on copyrighted material “does not violate copyright laws.” On content moderation, it urges Congress to prevent the federal government “from coercing technology providers, including AI providers, to ban, compel, or alter content based on partisan or ideological agendas.”

David Sacks, who served as AI and crypto czar until transferring to a presidential advisory committee role in late March, framed the logic bluntly: “You’ve got 50 different states regulating this in 50 different ways, and it’s creating a patchwork of regulation that’s difficult for our innovators to comply with.” On Colorado’s algorithmic discrimination rules, he said they raised “very serious First Amendment concerns.” On blue states more broadly: “We don’t like seeing blue states trying to insert their woke ideology in AI models, and we really want to try and stop that.”

What the states have done

The states have not been idle while Washington argues about whether they should be allowed to act. In 2023, fewer than 200 AI bills were introduced across state legislatures. In 2024, the number rose to 635 across 45 states, with 99 enacted. In 2025, 1,208 AI-related bills were introduced across all 50 states, the first year every state introduced at least one, and 145 were enacted into law. In the first two months of 2026 alone, 78 chatbot-specific safety bills were filed across 27 states.

California’s Transparency in Frontier Artificial Intelligence Act took effect on 1 January 2026. Texas’s Responsible Artificial Intelligence Governance Act became effective the same day. Colorado’s AI Act, which bans algorithmic discrimination, had its effective date delayed to 30 June 2026. The volume of legislation reflects a bipartisan consensus at the state level that AI regulation cannot wait for a Congress that has repeatedly failed to act.

Utah Governor Spencer Cox, a Republican, has asserted that states should retain the power to regulate AI. “Let’s use this technology to benefit humankind, and let’s regulate it to make sure they don’t destroy humankind,” he said. “I don’t think that’s a contradiction.” He warned that if AI companies “start selling sexualised chatbots to kids in my state, now I have a problem with that,” and announced a “pro-human” AI initiative with $10 million for workforce readiness.

Congress cannot agree

The administration’s framework requires Congressional action to gain legal force. The executive order itself does not preempt, repeal, or invalidate any state AI law. Until courts rule on specific challenges, regulated parties must continue to comply with state regulations.

The most comprehensive federal AI bill is Senator Marsha Blackburn’s TRUMP AMERICA AI Act, a 291-page discussion draft released on 18 March. It would impose a duty of care for high-risk AI systems, require developers to publish training and inference data use records, repeal Section 230 of the Communications Decency Act, and create an AI liability framework enabling the Attorney General, state attorneys general, and private actors to sue AI developers. It would preempt state laws on frontier AI catastrophic risk management and largely preempt state digital replica laws. It remains a discussion draft and has not been formally introduced.

The One Big Beautiful Bill Act originally included a provision for a ten-year moratorium on state AI regulation, later reduced to five years tied to federal broadband funding. The Senate voted 99 to 1 to strip the AI preemption provision, with only Senator Thom Tillis of North Carolina voting to keep it. The bill was signed into law on 4 July without any restrictions on state AI legislation. Congress’s message was unambiguous: it is not ready to take the guardrail question away from the states.

The money behind the fight

The lobbying infrastructure on both sides has scaled to match the stakes. Leading the Future, a super PAC launched in August 2025 by Andreessen Horowitz and OpenAI president Greg Brockman, raised $125 million in 2025 and had $70 million on hand at year end. It supports candidates favouring AI-friendly policies and uniform federal regulation over state-by-state approaches.

On the other side, Anthropic donated $20 million in February 2026 to Public First Action, a bipartisan group that plans to back 30 to 50 candidates from both parties who support AI safeguards. Public First’s broader network of super PACs has pledged $50 million for pro-regulation candidates. The tech industry has reportedly spent more than $1 billion in total on efforts to prevent states from regulating AI.

A bipartisan coalition of 36 state attorneys general sent a letter to Congress opposing AI preemption, arguing that risks including scams, deepfakes, and harmful interactions, especially for children and seniors, make state protections essential. Colorado’s attorney general has committed to challenging the executive order in court.

The precedent that matters

The administration revoked Biden’s Executive Order 14110 within hours of taking office on 20 January 2025, calling it “unnecessarily burdensome.” That order had required developers to conduct pre-release safety evaluations and share findings with the government. Its replacement, signed three days later, was titled “Removing Barriers to American Leadership in Artificial Intelligence.” The trajectory from revoking federal safety requirements to attempting to prevent states from creating their own has a logic: if the federal government will not regulate AI, and it will not allow states to regulate AI, then AI will not be regulated.

The contrast with Europe is instructive. The EU AI Act entered full enforcement in January 2026, creating a single regulatory framework across 27 member states. The US approach is the inverse: no binding federal standard and an active campaign to prevent the states from filling the gap. The result is that AI governance in America is being determined not by legislation or regulation but by litigation, executive orders, and the political leverage of the companies that stand to benefit most from the absence of rules.

Doug Fiefia, the Utah Republican who watched his transparency bill die after a White House letter, is now running for state senate. His opponent, the incumbent who helped kill the bill, reportedly said it “would have driven Utah out of the AI innovation business.” Fiefia co-chairs the AI task force of the Future Caucus alongside Monique Priestley, a Vermont Democrat with 24 years in technology. They represent a generation of state lawmakers who have worked in tech, understand what AI can do, and believe that understanding should inform regulation rather than prevent it. The question is whether the regulatory vacuum they are trying to fill will last long enough to become permanent.


6 Highly-Rated Kitchen Appliances On Amazon That Are Not Ninja Products

We may receive a commission on purchases made from links.

Ninja has a chokehold on the small kitchen appliance category, and for good reason. It’s innovative and delivers quality that consumers trust. Ninja has earned the hype. But it’s not the end-all, be-all brand when it comes to stocking your kitchen, especially if you prefer to shop on Amazon.

Whether you’re air frying dinner for the family or making frozen treats for dessert, there are other highly-rated brands and products on Amazon that can do the job well, and often at a lower price. Appliances that don’t have the Ninja brand stamped on the front can still outperform your expectations. This collection of highly rated kitchen appliances on Amazon that are not Ninja products deserves just as much attention as the Ninja products you likely already know and love. Or in some cases, maybe more. If you’re ready to upgrade your kitchen without defaulting to the usual suspects, let’s shake things up a bit.

CASABREWS CM5418 Espresso Machine

Many people see home espresso machines as unnecessary luxuries. But if you treat your morning coffee as a survival tool, you know that an espresso machine holds just as much value as any other coffeemaker. That extra pop of caffeine in your drink means you can skip the pricey coffee shop on your commute and get the morning buzz you need to get moving.

Ninja’s espresso machine is far from your only option. The Casabrews espresso machine offers form and function in a single package. It can punch out a shot of espresso quickly and cleanly, and even steam and froth your milk on the same device. Stainless steel works well in any kitchen, and a small, narrow footprint means it doesn’t take up as much counter space as your typical coffee machine. Plus, you get to make your drink exactly how you want it, every time. The Casabrews espresso machine is $139.99 on Amazon. It has earned an average 4.4-star rating across more than 7,000 user reviews on Amazon, with users consistently mentioning simplicity, quality, and value for the money. By comparison, SharkNinja’s espresso and coffee barista systems start at $279.99.

Cuisinart Ice Cream Maker

Making ice cream at home feels like more effort than it’s worth until you find a decent ice cream maker. Then it makes perfect sense. One option that makes the process easy and worthwhile is the Cuisinart Ice Cream Maker. It does most of the heavy lifting to make limited-ingredient ice cream, sorbet, and yogurt, and making these treats at home means you control exactly what goes into them, resulting in healthier options.

The Cuisinart ice cream maker has earned an average 4.6-star rating across more than 18,000 user reviews. Cuisinart says it can turn raw ingredients into a ready-to-eat dessert in under 30 minutes. The container is big enough to make up to two quarts at a time. Ninja offers a similar appliance, called the Creami. It compares to the Cuisinart in size and function, but Ninja Creami ice cream makers start at $199.99, almost $100 more than the Cuisinart.

BKPPM Slushie Maker

A slushie maker sounds like one of those cool kitchen gadgets you’re excited to buy, use a few times, and then forget you have. That may be true for some slushie machines, but the ones that make the process easy and delicious are less likely to become cabinet clutter. The good thing about the BKPPM Slushie Maker on Amazon is that you don’t need special mixes or a long list of steps to use it. You can add your favorite juice, wine, or even soda, then let the machine work its magic.

The Ninja Slushi offers a similar experience. It comes with multiple preset modes for one-touch operation and can make a variety of drinks, including slushies, milkshakes, frappes, and spiked drinks. Neither machine requires ice, and both promote dishwasher-safe parts for easy cleanup. One of the most notable differences is price: The Ninja version starts at $349.99 and goes up from there, while the BKPPM Slushie Maker on Amazon retails for $269.99. The BKPPM Slushie Maker has also earned an average 4.4-star rating over more than 1,000 customer reviews.

Cosori Air Fryer

Air fryers get a lot of attention from home chefs. There’s a good reason for that: they’re among the most versatile and most recommended small kitchen appliances you can get. Air fryers let you get crispy, fried-style food without drenching it in oil first. There are tons of air fryers on the market right now, including Ninja’s popular Crispi line of glass air fryers. But if you’re not looking to shell out $179.99 or more for one, you might want to check out the Cosori Air Fryer on Amazon.

The Cosori retails for $119.99 (regular price) and has an impressive 4.8-star rating over more than 15,000 reviews. Customers consistently mention the cooking performance, ease of cleaning, quality, and noise level of this air fryer. Ultimately, a good air fryer should cook your food evenly, keep it crisp, and do both quickly and easily. The Cosori checks all of these boxes, according to its users. It can reach temperatures of up to 450 degrees Fahrenheit and runs at a fairly quiet 53 decibels. The basket types are the biggest difference (along with price), but if you’re not picky about what your food actually cooks in, the Cosori might make a great alternative.

Nutribullet Blender System

The only thing better than a good blender is a whole blending system. While a blender covers the basics, a full blending system changes how often you actually use it. A single powerful base comes with multiple blending blades and attachments, including a drink pitcher, food processor, and single-serve containers for on-the-go drinks or small batches of soups. You need different containers and blades for different jobs, and a solid kitchen system can do them all.

Ninja offers a line of kitchen blending systems, but so do plenty of other kitchen brands. One comparable example is the Nutribullet Triple Prep System on Amazon. It includes a mix of full-size and single-serve containers, along with a food processor container and various accessories. The smart base recognizes each container when you attach it, and you can choose from several pre-programmed settings to get ideal blends for specific ingredients. The Nutribullet system has garnered a 4.5-star rating across more than 700 reviews. Pricewise, the Nutribullet system retails on Amazon for $219.99, which is also the starting price for Ninja’s lineup.

Hamilton Beach Countertop Grill

Getting a good sear indoors usually comes with tradeoffs. Indoor countertop grills can be a bit smoky, heat might be uneven, and results don’t often compare to those of a real grill. Still, countertop grills are becoming more popular since they don’t require a dedicated space outdoors and don’t take up much room to begin with. In classic Ninja style, the brand offers several models to choose from, starting at $149.99. But one option from Hamilton Beach can help you save money without compromising on quality.

Hamilton Beach’s Electric Indoor Searing Grill is compact and simple to use. There’s one temperature control switch, a drip tray, and not much else. Since it’s made for indoors, you can enjoy your favorite grilled foods year-round in any type of weather. Even better, the Hamilton Beach option is listed at $98.57 on Amazon, significantly less than Ninja’s cheapest indoor grill. More than 31,000 customers have rated the Hamilton Beach indoor grill, resulting in a 4.5-star rating. Users say it’s easy to clean, and its performance compares to that of an outdoor grill.

How We Chose These Top-Rated Appliances on Amazon

The title gives away most of the requirements. We’re looking for items that fall under the kitchen appliance category and are available for sale on Amazon. Also, they have to be from a brand other than Ninja, which also includes the Shark name. We focused our search on the kitchen appliances that Ninja offers, then found a comparable brand and product that users seem to love. As the title suggests, they need to be highly rated. That means hundreds of four-star and five-star reviews with similar themes in quality, value, function, and usefulness. In other words, are most people happy with their purchase?

Only kitchen appliances that meet all of the above made it to the list. There are tons of great kitchen appliances out there that can comfortably compete with Ninja. This list focuses on just six of those options.


Tech

Nevada Police Can Now Track Cellphones Without a Warrant

“Nevada quietly signed an agreement earlier this year with a company that collects location data from cellphones, allowing police to track a device virtually in real time,” reports the Associated Press. “All without a warrant.”

The software from Fog Data Science, adopted this January in Nevada through a Department of Public Safety contract, pulls information from smartphone apps in order to let state investigators identify the location of mobile devices. The state is allowed more than 250 queries a month using the tool, which allows officers to track a device’s location over long stretches of time and enables them to see what Fog calls “patterns of life,” according to company documents from 2022. It can help them deduce where and when people work and live, with whom they associate and what places they visit, according to privacy experts… Traditionally, police must obtain a warrant from a judge to access cellphone location information — a process that can take days or weeks. And while cellphone users may be aware that they are sharing their location through apps such as Google Maps, critics say few are aware that such information can make its way to police…

Other agencies in Nevada have been known to use technology similar to Fog. In 2013, Las Vegas Metropolitan Police Department acquired something known as a cell-site simulator that mimics cellphone towers and can sweep up signals from entire areas to track individuals, with some models capable of intercepting texts and calls. Police have not released detailed information about the technology since then.

“Police in other states have said the technology (and its low price tag) has helped expand investigatory capacity,” the article adds.

But it also points out that Fog Data Science has a web page letting individuals opt out of all their data sets.


Tech

I tested the Ultion Nuki 2025: the most well-rounded smart lock in the UK for ultimate peace of mind


Ultion Nuki 2025: one-minute review

The Ultion Nuki 2025 is what happens when a smart lock starts behaving like a complete security product.

At a glance, it’s doing the same job as 2023’s Ultion Nuki Plus: pairing Brisant Secure’s Ultion 3 Star PLUS cylinder and UK-specific door furniture with Nuki’s Smart Lock Pro and platform. In practice, though, this version looks more cohesive, feels quicker to respond and is better aligned with how people actually use a front door every day.

Ultion Nuki smart lock installed on exterior of door

(Image credit: Future)

Just as importantly, there are sensible fallbacks everywhere. You can still use a physical key, operate it manually from inside, and include a biometric keypad or keyfob if you want different ways in.


Tech

Equinix’s Peter Lantry on powering Ireland sustainably

The latest episode of The Leaders’ Room podcast season four features Peter Lantry, managing director of Equinix Ireland. This series is created in partnership with IDA Ireland.

Once again in season four of The Leaders’ Room podcast, we get to know the leaders of some of the most influential multinationals in tech, life sciences and innovation, as well as getting insights into their leadership styles and the high-tech trends they see coming down the line.

In this latest episode, we speak to Peter Lantry, managing director of Equinix Ireland, about the intersection of energy, digital infrastructure and sustainability – and about what Ireland’s digital future could look like if we get the balance right. It’s a wide-ranging and eye-opening conversation about the global data centre giant that sits at the heart of Ireland’s digital ecosystem, and about a man whose career trajectory is decidedly well-matched to the task at hand.

Equinix is the world’s leading co-location retail data centre provider – something Lantry describes, cleverly, as akin to being a “digital airport”, connecting networks, cloud platforms, content providers and enterprises across more than 280 data centres in 35 countries. It works with major players from Nvidia and AWS to Google, as well as with smaller retail clients.

While Equinix has been in Ireland for 10 years, many of the data centres it now owns, such as the former Telecity sites, have been operating since 1998. The Irish operations have grown significantly since, most recently with the acquisition of two BT data centres and a new Blanchardstown facility, DB7X, now under construction.

What strikes you listening to Lantry is the sheer scale of what Equinix does – more than half a million direct connections between businesses globally, and more than 90pc of all internet traffic in the world flowing through their data centres. The subsea cables that connect Ireland to the rest of the world terminate in Dublin, most of them into an Equinix data centre.

The energy and sustainability conversation is where this episode really catches the imagination. Lantry and his team are doing genuinely pioneering things at Equinix Ireland – hydrogen fuel cells already operating at one of their Dublin sites, solar canopies going in, and an innovative grid solution planned working with the IDA, EirGrid and ESB Networks.

Lantry believes Ireland has a real opportunity, with its ambition to have 22GW of renewable power connecting to the grid by 2030. The question, he says, isn’t whether Ireland can become a leading sustainability hub, but whether we have the collective will to all work together and make it happen.

His vision of data centres that can flex dynamically with the grid – stepping in to support it when needed, rather than adding to its burden – is a compelling one. If we export our data and digital services rather than our electricity, he argues, we could generate perhaps 10 times the value for the Irish economy, so it is crucial, he believes, that we get our digital infrastructure right.

Lantry’s career trajectory means it’s easy to see why Equinix came calling. He started as a civil and structural engineer with Arup, moved into management science and then consultancy with PwC and IBM, and went on to 17 formative years with EirGrid, where he connected data centre customers and wind farms and worked on the design and implementation of the Irish single electricity market. This was followed by a spell as managing director of Hitachi Energy, where he grew its global data centre business from €350m to €750m in a single year.

It is a CV that makes you understand why his Equinix colleagues remarked, with some amusement, that he was “fairly unique” when the energy crunch hit. He brings something genuinely rare to the role – a deep, practical understanding of both utilities and digital infrastructure, earned over several decades.

On leadership, Lantry talks about Level 5 leadership, referencing Jim Collins’ book ‘Good to Great’ – leading by example, listening deeply, supporting others and removing the barriers that stop teams from delivering. What comes through clearly is his sense of purpose: the utility-like nature of what Equinix does, connecting everyone and everything in a sustainable way, gives the whole team something genuinely meaningful to rally behind, he says.

I found his emphasis on being fully present in every conversation particularly striking – that good leadership means making the people you are talking with feel truly heard and understood. He describes himself as something of a translator, someone who has spent a career connecting the dots between brilliant people with different expertise and different drivers. Perhaps that instinct was shaped early, he says. Lantry grew up moving between countries with his parents – the Netherlands, England, France, Colombia, and back to Ireland – learning to navigate different cultures and ways of engaging. Whatever its roots, it is clearly central to how he leads today.

We’re grateful to all our interviewees again this season, for taking the time out of busy schedules to come into the studio and share their insights and their intelligence with us. And a big thanks as ever to our partners IDA Ireland who make this series possible.

The Leaders’ Room podcast is released fortnightly and can be found by searching for ‘The Leaders’ Room’ wherever you get your podcasts. For those who prefer their audio with visuals, filmed versions of the podcast interviews are all available here on SiliconRepublic.com.

Check out The Leaders’ Room podcast for in-depth insights from some of Ireland’s top leaders. Listen now on Spotify, on Apple or wherever you get your podcasts.


Tech

Slack chats and internal data from failed startups are finding a second life in AI training

What was once considered operational residue is now being packaged, scrubbed, and sold to AI developers seeking richer training environments. The shift reflects a broader evolution in how advanced AI models are built. Early large language models drew heavily from news archives, Wikipedia, and forums. Now, newer systems, particularly agentic…

Copyright © 2025