
Trump Gets $10 Billion Kickback To The Treasury For Offloading TikTok To His Billionaire Buddies


from the bribes-for-the-king dept

We’ve discussed at length how Trump’s “fix” for TikTok’s problems basically involved forcing the sale of the platform to his greedy billionaire buddies (with the help of pathetic Democrats). The deal fixed none of the real issues Trumpland pretended to be concerned about (national security, privacy, propaganda), and China still maintains a significant ownership stake.

It was one of the more embarrassing examples of U.S. cronyism and corruption in recent memory.

But wait, as they say, there’s more!

As the Wall Street Journal notes (paywalled), the “Trump administration” is set to receive a $10 billion fee from investors for facilitating the deal. The new owners, who include Trump’s friend Larry Ellison, private equity giant Silver Lake, and MGX (controlled by the UAE), are funneling the payments to the “Treasury Department”:

“They and other backers paid the Treasury Department about $2.5 billion when the deal closed in January and are set to make several additional payments until hitting the $10 billion total, the people said.”

We, of course, don’t actually know where that money is going or what it will be used for. You can confidently assume it will eventually wind its way into Trump’s pocket, since the entirety of U.S. democratic oversight has been wholly corrupted by these whiny zealots, who are busy stripping the country for parts and selling the scrap off the back loading dock.

Rupert Murdoch’s Wall Street Journal goes to comical lengths to normalize this bribe, though they do at least try to express how “unprecedented” this sort of thing is by citing an unnamed, ambiguous historian:

“The $10 billion payment would be nearly unprecedented for a government helping arrange a transaction, historians have said. Vice President JD Vance previously said the new TikTok entity running the U.S. operations is valued at about $14 billion in the deal, which some tech analysts have said dramatically undervalues the company.”

The outlet goes on to note that the $10 billion fee absolutely towers over any remotely comparable historical precedent:

“Investment bankers advising on a typical deal receive fees of less than 1% of the transaction value, and the percentage generally gets smaller as the deal size increases. Bank of America is in line to make some $130 million for advising railroad operator Norfolk Southern on its $71.5 billion sale to Union Pacific, one of the largest fees on record for a single bank on a deal.

“Administration officials have said the fee is justified given Trump’s role in saving TikTok in the U.S. and navigating negotiations with China to get the deal done while addressing the security concerns of lawmakers.”

The Wall Street Journal can’t be bothered to note that the deal fixed absolutely none of the purported concerns raised about TikTok. China still has a major ownership stake, and the new owners seem every bit as hostile to democracy and free expression as the worst Chinese autocrat (they’re just not honest enough with themselves or you to admit it yet).

All of these owners are just as likely to engage in privacy and surveillance violations as the Chinese (who, again, despite a lot of pretense, did not have full, direct control over the app). In fact, you could argue that the previous TikTok was likely to be better on all of these fronts, because its owners were at least trying to adhere to ethical standards in order to keep operating in the country.

TikTok’s new American owners are very up front about their plans to demolish the entirety of regulatory autonomy, corporate oversight, and consumer protection, leaving them with absolute freedom to pursue whatever unethical bullshit they can dream up. I suspect they’ll try to leave things alone for a year (to avoid a mass exodus of young people) before their goals become… unsubtle.

Again, Trump, with Democratic help, managed to steal the world’s most popular short-form video app and offload it to his radical billionaire friends under the pretense that he was protecting national security and U.S. consumer privacy. Even before you get to this $10 billion bribe, it’s easily one of the ugliest examples of corruption and U.S. tech policy dysfunction we’ve ever seen.

I like to convince myself history will not be kind.

Filed Under: autocrats, billionaires, corruption, donald trump, larry ellison, national security, privacy, propaganda, social media, video

Companies: bytedance, mgx, oracle, silver lake, tiktok


IEEE Young Professionals Tackle Skills Gap in Tech


America’s Talent Strategy: Building the Workforce for the Golden Age, a report published last year by the U.S. Departments of Commerce, Education, and Labor, identified a significant engineering and skills gap. The 27-page report concluded that the shortage of talent in essential areas—including advanced manufacturing, artificial intelligence, cloud computing, and cybersecurity—poses significant risks to U.S. economic and technological leadership.

To help attract talent in those fields, the Labor Department last month introduced incentives for apprenticeships, including a US $145 million “pay for performance” grant program. The funding aims to develop registered apprenticeships in high-demand fields including artificial intelligence and information technology.

Members of IEEE Young Professionals responded to the urgent national need for targeted workforce development, led by Alok Tibrewala, an IEEE senior member and cochair of the IEEE North Jersey Section’s Young Professionals group.

“As a software engineer, this impending shortage concerns me because I believe that the U.S. AI and cybersecurity skills gap would show up first in the early-career pipeline,” Tibrewala says. “Students will be entering the U.S. workforce without enough hands-on experience building secure AI-enabled enterprise and cloud systems, and this gap will persist without practical, mentor-led training before graduation.”

Tibrewala led a strategic planning session with representatives from the New Jersey Institute of Technology, IEEE Member and Geographic Activities, and IEEE Young Professionals to discuss holding an event that would provide practical, industry-relevant training by experts and IEEE leaders.

“I was able to establish a partnership with NJIT, recruit speakers, design the event’s agenda, and promote the event to ensure it was aligned with the strategy outlined in the workforce report,” he says. “This effort aligns with broader U.S. workforce development priorities focused on industry-driven skills training in critical technology areas.”

The IEEE Buildathon event was held on 1 November at NJIT’s Newark campus. More than 30 students and early-career engineers heard from 11 speakers. Through interactive workshops, live demonstrations, and networking opportunities, they left with practical, employer-aligned skills and clearer career pathways for AI-era skills-building.

Tibrewala chaired the event and also serves as chair of the IEEE Buildathon program.

Session takeaways

Region 1 Director Bala S. Prasanna, a life senior member, gave the keynote address. He emphasized the need for universities, industry practitioners, and IEEE volunteer leaders to collaborate on programs to enhance technical skills.

IEEE Member Kalyani Matey, cochair of the IEEE North Jersey Section’s Young Professionals, conducted a workshop on how to build one’s personal brand and a responsive network. Participants received valuable insights about résumé building, effective communication strategies, and enhancing their visibility and employability.

“Over time, this kind of structured, employer-aligned training will help increase confidence, employability, and technical readiness across the country. With sustained support, programs like the IEEE Buildathon can become a practical bridge from education to industry in the AI era.” —Alok Tibrewala

Tibrewala led the Unlocking AI’s Potential: Solving Big Challenges With Smart Data and IEEE DataPort session. The web-based DataPort platform allows researchers to store, share, access, and manage their research datasets in a single, trusted location. He discussed needed skills including AI literacy, strong data handling and dataset stewardship, and turning data into actionable insights.

Chaitali Ladikkar, a senior software engineer and IEEE member, delivered the Brains Behind the Game seminar, highlighting the transformative impact AI is having on gaming and game engine technologies. She explained how AI is reshaping game development, and how machine learning is being used for animation, faster content generation, and testing of new titles. Her seminar received enthusiastic feedback from participants.

The Building Better Business Relationships DiSC workshop provided insights into enhancing professional relationships and communication within an engineering workforce. DiSC is a behavioral self-assessment used to understand an individual’s communication style and to adapt to others.

Participant experience and testimonials

The event received high praise from participants for its practical and industry-relevant content, according to Tibrewala.

“This training significantly enhanced my understanding and readiness for industry roles, filling gaps my regular academic coursework did not fully address,” said Humna Sultan, an IEEE student member who is a senior studying computer science at Stevens Institute of Technology, in Hoboken, N.J.

“The Buildathon was structured around real engineering challenge scenarios that deepened my understanding of AI and cloud technologies,” said Carlos Figueredo, an IEEE graduate student member who is studying data science at the University of Michigan, in Ann Arbor. “It boosted my confidence and practical skills essential for the industry.”

Bavani Karthikeyan Janaki said, “It was incredible to see how technology and sustainability came together to drive real-world impact, thanks to the dedicated efforts of the organizers, including Tibrewala, Matey, and the IEEE North Jersey Young Professionals.” Janaki is pursuing a master’s degree in computer and information science at Long Island University, in New York.

Funding and collaborative efforts

The Buildathon was made possible through grants from the IEEE Young Professionals group and funding from the IEEE North Jersey Section and IEEE Member and Geographic Activities. Their support shows how IEEE’s professional organizations can collaborate to address workforce needs by supporting the delivery of technical sessions that strengthen early-career pipelines.

Future plans and a call to action

Building on the event’s success, Tibrewala and Matey plan to make the IEEE Buildathon an ongoing initiative. They are exploring ways to expand it to additional university campuses and IEEE communities.

Tibrewala says they plan to refine the format based on participant feedback and lessons learned. To support consistent quality, he and Matey say, they are working on a playbook for organizers that will include a repeatable agenda, a workshop template, speaker guidelines, and post-event feedback forms.

The approach depends on continued coordination among host universities, local IEEE sections, and Young Professional volunteers, Tibrewala says.

“Enabling other groups to run similar events,” he says, “can help more students and early-career engineers gain practical exposure to AI, data, cloud, cybersecurity, and other key emerging technologies in a structured setting.

“Efforts like this help translate national workforce priorities into real training that students and early-career engineers can apply immediately to their projects. This also helps close the gap between classroom learning and the realities of building secure, reliable systems in production environments. Over time, this kind of structured, employer-aligned training will help increase confidence, employability, and technical readiness across the country.

“With sustained support, programs like the IEEE Buildathon can become a practical bridge from education to industry in the AI era.”


GridEx Highlights Drone Risks to Power Grids


In the fictional nation of Beryllia, the 2026 World Chalice Games were set to begin as the country faced an unrelenting heat wave. The grid, already under strain from the circumstances, was dealt a further blow when a coordinated campaign of vandalism, drone, and ballistic attacks by an adversary, Crimsonia, crippled its physical infrastructure.

This scenario, inspired by the upcoming 2026 World Cup and the 2028 Olympic Games in Los Angeles, was an exercise in studying how utilities can prevent and mitigate, among other dangers, physical attacks on power grids. Called GridEx, the exercise was hosted by the Electricity Information Sharing and Analysis Center (E-ISAC) from 18 to 20 November 2025. GridEx has been held every two years since 2011.

“We know that threat actors look to exploit certain circumstances,” says Michael Ball, CEO of E-ISAC, which is a program of the North American Electric Reliability Corporation (NERC), about designing the Beryllia scenario. “The Chalice Games became a good example of how we could build a scenario around a threat actor.”

Physical attacks on the grid are rising in the U.S., and GridEx attendance was up in November as utilities grapple with how to prevent and mitigate them. Participation in the exercise was at its highest level since 2019, according to a report released on 2 March. Given the number of organizations present, GridEx estimates that more than 28,000 individual players participated, including utility workers and government partners, an all-time high for the exercise.

Rising Physical Threats to Power Grids

The U.S. and Canadian grids face growing security issues from physical threats, including vandalism, assaults on utility workers, property intrusions, and theft of components like copper wiring. NERC’s 2025 E-ISAC end-of-year report cites more than 3,500 physical security breaches that calendar year, about 3 percent of which disrupted electricity. That’s up from 2,800 events cited in the 2023 report (3 percent of those also resulted in electricity disruptions). And while a number of recent high-profile attacks have occurred in the U.S., physical attacks on the grid are happening worldwide.

“They’re not uniquely a U.S. thing,” says Danielle Russo, executive director of the Center for Grid Security at Securing America’s Future Energy, a nonpartisan organization focused on advancing national energy security. Russo says that while attacks are common in places like Ukraine, they’re not limited to wartime scenarios. “Other countries that are not experiencing direct conflict are experiencing increasing amounts of physical attacks on their energy infrastructure,” she says. Take Germany for example: On 3 January, an arson attack by left-wing activists in Berlin caused a five-day blackout impacting 45,000 households. That comes after a suspected arson attack on two pylons in September 2025 left 50,000 Berlin households without power. Some German officials cite domestic extremism and fears of Russian sabotage in recent years as reasons for heightened security concerns over critical infrastructure.

The uptick in attacks on the U.S. grid has been punctuated by a number of high-profile incidents in recent years. In December 2025, an engineer in San Jose, California, was sentenced to 10 years in prison for bombing electric transformers in 2022 and 2023. A Tennessee man was arrested in November 2024 for attempting to attack a Nashville substation using a drone armed with explosives. And in 2023, a neo-Nazi leader was among two people arrested in a plot to attack five substations around Baltimore with firearms, part of an increasing trend of white supremacist groups planning attacks on the U.S. energy sector.

“Since [E-ISAC] started publishing data back in 2016, we’ve seen a large and consistent increase in the number of reported physical security incidents per year,” says Michael Coe, the vice president of physical and cyber security programs at the American Public Power Association, a trade group that works with E-ISAC to plan GridEx. While not all data is publicly available, Coe says there’s been a “tenfold” increase over the past decade in the number of reported physical attacks on the grid.

Drone Attacks: A Growing Security Challenge

During the fictional World Chalice Games scenario, drone attacks destroyed Beryllia’s substation equipment, highlighting a threat that’s gained traction as more drones enter the airspace.

“The question we get all the time is, how do you tell if it’s a bad actor, or if it’s a 12-year-old kid that got the drone for their birthday?” says Erika Willis, the program manager for the substations team at the Electric Power Research Institute (EPRI).

One strategy to track drones and alert utilities to potential threats is called sensor fusion. The system includes a pan-tilt-zoom camera capable of 360-degree motion, mounted on a tripod or pole alongside four radars. The radars combine with the camera in a dual system that can track drones even if they’re obstructed from view, says Willis. For instance, if a nearby drone flies behind a tree, hidden from the camera, the radars will still pick it up. The technology is currently being tested at EPRI’s labs in Charlotte, North Carolina, and Lenox, Massachusetts.
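The fusion logic can be pictured with a short sketch. The following Python is purely illustrative, with a made-up track structure and state names rather than EPRI’s actual system; it captures the core idea that the radar keeps a confirmed track alive whenever the camera loses line of sight.

```python
# Illustrative radar/camera fusion: the radar carries the track when the
# camera is occluded. Names and logic are hypothetical, not EPRI's system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RadarHit:
    x: float                 # meters east of the sensor mast
    y: float                 # meters north of the sensor mast
    confirmed: bool = False  # True once the camera has verified the target

def fuse(radar_hit: Optional[RadarHit], camera_sees_target: bool) -> str:
    """Combine both modalities into a single track state."""
    if radar_hit is None:
        return "no target"
    if camera_sees_target:
        radar_hit.confirmed = True
        return "tracking (camera + radar)"
    # Camera occluded (e.g., drone behind a tree): radar keeps the track.
    if radar_hit.confirmed:
        return "tracking (radar only, camera occluded)"
    return "unverified radar contact"

print(fuse(RadarHit(120.0, 45.0, confirmed=True), camera_sees_target=False))
# -> tracking (radar only, camera occluded)
```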

EPRI is also exploring how robotics and AI can improve security systems, Willis says. One approach involves integrating AI analysis into robotic technology already surveilling substation perimeters. Using AI can improve detection of break-ins and damage to fencing around substations, Willis says. “As opposed to a human having to go through 200 images of a fence, you can have the AI overlays do some of those algorithms…If the robot has done the inspection of the substation 100 times, it can then relay to you that there’s an anomaly,” Willis says.

A fiber sensing unit, roughly the size and shape of a filing cabinet. Prisma Photonics deploys fiber sensing technology that uses reflected optical signals to detect perturbations from vehicles and other sources near underground fiber cable. (Image: Prisma Photonics)

Already, a number of utilities in the U.S. are using AI integrations in their security and monitoring processes. That’s thanks in part to Tel Aviv-based Prisma Photonics, a software company that launched in 2017 and has since deployed its fiber sensing technology across thousands of miles of transmission infrastructure in the U.S., Canada, Europe, and Israel. A file-cabinet-sized unit plugs into a substation and sends light pulses down existing fiber optic cables 30 miles in each direction. As the pulses travel down the cables, a tiny fraction of the light is reflected back to the substation unit. An AI model processes the results and can classify events based on patterns in the optical signal caused by perturbations around the fiber cable.
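The classification step can be sketched in a few lines. The features, thresholds, and labels below are invented for illustration; Prisma Photonics’ production model is proprietary and trained on real optical signatures.

```python
# Toy classifier for one window of backscatter amplitude at a fiber location.
# Features and labels are invented; the real model is far more sophisticated.
import numpy as np

def classify_segment(signal: np.ndarray, sample_rate_hz: float) -> str:
    rms = float(np.sqrt(np.mean(signal ** 2)))         # perturbation strength
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sample_rate_hz)
    peak_hz = float(freqs[spectrum.argmax()])          # dominant frequency
    if rms < 0.01:
        return "quiet"
    if peak_hz < 30:                                   # slow, heavy rumble
        return "vehicle nearby"
    return "impact or digging"

t = np.linspace(0, 1, 1000)
print(classify_segment(0.3 * np.sin(2 * np.pi * 12 * t), 1000.0))
# -> vehicle nearby
```

In Prisma’s deployment, the label set then evolves with customer feedback: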

“If we identify an event that we don’t have a classification for, and we get a feedback from a customer saying, ‘oh, this was a car crash,’ then we can classify that in the model to say this is actually what happened,” says Tiffany Menhorn, Prisma Photonics’ vice president of North America.

As preparations get underway for the ninth GridEx in 2027, Ball says participation in the exercises alone isn’t enough to bolster grid security. Instead, he wants utilities to take what they learn from the training and apply it in their own operations. “It’s the action of doing it, versus our statistic of saying, ‘here’s what our growth was.’ That growth should relate to the readiness and capability of the industry.”


Nvidia’s DGX Station is a desktop supercomputer that runs trillion-parameter AI models without the cloud


Nvidia on Monday unveiled a deskside supercomputer powerful enough to run AI models with up to one trillion parameters — roughly the scale of GPT-4 — without touching the cloud. The machine, called the DGX Station, packs 748 gigabytes of coherent memory and 20 petaflops of compute into a box that sits next to a monitor, and it may be the most significant personal computing product since the original Mac Pro convinced creative professionals to abandon workstations.

The announcement, made at the company’s annual GTC conference in San Jose, lands at a moment when the AI industry is grappling with a fundamental tension: the most powerful models in the world require enormous data center infrastructure, but the developers and enterprises building on those models increasingly want to keep their data, their agents, and their intellectual property local. The DGX Station is Nvidia’s answer — a six-figure machine that collapses the distance between AI’s frontier and a single engineer’s desk.

What 20 petaflops on your desktop actually means

The DGX Station is built around the new GB300 Grace Blackwell Ultra Desktop Superchip, which fuses a 72-core Grace CPU and a Blackwell Ultra GPU through Nvidia’s NVLink-C2C interconnect. That link provides 1.8 terabytes per second of coherent bandwidth between the two processors — seven times the speed of PCIe Gen 6 — which means the CPU and GPU share a single, seamless pool of memory without the bottlenecks that typically cripple desktop AI work.

Twenty petaflops — 20 quadrillion operations per second — would have ranked this machine among the world’s top supercomputers less than a decade ago. The Summit system at Oak Ridge National Laboratory, which held the global No. 1 spot in 2018, delivered roughly ten times that performance but occupied a room the size of two basketball courts. Nvidia is packaging a meaningful fraction of that capability into something that plugs into a wall outlet.

The 748 GB of unified memory is arguably the more important number. Trillion-parameter models are enormous neural networks that must be loaded entirely into memory to run. Without sufficient memory, no amount of processing speed matters — the model simply won’t fit. The DGX Station clears that bar, and it does so with a coherent architecture that eliminates the latency penalties of shuttling data between CPU and GPU memory pools.
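A quick back-of-envelope check, assuming weights-only storage at common quantization precisions (an illustration, not an Nvidia specification), shows why 748 GB is the make-or-break number.

```python
# Does a trillion-parameter model fit in 748 GB of unified memory?
# Weights only -- the KV cache and activations add more on top.
PARAMS = 1_000_000_000_000   # one trillion parameters
CAPACITY_GB = 748
GB = 1e9

for fmt, bytes_per_param in [("FP16", 2.0), ("FP8", 1.0), ("FP4", 0.5)]:
    weights_gb = PARAMS * bytes_per_param / GB
    verdict = "fits" if weights_gb <= CAPACITY_GB else "does not fit"
    print(f"{fmt}: {weights_gb:,.0f} GB of weights -> {verdict}")
# FP16: 2,000 GB of weights -> does not fit
# FP8: 1,000 GB of weights -> does not fit
# FP4: 500 GB of weights -> fits
```

In other words, a trillion-parameter model only clears the 748 GB bar once its weights are quantized to low precision, which is the regime Blackwell-class hardware is built to accelerate.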

Always-on agents need always-on hardware

Nvidia designed the DGX Station explicitly for what it sees as the next phase of AI: autonomous agents that reason, plan, write code, and execute tasks continuously — not just systems that respond to prompts. Every major announcement at GTC 2026 reinforced this “agentic AI” thesis, and the DGX Station is where those agents are meant to be built and run.

The key pairing is NemoClaw, a new open-source stack that Nvidia also announced Monday. NemoClaw bundles Nvidia’s Nemotron open models with OpenShell, a secure runtime that enforces policy-based security, network, and privacy guardrails for autonomous agents. A single command installs the entire stack. Jensen Huang, Nvidia’s founder and CEO, framed the combination in unmistakable terms, calling OpenClaw — the broader agent platform NemoClaw supports — “the operating system for personal AI” and comparing it directly to Mac and Windows.

The argument is straightforward: cloud instances spin up and down on demand, but always-on agents need persistent compute, persistent memory, and persistent state. A machine under your desk, running 24/7 with local data and local models inside a security sandbox, is architecturally better suited to that workload than a rented GPU in someone else’s data center. The DGX Station can operate as a personal supercomputer for a solo developer or as a shared compute node for teams, and it supports air-gapped configurations for classified or regulated environments where data can never leave the building.

From desk prototype to data center production in zero rewrites

One of the cleverest aspects of the DGX Station’s design is what Nvidia calls architectural continuity. Applications built on the machine migrate seamlessly to the company’s GB300 NVL72 data center systems — 72-GPU racks designed for hyperscale AI factories — without rearchitecting a single line of code. Nvidia is selling a vertically integrated pipeline: prototype at your desk, then scale to the cloud when you’re ready.

This matters because the biggest hidden cost in AI development today isn’t compute — it’s the engineering time lost to rewriting code for different hardware configurations. A model fine-tuned on a local GPU cluster often requires substantial rework to deploy on cloud infrastructure with different memory architectures, networking stacks, and software dependencies. The DGX Station eliminates that friction by running the same NVIDIA AI software stack that powers every tier of Nvidia’s infrastructure, from the DGX Spark to the Vera Rubin NVL72.

Nvidia also expanded the DGX Spark, the Station’s smaller sibling, with new clustering support. Up to four Spark units can now operate as a unified system with near-linear performance scaling — a “desktop data center” that fits on a conference table without rack infrastructure or an IT ticket. For teams that need to fine-tune mid-size models or develop smaller-scale agents, clustered Sparks offer a credible departmental AI platform at a fraction of the Station’s cost.

The early buyers reveal where the market is heading

The initial customer roster for DGX Station maps the industries where AI is transitioning fastest from experiment to daily operating tool. Snowflake is using the system to locally test its open-source Arctic training framework. EPRI, the Electric Power Research Institute, is advancing AI-powered weather forecasting to strengthen electrical grid reliability. Medivis is integrating vision language models into surgical workflows. Microsoft Research and Cornell have deployed the systems for hands-on AI training at scale.

Systems are available to order now and will ship in the coming months from ASUS, Dell Technologies, GIGABYTE, MSI, and Supermicro, with HP joining later in the year. Nvidia hasn’t disclosed pricing, but the GB300 components and the company’s historical DGX pricing suggest a six-figure investment — expensive by workstation standards, but remarkably cheap compared to the cloud GPU costs of running trillion-parameter inference at scale.

The list of supported models underscores how open the AI ecosystem has become: developers can run and fine-tune OpenAI’s gpt-oss-120b, Google Gemma 3, Qwen3, Mistral Large 3, DeepSeek V3.2, and Nvidia’s own Nemotron models, among others. The DGX Station is model-agnostic by design — a hardware Switzerland in an industry where model allegiances shift quarterly.

Nvidia’s real strategy: own every layer of the AI stack, from orbit to office

The DGX Station didn’t arrive in a vacuum. It was one piece of a sweeping set of GTC 2026 announcements that collectively map Nvidia’s ambition to supply AI compute at literally every physical scale.

At the top, Nvidia unveiled the Vera Rubin platform — seven new chips in full production — anchored by the Vera Rubin NVL72 rack, which integrates 72 next-generation Rubin GPUs and claims up to 10x higher inference throughput per watt compared to the current Blackwell generation. The Vera CPU, with 88 custom Olympus cores, targets the orchestration layer that agentic workloads increasingly demand. At the far frontier, Nvidia announced the Vera Rubin Space Module for orbital data centers, delivering 25x more AI compute for space-based inference than the H100.

Between orbit and office, Nvidia revealed partnerships spanning Adobe for creative AI, automakers like BYD and Nissan for Level 4 autonomous vehicles, a coalition with Mistral AI and seven other labs to build open frontier models, and Dynamo 1.0, an open-source inference operating system already adopted by AWS, Azure, Google Cloud, and a roster of AI-native companies including Cursor and Perplexity.

The pattern is unmistakable: Nvidia wants to be the computing platform — hardware, software, and models — for every AI workload, everywhere. The DGX Station is the piece that fills the gap between the cloud and the individual.

The cloud isn’t dead, but its monopoly on serious AI work is ending

For the past several years, the default assumption in AI has been that serious work requires cloud GPU instances — renting Nvidia hardware from AWS, Azure, or Google Cloud. That model works, but it carries real costs: data egress fees, latency, security exposure from sending proprietary data to third-party infrastructure, and the fundamental loss of control inherent in renting someone else’s computer.

The DGX Station doesn’t kill the cloud — Nvidia’s data center business dwarfs its desktop revenue and is accelerating. But it creates a credible local alternative for an important and growing category of workloads. Training a frontier model from scratch still demands thousands of GPUs in a warehouse. Fine-tuning a trillion-parameter open model on proprietary data? Running inference for an internal agent that processes sensitive documents? Prototyping before committing to cloud spend? A machine under your desk starts to look like the rational choice.

This is the strategic elegance of the product: it expands Nvidia’s addressable market into personal AI infrastructure while reinforcing the cloud business, because everything built locally is designed to scale up to Nvidia’s data center platforms. It’s not cloud versus desk. It’s cloud and desk, and Nvidia supplies both.

A supercomputer on every desk — and an agent that never sleeps on top of it

The PC revolution’s defining slogan was “a computer on every desk and in every home.” Four decades later, Nvidia is updating the premise with an uncomfortable escalation. The DGX Station puts genuine supercomputing power — the kind that ran national laboratories — beside a keyboard, and NemoClaw puts an autonomous AI agent on top of it that runs around the clock, writing code, calling tools, and completing tasks while its owner sleeps.

Whether that future is exhilarating or unsettling depends on your vantage point. But one thing is no longer debatable: the infrastructure required to build, run, and own frontier AI just moved from the server room to the desk drawer. And the company that sells nearly every serious AI chip on the planet just made sure it sells the desk drawer, too.


NVIDIA’s Drive Hyperion System Gains Three New Allies in the Push for Cars That Handle Themselves


GTC 2026 brought some news that caught a lot of people off guard. Three major automakers have signed on to work with NVIDIA to bring autonomous driving to their vehicles in defined conditions, and sooner than most would have expected. BYD, Nissan, and Isuzu are all on board, each bringing their own strengths to the table as the technology edges closer to becoming an everyday reality on public roads.



BYD is no stranger to pushing technology forward, and they plan to roll the system out across their next generation of models, the ones already turning heads on the road. Nissan is taking a broader approach, bringing it to their entire passenger vehicle lineup, while Isuzu is focused on the commercial side of things, teaming up with TIER IV to keep their buses running smoothly with minimal need for human supervision.

Drive Hyperion is a full system that includes sensors, processing units, and software, ready to use right out of the box. That means automakers don’t have to start from scratch; instead, they can take the parts that work and adapt them to their own vehicles. It all adds up to L4 autonomy, in which the car does all of the driving in particular scenarios, such as highways or mapped urban areas, eliminating the need for someone to be on high alert at all times.


Fourteen high-definition cameras provide a continuous 360-degree picture of everything around the automobile, while nine radar units monitor distances and speeds even in bad weather. One LiDAR scanner creates precise 3D images of the environment around the car, while twelve ultrasonic detectors handle short-range tasks such as parking and merging. At the center of it all are two computers powered by the latest NVIDIA chips, capable of handling over 2 trillion operations per second. And if one of them fails, there is a backup system in place to keep everything going smoothly.
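Summed up, that is a 36-sensor suite plus redundant compute. Expressed as a small, purely illustrative Python structure (field names invented for this sketch, not NVIDIA’s actual configuration schema), the redundancy requirement is easy to state:

```python
# Illustrative inventory of the Drive Hyperion suite described above.
# Field names are invented for this sketch, not NVIDIA's actual schema.
SENSOR_SUITE = {
    "hd_camera": 14,    # continuous 360-degree vision
    "radar": 9,         # range and speed, robust in bad weather
    "lidar": 1,         # precise 3D mapping of surroundings
    "ultrasonic": 12,   # short-range parking and merging
}
COMPUTE_UNITS = 2       # the second unit is a hot spare for failover

assert sum(SENSOR_SUITE.values()) == 36
assert COMPUTE_UNITS >= 2, "L4 operation assumes redundant compute"
```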


Raw sensor data is fed directly into the computers, where software develops an understanding of the vehicle’s location and surroundings. Separate parts of the system then weigh the options, looking at what the cameras show, what the vehicle has done before, the planned route, and even what the navigation system says. NVIDIA also provides an open model called Alpamayo that traces out every step of this decision-making process, making it easier for developers to refine the system and ensure it’s doing the right thing.

Before the system ever reaches a real car, engineers can test it in a digital environment. They use real-world data to reproduce difficult or unique circumstances, which helps them detect issues that might otherwise take years to surface. One of the most important aspects is ensuring that the system is safe, and to that end, NVIDIA has created an operating system called Halos that puts several layers of safety around the entire thing. It is designed to meet the strictest automotive standards and incorporates active monitoring, which acts as a constant safety net to prevent anything from going wrong.

Users have already begun to put the platform into action. Ride-sharing services are preparing to debut fleets of robotaxis and delivery cars in dozens of locations beginning in 2027.


Someone gave the MacBook Neo the 1TB storage upgrade it never got from Apple


Apple launched the $599 MacBook Neo on March 11, a budget Mac powered by the A18 Pro chip from the iPhone 16 Pro, with 8GB of unified memory and a 13-inch screen. Though it offers decent specifications for the price, there’s a catch: the storage tops out at 512GB.

However, a Chinese repair technician, DirectorFeng, has swapped the default NAND chip for a 1TB chip, effectively unlocking the MacBook Neo’s storage. The technician has posted a video of the entire process on YouTube.

How did DirectorFeng pull this off?

DirectorFeng replaced the NAND flash chip soldered to the MacBook’s logic board and then reflashed macOS so that it recognizes the third-party drive and its storage. The process involved removing the original chip, cleaning the solder pads, and installing a higher-capacity replacement using professional repair tools.

This wasn’t a screwdriver-and-YouTube-tutorial situation; it was microsurgery on a logic board, the kind that makes most people’s palms sweat. However, once reassembled, macOS recognized the larger-capacity NAND drive without firmware issues, and storage performance appeared normal as well.

The storage of the unit seen in the video goes up from 256GB to 994.61GB (marketed as 1TB). Once the process was complete, the replaced drive offered read and write speeds of 1,551 MB/s and 1,506 MB/s, respectively.

Should you try upgrading your MacBook Neo’s storage?

It’s worth noting that Apple uses soldered NAND rather than a removable SSD, which means that any capacity change requires microsoldering and would almost certainly void the manufacturer’s warranty. However, the successful storage upgrade indicates that the Neo is relatively easy to work on compared with other MacBooks.

Is this a consumer-friendly upgrade? No. Should you try upgrading your MacBook Neo’s storage yourself? Certainly not. The only key takeaway here is that the device works with third-party storage without any firmware issues. So, a storage upgrade, at least in theory, is possible. 


Every Ham Shack Needs A Ham Clock


Every ham radio shack needs a clock, ideally one with operator-friendly features like multiple time zones. [cburns42] found that most solutions relied too much on an internet connection for his liking, so in true hacker fashion he decided to make his own: the operator-oriented Ham Clock CYD.

A tabbed interface goes well with the touchscreen LCD.

The Ham Clock CYD is so named for being based on the Cheap Yellow Display (CYD), an economical ESP32-based color touchscreen LCD that provides most of the core functionality. The only extra hardware is a BME280 temperature and humidity sensor and a battery-backed DS3231 RTC module, ensuring that accurate time is kept even when the device is otherwise powered off.
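For a flavor of how little glue code a build like this needs, here is a minimal MicroPython sketch reading the battery-backed DS3231 over I2C on an ESP32. It’s an independent illustration with assumed pin numbers, not code from the project; the actual firmware is in the GitHub repository linked below.

```python
# Minimal MicroPython sketch: read the DS3231 RTC over I2C on an ESP32.
# Pin numbers are a common ESP32 choice and may differ on a CYD board.
from machine import I2C, Pin

i2c = I2C(0, scl=Pin(22), sda=Pin(21))
DS3231_ADDR = 0x68

def bcd2dec(b: int) -> int:
    """DS3231 registers store values in binary-coded decimal."""
    return (b >> 4) * 10 + (b & 0x0F)

def read_time():
    # Registers 0x00-0x06: sec, min, hour, weekday, day, month, year.
    raw = i2c.readfrom_mem(DS3231_ADDR, 0x00, 7)
    return (
        2000 + bcd2dec(raw[6]),   # year
        bcd2dec(raw[5] & 0x1F),   # month (mask century bit)
        bcd2dec(raw[4]),          # day of month
        bcd2dec(raw[2] & 0x3F),   # hour (24-hour mode)
        bcd2dec(raw[1]),          # minute
        bcd2dec(raw[0] & 0x7F),   # second
    )

print("Time from battery-backed RTC:", read_time())
```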

It displays a load of useful operator-oriented data on the touchscreen LCD, and even has a web-based configuration page for ease of use. The Ham Clock is a standalone device that does not depend on internet access in order to function, but it makes the most of a connection when one is available. With internet access over the built-in WiFi, the display incorporates specialized amateur radio data, including N0NBH solar forecasts and calculated VHF/HF band conditions, alongside standard meteorological data.

The CYD, sensor, and RTC are very affordable pieces of hardware which makes this clock an extremely economical build. Check out the GitHub repository for everything you’ll need to make your own, and maybe even put your own spin on it with a custom enclosure. On the other hand, if you prefer your radio-themed clocks more on the minimalist side, this Morse code clock might be right up your alley.


What is the release date for Scrubs season 10 episode 5 on Hulu and Disney+?


It’s hard to believe that Scrubs season 10 will hit its halfway point with its next episode. By all accounts, the hospital-set sitcom is performing pretty well, which raises the question of why more entries weren’t greenlit.

But that’s a debate for another day. Right now, you’re here to find out when season 10’s fifth episode, titled ‘My Angel’, will premiere on some of the world’s best streaming services. Don’t delay, then — read on for more details!


NVIDIA announces DLSS 5 with photorealistic lighting to change the future of gaming


At its GTC 2026 event, NVIDIA officially announced DLSS 5, a new version of its Deep Learning Super Sampling technology. The next generation of its AI-powered graphics technology introduces neural rendering techniques designed to create more realistic lighting and materials in games. The feature is expected to launch later this year.

DLSS has long been used to upscale lower-resolution frames into higher-resolution images using AI, boosting performance while maintaining visual quality, with DLSS 4.5 being the most recent update. The new version takes that concept further by using neural networks to assist with parts of the rendering pipeline itself, rather than simply reconstructing pixels.
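That pixel-reconstruction idea can be sketched in a few lines of PyTorch: upscale cheaply with interpolation, then let a small network predict the missing detail. This toy model is a stand-in for the concept only, not NVIDIA’s implementation, which uses far larger networks plus motion vectors and temporal history.

```python
# Toy AI upscaler: cheap bilinear upscale + a small learned residual network.
# Conceptual stand-in for DLSS-style super resolution, not NVIDIA's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyUpscaler(nn.Module):
    def __init__(self, scale: int = 2):
        super().__init__()
        self.scale = scale
        # Tiny conv stack that learns to sharpen the interpolated frame.
        self.refine = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, low_res: torch.Tensor) -> torch.Tensor:
        base = F.interpolate(low_res, scale_factor=self.scale,
                             mode="bilinear", align_corners=False)
        return base + self.refine(base)   # predict residual detail

frame = torch.rand(1, 3, 540, 960)        # a 540p input frame
print(ToyUpscaler()(frame).shape)         # torch.Size([1, 3, 1080, 1920])
```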

What’s new in DLSS 5?

The biggest shift with DLSS 5 is the introduction of neural rendering, a technique where AI helps generate elements of a scene, such as lighting, materials, and surface detail, rather than relying entirely on traditional rendering methods. The system can produce photorealistic lighting effects and more accurate material reflections, potentially improving realism in ray-traced environments while maintaining high frame rates.

The technology builds on earlier DLSS features like Super Resolution, Ray Reconstruction, and Frame Generation, but moves further toward an AI-assisted graphics pipeline where neural networks play a bigger role in how scenes are constructed.

Which hardware will support DLSS 5?

NVIDIA hasn’t officially confirmed which GPU architectures will support DLSS 5 yet, but the company has said the technology will arrive alongside RTX 50-series GPUs later this year. According to Digital Foundry, NVIDIA described the lighting improvements shown in its demo as “transformational,” with the feature expected to roll out around Fall 2026.

Interestingly, the demo setup used to showcase DLSS 5 wasn’t running on a typical gaming PC. Digital Foundry reports that NVIDIA used two GeForce RTX 5090 GPUs: one dedicated to running the game itself, while the second handled the DLSS 5 neural-rendering workload. This setup is currently required because the technology still needs significant optimization, particularly in terms of performance efficiency and VRAM usage.

That said, NVIDIA says DLSS 5 is ultimately designed to run on a single GPU, and that’s how it’s expected to ship when the technology launches publicly later this year.


OnePlus’ upcoming budget phone will raise the bar for Apple and Samsung mid-rangers


OnePlus could soon launch a budget phone that seriously threatens the so-called feature-packed mid-rangers from other brands. The company has already kicked off the handset’s teaser campaign in India, dropping cryptic visuals of a silhouetted smartphone alongside a tagline that reads “Entering the Nord era soon.”

The handset, purported to be the Nord 6, could put other mid-rangers to shame, or at least that is what the leaked specifications suggest. Before we talk about the hardware upgrades, it’s important to note that the Nord 6 is believed to be a rebranded version of the OnePlus Turbo 6, which is available only in China.

So, what’s actually inside the upcoming OnePlus mid-ranger?

Let’s tackle the Nord 6’s leaked specifications one by one. First, the upcoming smartphone could sport a 9,000 mAh battery that supports 80W wired charging.

Currently, the OnePlus 15R holds the crown for the biggest battery in a OnePlus phone, but it might not hold that position for much longer.

To give you some perspective, 9,000 mAh is almost as big as the combined battery capacity of the new Galaxy S26 Ultra and the iPhone 17 Pro Max.

The Nord 6’s battery could plausibly provide 12 to 14 hours of screen-on time between charges, making it a two-day battery phone for most users.

Many of you have been asking about this..

OnePlus Nord 6 is launching soon..

Same specs as Turbo 6

price going up, see you early April..

— Yogesh Brar (@heyitsyogesh) March 14, 2026

Upgraded specs could result in a serious price jump

On the performance front, the handset could offer a serious jump, thanks to the Snapdragon 8s Gen 4 (4nm) chipset. Its GPU is powerful enough to support high-frame-rate gaming.

For capturing pictures, the smartphone could come with a 50MP primary camera with optical image stabilization and a 16MP selfie shooter. Finally, the OnePlus Nord 6 could also feature a 6.78-inch 1.5K AMOLED screen that supports a refresh rate of up to 165Hz. 

However, all the upgraded specifications could result in a serious bump in the phone’s price. The Nord 6’s price tag could be around $500 in India, where it is confirmed to launch in early April. A United States launch remains in question, though.


What was the ‘lightbulb moment’ for this senior software engineer?


Workhuman’s Ciara Walsh discusses career development and her advice to others looking to take a similar professional route.

“Growing up, I was always interested in science and engineering, so I knew I would end up in some kind of STEM-related field, but I had quite a difficult time figuring out which direction to go in when approaching my career initially,” said senior software engineer at Workhuman, Ciara Walsh. 

Encouraged to build computing skills from a young age, she joined a local CoderDojo, a community-based coding club, where she helped younger children with basic computer skills and later taught her own classes. From there, she realised that she could have a future in software.

“The connection that this could be my career eventually came through my late grandmother, who suggested it one afternoon while I was struggling with my CAO application. That conversation was a lightbulb moment for me and my whole career journey has followed from it.”

What do you enjoy most about your job?

I really enjoy problem solving and having to think carefully about how to approach solutions. Software engineering is essentially problem solving as a career in many ways, whether that’s figuring out how to build a new feature for users or triaging why a test is failing. At its core, what I do every day involves figuring out a way forward on some combination of puzzle and problem. For me, that’s really satisfying, and I love getting to the ‘aha’ moment at the end where it all works.

What’s the most exciting development you’ve witnessed in your sector?

I remember a meeting very early in my career which was centred on the ‘internet of things’ and how connected devices were going to change everything about daily life within the next 10 to 15 years. The conversation at that time was around how ambitious an idea it was, and how many technologies and tools would need to be invented to achieve even a quarter of the concepts being laid out at that stage. It’s been fascinating to be part of the industry since then and see many of the ideas discussed in that meeting come to life in the real world.

The sheer number of technologies that we use daily now which simply didn’t exist when I started my career is amazing. It’s exciting to be part of a sector that moves this quickly, and I’m looking forward to seeing what the next 10 to 15 years brings us.

What’s been the hardest thing you’ve had to face in your career and how was it overcome?

The hardest thing I’ve had to face so far was the decision to step away from my career for a year, without knowing what came next.

In 2024, I decided to return to college and study for a master’s degree in electronic engineering. At that stage, the industry had slowed down quite a lot in terms of companies hiring, so stepping away from a job where I had a reasonable level of security was a big risk. However, I also felt that I needed to take that step back and spend time growing my knowledge and skills to be successful moving forward, especially given the direction that the industry has moved in, with AI and machine learning, so I took the risk.

During the course, I tried to ensure that I kept a balance between new topics I wanted to learn and those that I had some knowledge of but in which I could develop further depth, and this was of huge benefit to me because I managed to avoid losing my existing skills in the process of gaining new ones.

Having said that, the imposter syndrome and stress associated with that journey – particularly during the later stages, when my course had finished and I was trying to restart my career – wasn’t something I anticipated. I found it significantly more challenging than I expected and even after joining my current role it took some time to have full confidence in myself again. Looking back on it now though, I think the risk paid off, as I have a more solid understanding of some key concepts and – maybe more importantly – a stronger set of research skills, which will be useful going forward in my career.

If you had the power to change anything within the STEM sector, what would that be?

STEM is a very broad sector, so it’s hard to outline any specific things that I’d change across it all, but I think something I’d like to see celebrated and emphasised more is how creative many of the fields under the STEM umbrella are.

We tend to focus a lot on being data-driven and efficient, but the reality is that the majority of the work we do in STEM involves some kind of inventing and/or creative thinking. I think sometimes we lose sight of that amongst the deadlines and client requests, and we don’t leave ourselves enough space to be innovative and to really explore the crafts hidden behind the science and technology of it all. If I could change anything, it would be that we gave ourselves more space and time to be purely creative, rather than always doing the most efficient thing.

Hackathons are a great example of this, where time is given to just experiment and explore with the tools of the trade. I’ve been involved with multiple hackathon projects that ended up being deployed as full products after some polishing. Those only exist because the team members were given the space to think and explore outside the structure of the usual day-to-day.

How do you make connections with others in the STEM community? 

I have been incredibly fortunate in my career so far when it comes to mentors and mentoring in general. I was a recipient of a women in technology scholarship during my undergraduate degree, which provided me with some amazing mentors from the very beginning. Their advice and guidance have stood the test of time at this stage, and I genuinely think I’m a better engineer because of all the people who’ve worked with me along my career path so far.

I’ve continued to benefit from mentoring of many different forms throughout my career, and had the opportunity to mentor some people myself, which I think was equally beneficial to me. Mentoring others gives you so many opportunities to really explore your own growth, and for me it has also often resulted in development of my own in parallel to my mentees.

What advice would you give to someone thinking about a career in your area?

I think the best advice I could give someone looking to go into software engineering as a career is to just start coding and experimenting with building simple programs. Start with something like Scratch so you get to learn the basic logic patterns, and then experiment with other languages and tools as you get comfortable. There are lots of free resources and tutorials online, and you can actually learn all the technical skills you need to know to do this job using them. I still use some of them when I need to learn something new for my role.

The other advice I would give someone considering this career is that software is always changing, and there are always new frameworks and tools to learn. To be a successful software engineer, you need to be willing to learn new things across your whole career. This can be challenging at times, but once you learn the general basics, it’s a lot easier than you might expect to transfer skills.
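As a concrete illustration of the “basic logic patterns” Walsh mentions, a first text-based program needs nothing more than a function, a condition, and a loop; the classic FizzBuzz exercise covers all three.

```python
# FizzBuzz: the same loop-and-condition building blocks Scratch teaches
# with visual blocks, written as a first text-based Python program.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

for i in range(1, 16):
    print(fizzbuzz(i))
```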

