
Tech

Quadratic Gravity Theory Reshapes Quantum View of Big Bang


Researchers at the University of Waterloo say a new “quadratic quantum gravity” framework could explain the universe’s rapid early expansion without adding extra ingredients to Einstein’s theory by hand. The idea is especially notable because it makes testable predictions, including a minimum level of primordial gravitational waves that future experiments may be able to detect. “Even though this model deals with incredibly high energies, it leads to clear predictions that today’s experiments can actually look for,” said Dr. Niayesh Afshordi, professor of physics and astronomy at the University of Waterloo and Perimeter Institute (PI). “That direct link between quantum gravity and real data is rare and exciting.” Phys.org reports: The research team found that the Big Bang’s rapid early expansion can emerge naturally from this simple, consistent theory of quantum gravity, without adding any extra ingredients. This early burst of expansion, often called inflation, is a central idea in modern cosmology because it explains why the universe looks the way it does today.

Their model also predicts a minimum amount of primordial gravitational waves, which are tiny ripples in spacetime geometry created in the first moments after the Big Bang. These signals may be detectable in upcoming experiments, offering a rare chance to test ideas about the universe’s quantum origins.

[…] The team plans to refine their predictions for upcoming experiments to explore how their framework connects to particle physics and other puzzles about the early universe. Their long-term goal is to strengthen the bridge between quantum gravity and observational cosmology. The research has been published in the journal Physical Review Letters.


Ollama is supercharged by MLX's unified memory use on Apple Silicon


Machine learning researchers using Ollama will enjoy a speed boost to LLM processing, as the open-source tool now uses MLX on Apple Silicon to fully take advantage of unified memory.

Ollama has been boosted by MLX on Apple Silicon

Anyone working with large language models (LLMs) wants results as quickly as possible. There are techniques for this, such as running multiple Macs as a cluster to increase the available processing power, but a framework made by Apple itself also provides an extra boost.
The developers of the open-source model management and execution tool Ollama have now taken advantage of it. In a March 30 update, the team announced a preview version of the tool for Apple Silicon that uses MLX.
Continue Reading on AppleInsider | Discuss on our Forums


5 Classic Muscle Cars That Make The Pontiac GTO Look Slow






If you were to say the term American muscle car, a Pontiac GTO will certainly spring to a lot of people’s minds, and for good reason. Originally a trim level of the 1964 Pontiac Tempest, the GTO nameplate, which stands for Gran Turismo Omologato (Grand Touring Homologation in Italian), became synonymous with big power in a modest package. Arguably, it started the whole muscle car trend, debuting before giants like the Mustang, Charger, and more. It had the muscle to back it up as well, with later examples boasting either a 400 or 455 cubic-inch engine in top trim, with various options such as the famous Ram Air intake, characterized by its hood scoop.

Power figures are impressive for the time, boasting 360 hp and 500 lb-ft torque with the 455 big block, or 370 hp and 445 lb-ft torque with the Ram Air 400 in 1970. But how fast was it, really, in comparison to its peers? It’s hard to say in pure mathematical terms because of the variables; different magazines and journals list varying times, ranging from 14.6 at 99.6 mph to 13.6 at 104.5 mph with the 400 Ram Air and manual, the fastest configuration. The 455 was slower still, dropping down to 15 seconds.

Quite a few cars could certainly hang with the GTO, and more still could exceed it. For this article, we’ll take a look at the original GTO’s fastest year of 1970 and measure it against all muscle cars built up to that point, so nothing post-1970, and no special models like the Super Stock Hursts or Yenkos — these are common production cars only. Let’s kick it off.


1970 Dodge Challenger R/T 440 Six Pack: 13.6 @ 105 mph

Our opening car already matches the GTO’s best recorded time and beats the 455 by over a second at the line, a massive margin in drag racing terms. 1970 was the debut year of the Dodge Challenger. Made famous by its starring role in the hit movie “Vanishing Point,” the 1970 Dodge Challenger, in this case a 440 Six Pack-equipped R/T trim, is one of the most iconic muscle cars ever made, though its status is somewhat deceptive; Challengers are actually pony cars.


Pony cars are smaller vehicles, in this case built on the Chrysler E-body, a crucial point when talking about power-to-weight ratio. This 1970 Dodge Challenger R/T houses the same engine as the midsize and full-size muscle cars, but the ’69 Charger R/T 440 weighs 3,900 pounds, whereas the ’70 Challenger comes in at 3,395 pounds. With less weight and a smaller profile to move through the air, the Challenger will naturally be the faster of the two body styles, and certainly as fast or faster than the GTO.
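The power-to-weight argument is simple arithmetic. Using the article's own figures (375 hp for the 440 Six Pack, 3,900 lb for the '69 Charger, 3,395 lb for the '70 Challenger), a quick sketch shows how much less mass each horsepower has to move:

```python
def lb_per_hp(hp: float, weight_lb: float) -> float:
    """Pounds each horsepower must move; lower means quicker, all else equal."""
    return weight_lb / hp

# Same 440 Six Pack (375 hp) in two different bodies
charger = lb_per_hp(375, 3900)     # '69 Charger R/T 440
challenger = lb_per_hp(375, 3395)  # '70 Challenger R/T 440

print(f"Charger:    {charger:.1f} lb/hp")     # 10.4 lb/hp
print(f"Challenger: {challenger:.1f} lb/hp")  # 9.1 lb/hp
```

Each Challenger horsepower hauls more than a pound less car, before aerodynamics even enters the picture.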

The 440 Six Pack does all the heavy lifting here, of course, boasting 375 hp and 480 lb-ft torque. Much like the 455, these were large, powerful engines designed for cruising; Winnebago motorhomes used these engines, for example, albeit with different accessories and tunes. When you take that engine, give it three carbs and some performance upgrades, and shove it into a car the size of a Challenger, it’s no wonder they exceed the GTO’s figures.


1970 Ford Mustang BOSS 429: 13.6 @ 114 mph

Here’s another car with a bit of a spotty drag racing record; Motor Trend actually tested their own ’69 Boss 429 and got a blistering 12.3-second quarter mile time at 112 mph. For these purposes, let’s use the worst-case scenario — a 1970 model year, same engine, running a 13.6 at 114. And that shouldn’t be all too surprising, because here we have an example of another smaller car with a massive engine shoved under the hood.

Originally, the Mustang didn’t even have a big block at all; the Mustang is one of the progenitors of the term pony car — smaller, more nimble cars with small block V8s like the 289 Ford or 350 Chevy in the Camaro. That changed in 1967, when Ford introduced the 390 option for the Mustang. The company continuously experimented with the design over the next couple of years, and while the 1970 car shares the same basic architecture as the original, their bodies are very different — as are their engines.

The 429 cubic-inch big block is, essentially, a racing engine. In fact, the Boss 429 itself was designed to compete in NASCAR. A little-known fact is that the 429 actually uses a hemispherical combustion chamber, like the legendary 426 HEMI engine from Mopar; this configuration allows for very efficient combustion, especially at the rev ranges expected in racing, making this engine particularly well-suited to high-speed runs. It’s believed that Ford underrated the engine at 375 hp and 450 lb-ft torque, which — coupled with the Mustang’s slim profile — made for an extremely potent muscle car.


1970 Buick GS / GSX Stage 1: 13.38 @ 105.5 mph

With 360 hp and a whopping 510 lb-ft torque, the 1970 Buick GS Stage 1 rips through the quarter-mile in 13.38 seconds, according to Motor Trend’s January 1970 issue. The record is muddled, however; the same car was also tested by Hot Rod Magazine in November 1969, reaching the finish line in 14.40 seconds at 96 mph, albeit with the automatic. Since the manual is the faster configuration for both the GTO and the GS Stage 1, we’ll use the manual times for consistency.

Most people probably don’t say Buick and high-performance in the same breath anymore, but that wasn’t true in 1970. In fact, the GS Stage 1 was one of the fastest muscle cars on the market — and unlike the previous two entries, the GS and its GSX special edition were midsize cars, built on the same A-body platform as the GTO, Chevelle, Olds 4-4-2, and so on. The fastest Olds 4-4-2, a 1966 model with the W-30 and a manual, accomplished a 13.8-second time, making it about on par with the GTO. That makes the Buick GS the second-fastest GM A-body car in the quarter-mile.


The engine used by the GS Stage 1 was the 455 cubic inch (7.4-liter) big block, among the most powerful Buick engines ever produced, and it was also Buick’s biggest ever V8 fitted to a production car. While it doesn’t have the same power rating as some others on this list, that engine more than makes up for it in raw torque, especially with the close-ratio Muncie 4-speed it was often paired with.


1970 Chevrolet Chevelle SS 454: 13.12 @ 107.01 mph

Yet another mid-size A-body on this list, the 1970 Chevrolet Chevelle SS 454 is one more iconic muscle car, wearing its classic racing stripes and SS badging. Moreover, the LS6 engine option bumped power up to 450 horsepower and 500 lb-ft torque, more power than anything else on this list, at least in terms of factory ratings. That output produced rapid times frequently teasing the low-13-second mark, with Hot Rod attaining a respectable 13.44 @ 108.17 mph in its best run, for instance.

The 1970 Chevelle SS came in several different variants, each with their own power and top speed figures, ranging from the entry-level L34-code 396 ci unit with 350 hp, up to the infamous LS6. LS6-powered Chevelles are sometimes referred to today as the king of muscle cars, directly competing with the likes of the infamous 426 HEMI, the 428 Cobra Jet, and more. Much like the 429, the LS6 was a bespoke high-performance engine, sporting an 11.25:1 compression ratio, aggressive solid-lifter camshaft, aluminum pistons, and more, topped off with a thirsty 800 cfm (cubic feet per minute) Holley carburetor. All that runs through a Muncie M-22 Rock Crusher transmission.

In short, the LS6-powered Chevelle is the 1970 equivalent of a supercar today, though it’s decidedly less refined than one. According to Motor Trend, the transmission is noisy and unrefined, the engine unhappy on unleaded gasoline due to its high compression ratio, and it’s almost impossible to drive hard without spinning tires if you’re running regular street rubber. It’s decidedly specialized for one purpose — going fast, and it does that very well, indeed.


1970 Plymouth Barracuda 426 Hemi: 13.10 @ 107.1 mph

It should come as no surprise that the top spot is secured by a Hemi, an engine that needs no introduction to drag racing enthusiasts. In truth, the infamous Elephant Block could likely claim several spots on this list, but the fastest among them, at least according to Car Craft magazine, is the 1970 Plymouth Hemi ‘Cuda. Much like Ford’s 429, the 426 Street Hemi is widely rumored to have carried a significantly underreported horsepower rating throughout its production run — an impressive 425 hp and 490 lb-ft torque, so says Chrysler.

This was a massive, racing-oriented engine that just barely fit in a lot of these cars; getting Hemi heads on the block required a lot of real estate, one reason why you don’t see them too often. The option itself cost an eye-watering $900, or over $7,500 today — basically, you buy a third of the car over again at the dealership. But what you get is, for all intents and purposes, the closest thing to a factory-built racecar without crossing the line into specialist vehicles. The E-body Barracuda was built with this in mind, being an early halo car alongside its sister, the Dodge Challenger.

To put it into perspective, the already (supposedly) underrated 426 Hemi can launch the infamous Hurst Hemi ’68 Barracuda deep into 10-second times at over 120 mph. That same engine, albeit tuned for street use, propels the 1970 Hemi ‘Cuda down the strip over a second and a half faster than the GTO. With its light weight and massive performance, the Pontiac simply isn’t in contention at this level.





Facial Recognition Is Spreading Everywhere


Facial recognition technology (FRT) dates back 60 years. Just over a decade ago, deep-learning methods tipped the technology into more useful—and menacing—territory. Now, retailers, your neighbors, and law enforcement are all storing your face and building up a fragmentary photo album of your life.

Yet the story those photos can tell inevitably has errors. FRT makers, like those of any diagnostic technology, must balance two types of errors: false positives and false negatives. There are three possible outcomes.

Three Possible Outcomes

a) The software identifies the suspect, since the two images are of the same person. Success!

b) The software matches another person in the footage with the suspect’s probe image. A false positive, coupled with sloppy verification, could put the wrong person behind bars and let the real criminal escape justice.

c) The software fails to find a match at all. The suspect may be evading cameras, but if the cameras only have low-light or bad-angle images, this creates a false negative. This type of error might let a suspect off and raise the cost of the manhunt.

In best-case scenarios—such as comparing someone’s passport photo to a photo taken by a border agent—false-negative rates are around two in 1,000 and false positives are less than one in 1 million.

In the rare event you’re one of those false negatives, a border agent might ask you to show your passport and take a second look at your face. But as people ask more of the technology, more ambitious applications could lead to more catastrophic errors. Let’s say that police are searching for a suspect, and they’re comparing an image taken with a security camera with a previous “mug shot” of the suspect.


Training-data composition, differences in how sensors detect faces, and intrinsic differences between groups, such as age, all affect an algorithm’s performance. The United Kingdom estimated that its FRT exposed some groups, such as women and darker-skinned people, to risks of misidentification as high as two orders of magnitude greater than it did to others.

Less clear photographs are harder for FRT to process.

What happens with photos of people who aren’t cooperating, or vendors that train algorithms on biased datasets, or field agents who demand a swift match from a huge dataset? Here, things get murky.

Facial Recognition Gone Wrong

THE NEGATIVES OF FALSE POSITIVES

2020: Robert Williams’s wrongful arrest led to his detention. The ensuing settlement requires Detroit police to enact policies that recognize FRT’s limits.

ALGORITHMIC BIAS


2023: Court bans Rite Aid from using facial recognition for five years over its use of a racially biased algorithm.

TOO FAST, TOO FURIOUS?

2026: U.S. immigration agents misidentify a woman they’d detained as two different women.

Consider a busy trade fair using FRT to check attendees against a database, or gallery, of images of the 10,000 registrants, for example. Even at 99.9 percent accuracy you’ll get about a dozen false positives or negatives, which may be worth the trade-off to the fair organizers. But if police start using something like that across a city of 1 million people, the number of potential victims of mistaken identity rises, as do the stakes.
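The trade-fair arithmetic is easy to check. Treating each lookup as an independent comparison with a fixed error rate (a simplification; real FRT errors are not independent and vary by subgroup), the expected number of mistakes scales linearly with the crowd:

```python
def expected_errors(people: int, accuracy: float) -> float:
    """Expected number of misidentifications if each lookup errs independently."""
    return people * (1 - accuracy)

print(expected_errors(10_000, 0.999))     # trade fair: ~10 mistaken identities
print(expected_errors(1_000_000, 0.999))  # city-wide: ~1,000 mistaken identities
```

The accuracy hasn't changed between the two scenarios; only the population has, and the error count grows with it.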

What if we ask FRT to tell us if the government has ever recorded and stored an image of a given person? That’s what U.S. Immigration and Customs Enforcement agents have done since June 2025, using the Mobile Fortify app. The agency conducted more than 100,000 FRT searches in the first six months. The size of the potential gallery is at least 1.2 billion images.

At that size, assuming even best-case images, the system is likely to return around 1 million false matches, but at a rate at least 10 times as high for darker-skinned people, depending on the subgroup.


Responsible use of this powerful technology would involve independent identity checks, multiple sources of data, and a clear understanding of the error thresholds, says computer scientist Erik Learned-Miller of the University of Massachusetts Amherst: “The care we take in deploying such systems should be proportional to the stakes.”


How to back up your iPhone & iPad to your Mac before something goes wrong


Backing up your iPhone or iPad to your Mac is the fastest and most reliable way to protect your data, and is especially useful before updates, repairs, or device replacement.

How to back up your iPhone and iPad to Mac

Backing up your iPhone or iPad to your Mac remains the fastest and most complete way to protect your data before updates, repairs, or hardware changes. Apple built local backup support directly into macOS through Finder, allowing full-device backups without relying on an internet connection.
Local backups act like full system snapshots, saving your device settings, messages, app data, and media stored on the device. Backing up to iCloud does save your data, but restoring from a Mac is faster than restoring from iCloud because the data transfers directly over USB.
Continue Reading on AppleInsider | Discuss on our Forums


EE TV is using AI to help you find something to watch


EE is taking aim at one of streaming’s biggest annoyances: endlessly scrolling for something to watch.

The company has launched Smart Search, a new AI-powered feature on EE TV. It lets users find content simply by describing what they’re in the mood for.

Instead of typing exact titles, Smart Search understands more natural queries like “a funny detective show” or even a quote from a scene. It then pulls results from across live TV, on-demand services and integrated streaming apps. As a result, it presents everything in one place.

The idea is simple: less app-hopping, more watching.


Alongside it, EE is introducing Mood Matcher, another AI-driven tool designed to tackle the “what should we watch?” problem. Users answer a few quick prompts about mood, genre or themes. The system then serves up tailored recommendations. This is something EE says is particularly useful when multiple people are trying to agree on what to watch.


The launch leans heavily on a very real problem. EE’s own research suggests 41% of viewers struggle to discover new content, while 45% fall back on rewatching shows just to avoid the effort of choosing. Perhaps more tellingly, 38% say deciding what to watch causes household tension, which probably sounds familiar.

EE TV Box Pro design
Image credit: Trusted Reviews

There’s also the issue of fragmentation. With 42% of users relying on their phones to find content and 61% wanting a more unified viewing experience, EE is positioning Smart Search as a way to bring everything together into a single interface.

That broader shift, making discovery as important as the content itself, is becoming a key battleground for TV platforms. As analyst Paolo Pescatore puts it, speed and simplicity are now just as critical as having a deep catalogue.


Smart Search and Mood Matcher are available now through the EE TV app on compatible devices. A wider rollout is planned for EE TV Pro and EE TV Box Edge hardware in the near future.

For EE, the pitch is clear: stop searching like a database, and start searching like a human.


What Are The Biggest Limitations Of Supercomputers?






Supercomputers are built to solve very large, difficult problems and do it quickly. Instead of relying on a single processor, supercomputers like El Capitan at Lawrence Livermore National Laboratory and Frontier at Oak Ridge National Laboratory use a large number of processors working together simultaneously. That makes them especially useful for jobs like climate modeling, genetic research, nuclear simulations, artificial intelligence, and identifying flaws in jet engine design.

We’re not talking about quantum computers here, though. A supercomputer is still a classical computer: it uses ordinary bits, which are either 0 or 1, and it solves problems by doing massive numbers of conventional calculations very quickly. A quantum computer works differently by using quantum bits, or qubits. Quantum computing is still largely in the experimental and early developmental stage. Right now, the real work is being done by classical supercomputers, helping scientists explore problems that would take ordinary computers far too long to solve. The fastest of today’s machines can perform more than a quintillion (10^18) calculations per second.
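To put the speed gap in perspective: exascale machines like Frontier are rated above 10^18 floating-point operations per second, while a typical laptop manages on the order of 10^11. The round figures below are illustrative, not benchmarks, but they make the difference concrete for a hypothetical workload of 10^21 operations:

```python
SECONDS_PER_YEAR = 3.15e7

def runtime_seconds(total_ops: float, ops_per_second: float) -> float:
    """How long a fixed workload takes at a given sustained speed."""
    return total_ops / ops_per_second

WORK = 1e21  # hypothetical simulation workload

laptop = runtime_seconds(WORK, 1e11)    # ~10^10 seconds, roughly 317 years
exascale = runtime_seconds(WORK, 1e18)  # 1,000 seconds, under 17 minutes

print(f"laptop:   {laptop / SECONDS_PER_YEAR:.0f} years")
print(f"exascale: {exascale / 60:.1f} minutes")
```

A simulation that would outlast several human lifetimes on commodity hardware finishes before a coffee break on an exascale system.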

Even so, supercomputers are not all-powerful. Their biggest limitations usually come down to four things: workload scaling, data transfer issues, power consumption, and reliability. Engineers are making progress on all four, but none of these problems has disappeared.


Supercomputers work best when they can break tasks into chunks

One of the biggest limitations is that supercomputers are only useful for certain kinds of tasks. They are best at problems that can be broken into many smaller pieces and worked on concurrently. This is known as parallel processing; for example, a climate model can split the atmosphere and oceans into many sections and calculate each one in parallel. But some problems do not work that way. Some tasks have steps that must happen sequentially. When that happens, a supercomputer cannot speed things up very much. If part of a job has to wait for another task to be finished, the whole system slows down. The answer here often isn’t to add more hardware. Instead, it’s to redesign the software so more of the work can happen simultaneously. 
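The sequential-bottleneck effect described above is captured by Amdahl's law: if some fraction of a job must run serially, that fraction caps the overall speedup no matter how many processors are added. A short sketch:

```python
def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Amdahl's law: overall speedup when only part of a job parallelizes."""
    serial_fraction = 1 - parallel_fraction
    return 1 / (serial_fraction + parallel_fraction / n_processors)

# A job that is 95% parallel can never exceed a 20x speedup (1 / 0.05),
# even on a million processors.
print(f"{amdahl_speedup(0.95, 100):.1f}x")        # ~16.8x
print(f"{amdahl_speedup(0.95, 1_000_000):.1f}x")  # ~20.0x
```

This is why the fix is often software redesign, raising the parallel fraction, rather than buying more hardware.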

Another major limitation involves the process of moving data around. A supercomputer may be able to calculate incredibly quickly, but it still needs to fetch information from memory. In many cases, the machine is not limited by calculation speed, but by the time it takes to move data from one place to another. To mitigate this challenge, supercomputers store data physically closer to the processors to move it more efficiently. Researchers are also redesigning programs to reuse data more effectively instead of constantly fetching it.


Supercomputers use a lot of power and have a lot of parts that can go wrong

Power use is also a huge limitation. The fastest supercomputers use enormous amounts of electricity. They also need advanced cooling systems to prevent overheating. This creates two problems. First, it makes supercomputers very expensive to run. Second, it raises environmental concerns, especially as people push back on the large data centers needed to house them. Building better supercomputers will depend not only on making them more powerful, but also on making them more energy-efficient.

Another problem is reliability. A supercomputer contains an enormous number of parts: processors, memory units, cables, storage systems, cooling equipment, and more. The more parts a machine has, the more chances there are for something to go wrong. A loose cable, faulty memory chip, or cooling issue can interrupt a major calculation. This matters because some scientific jobs run for hours or days. If something fails midway through, that work may need to be restarted or recovered from a saved checkpoint. Engineers employ tools like the Lawrence Livermore National Laboratory’s Scalable Checkpoint/Restart (SCR) to minimize the amount of work lost when an issue occurs, but there’s no way to fully prevent hardware issues from occurring. After all, building a massive machine also means there are a massive number of things that can break.
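The checkpoint/restart idea is straightforward, even if production tools like SCR are far more sophisticated. This toy sketch (invented names and file format, not SCR's actual API) periodically persists progress so a crash only loses the work done since the last save:

```python
import json
import os

STATE_FILE = "checkpoint.json"  # hypothetical path, not SCR's real format

def run_job(total_steps: int, checkpoint_every: int = 100) -> int:
    """Run a long job, resuming from the last checkpoint if one exists."""
    step = 0
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            step = json.load(f)["step"]  # resume instead of restarting
    while step < total_steps:
        step += 1  # a unit of real simulation work would go here
        if step % checkpoint_every == 0:
            with open(STATE_FILE, "w") as f:
                json.dump({"step": step}, f)  # persist progress
    return step
```

If the job dies at step 250, the next run resumes from the step-200 checkpoint rather than from zero; the trade-off is the I/O cost of writing checkpoints while everything is healthy.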





Daily Deal: StackSkills Premium Annual Pass


StackSkills Premium is your destination for mastering today’s most in-demand skills wherever and whenever your schedule allows. Now, with this exclusive limited-time offer, you’ll gain access to 1,000+ StackSkills courses for just one low annual fee. Whether you’re looking to earn a promotion, make a career change, or pick up a side hustle to earn some extra cash, StackSkills delivers engaging online courses featuring the skills that matter most today. From blockchain to growth hacking to iOS development, StackSkills stays ahead of the hottest trends to offer the most relevant courses and up-to-date information. Best of all, StackSkills’ elite instructors are experts in their fields who are passionate about sharing lessons learned from first-hand successes and failures. If you’re ready to commit to your personal and career growth, you won’t want to pass on this incredible all-access pass to the web’s top online courses. It’s on sale for $60.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.


Flipsnack and the shift toward motion-first business content with living visuals


Interactive content now generates 52.6% higher engagement than static formats, with users spending significantly longer interacting with dynamic media and showing higher recall for brands that use it. In practical terms, that shift may have transformed expectations around how digital content should be produced, especially in commerce and B2B environments, where attention is often a […]

This story continues at The Next Web


See The Computers That Powered The Voyager Space Program


Have you ever wanted to see the computers behind the first (and for now only) man-made objects to leave the heliosphere? [Gary Friedman] shows us, with an archived tour of JPL building 230 in the ’80s.

A NASA employee picks up a camcorder and decides to record a tour of the place “before they replace it all with mainframes”. They show us computers that would seem prehistoric compared to anything modern; early Univac and IBM machines whose power is outmatched today by even an ESP32, yet made the Voyager program possible all the way back in 1977. There are countless peripherals to see, from punch card writers to Univac debug panels where you can see the registers, and from impressive cabinets full of computing hardware to the zip-tied hacks “attaching” a small box they call the “NIU”, dangling off the inner wall of the cabinet. And don’t forget the tape drives that are as tall as a refrigerator!

We could go on ad nauseam, nerding out about the computing history, but why don’t you see it for yourself in the video after the break?



Thanks to [Michael] for the tip!


Invences Provides Smart Telecom Networks to Small Firms


To stay competitive, many small businesses need advanced wireless communication networks, not only to communicate but also to leverage technologies such as artificial intelligence, the Internet of Things, and robotics. Often, however, the businesses lack the technical expertise needed to install, configure, and maintain the systems.

Bhaskara Rallabandi, who spent more than two decades working for major telecom companies, decided to use his expertise to help small businesses. Rallabandi, an IEEE senior member, is an expert certified by the International Council on Systems Engineering.

Invences

Cofounder: Bhaskara Rallabandi

Founded: 2023

Headquarters: Frisco, Texas

Employees: 100

In 2023 he helped found Invences, a telecommunications automation company headquartered in Frisco, Texas.


Invences services include designing, building, and installing data centers, as well as cost-effective and secure wireless, private, IoT, and virtual communications networks.

The company has set up systems for farms, factories, and universities in rural and urban areas including underserved communities. Its mission, Rallabandi says, is to “build autonomous, ethical, and sustainable networks that connect communities intelligently.”

For his work, he was recognized last year for “entrepreneurial leadership in founding and scaling a U.S.-based technology company, advancing innovation in 5G/6G and Open RAN [radio access network], shaping global standards, and inspiring future leaders through mentorship and community impact” with the IEEE-USA Entrepreneur Achievement Award for Leadership in Entrepreneurial Spirit.

Building a telecommunications career

He began his telecommunications career in 2009 as a manager and principal network engineer at Verizon’s Innovation Labs in Waltham, Mass. He and his team ran some of the earliest Long-Term Evolution (LTE) and Evolved Packet Core (EPC) performance trials. LTE is the 4G wireless broadband standard for mobile devices; EPC is the IP-based, high-performance core network architecture for 4G LTE networks.


That work at Innovation Labs, he says, was key to the development of the first 4G systems. It set the stage for scalable, interoperable broadband architectures that underpin today’s 5G and 6G designs.

“We built the first bridge between legacy and cloud-native networks,” he says.

He left in 2011 to join AT&T Labs in Redmond, Wash. As senior manager and principal solutions architect, he oversaw the design, integration, and testing of the company’s next-generation wireless systems. He also led projects that redefined network automation and set up cloud computing systems including FirstNet, the nationwide broadband network for first responders, and VoLTE, voice over LTE, which carries phone calls over the LTE data network.

In 2018 Rallabandi was hired as a principal and a senior manager of engineering at Samsung Networks Division’s Technology Solutions Division, in Plano, Texas. He led the development of 5G virtualization and Open RAN initiatives, which enable more flexible, scalable, and efficient large network deployments and interoperability among vendors.


Designing networks for small businesses

Feeling that he wasn’t reaching his full potential in the corporate world, and to help small businesses, he opted to start his own venture in 2023 with his wife, Lakshmi Rallabandi, a computer science engineer. She is Invences’s CEO, and he is its founding principal and chief technology advisor.

Invences, which is self-funded and employs about 100 people, has more than 50 customers from around the world.

“I wanted to do something more interesting where I could use the knowledge I gained working for these big companies to fill the gaps they overlooked in terms of automation” for small businesses, he says. “I have a team of people who, combined, have 200 years of technology experience.”

The startup builds networks that simplify its clients’ operations and reduce their costs, he says.


Instead of duplicating how major telecom carriers build networks for dense urban areas, he says, his designs reimagine the network architecture to lower its complexity, costs, and operational overhead.


The systems integrate new technologies such as Open RAN, virtualized RAN, digital twins, telemetry, and advanced analytics. Some networks also incorporate agentic AI, an autonomous system that runs independently of humans and uses AI agents that plan and act across the network. Digital twins evaluate the agent’s decisions before releasing them.
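The twin-validates-agent loop described above can be sketched in a few lines. Everything here is invented for illustration (the function names and the simple capacity model are hypothetical); the point is only the control flow, in which the agent's proposal is simulated before it touches the live network:

```python
def twin_simulate(config: dict) -> bool:
    """Stand-in for a digital-twin check: reject configs that would
    oversubscribe link capacity."""
    return config["offered_load"] <= config["capacity"]

def agent_propose(current: dict) -> dict:
    """Stand-in for an agentic-AI planner: push for more throughput."""
    proposal = dict(current)
    proposal["offered_load"] = current["offered_load"] * 1.5
    return proposal

def apply_if_safe(current: dict) -> dict:
    """Release the agent's change only if the twin approves it."""
    proposal = agent_propose(current)
    return proposal if twin_simulate(proposal) else current

net = {"offered_load": 50.0, "capacity": 100.0}
net = apply_if_safe(net)  # 75 <= 100: twin approves the change
net = apply_if_safe(net)  # 112.5 > 100: twin vetoes, config unchanged
print(net["offered_load"])  # 75.0
```

The agent never writes to the live configuration directly; the twin sits between intent and action, which is what keeps the autonomy bounded.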

“Autonomy is not about removing humans from the loop,” Rallabandi says. “It is about giving systems the ability to manage complexity so humans can focus on intent and outcomes.”


Rallabandi also has worked on AI-driven telecom observability technologies designed to allow networks to detect anomalies and optimize performance automatically.

He has developed a virtual O-RAN innovation lab, where clients can test the interoperability of their 5G systems, try out their enhancements, run trials of future functions, and experiment with updates.

Invences partnered with Trilogy Networks to build the FarmGrid platform for farms in Fargo, N.D., and Yuma, Ariz. FarmGrid used private 5G networks, edge-computing AI, and digital twins to make the operations more efficient.

“The project connects farms with sensors, analytics platforms, and autonomous equipment to enable precision agriculture, water optimization, and real-time decision-making,” Rallabandi says.


IEEE Senior Member Bhaskara Rallabandi talks about partnering with Trilogy Networks to build the FarmGrid platform for farms in Fargo, N.D., and Yuma, Ariz. Video: TeckNexus

Paying it forward through IEEE programs

Rallabandi says he believes staying involved with IEEE is important to his career development and a way to give back to the profession. He is a frequent invited speaker at IEEE conferences.

He is active with IEEE Future Networks and its Connecting the Unconnected (CTU) initiative. Members of the Future Networks technical community work to develop, standardize, and deploy 5G and 6G networks as well as successive generations.

CTU aims to bridge the digital divide by bringing Internet service to underserved communities. During its annual challenge, Rallabandi works with the winning students, researchers, and innovators to help them turn their concepts into affordable, practical solutions.


“CTU represents the best of IEEE,” he says. “It is about taking innovation out of conferences and into communities that need it the most.

“Connectivity should not be a luxury. Rural communities deserve an infrastructure that fits their needs.”

He participates in the recently launched IEEE Future Networks Empowerment Through Mentorship initiative, which helps innovators, entrepreneurs, and startups expand their companies by educating them about finance, marketing, and related concepts.

“IEEE gives me both a voice and a responsibility,” Rallabandi says. “We’re not just developing technology; we are shaping how humanity connects.”


Copyright © 2025