

What Will It Take to Build the World’s Largest Data Center?


The undying thirst for smarter (historically, that means larger) AI models and greater adoption of the ones we already have have led to an explosion in data-center construction projects, unparalleled both in number and scale. Chief among them is Meta’s planned 5-gigawatt data center in Louisiana, called Hyperion, announced in June of 2025. Meta CEO Mark Zuckerberg said Hyperion will “cover a significant part of the footprint of Manhattan,” and the first phase—a 2-GW version—will be completed by 2030.

Though the project’s stated 5-GW scale is the largest among its peers, it’s just one of several dozen similar projects now underway. According to Michael Guckes, chief economist at construction-software company ConstructConnect, spending on data centers topped US $27 billion by July of 2025 and, once the full-year figures are tallied, will easily exceed $60 billion. Hyperion alone accounts for about a quarter of that.

For the engineers assigned to bring these projects to life, the mix of challenges involved represents a unique moment. The world’s largest tech companies are opening their wallets to pay for new innovations in compute, cooling, and network technology designed to operate at a scale that would’ve seemed absurd five years ago.

At the same time, the breakneck pace of building comes paired with serious problems. Modern data-center construction frequently requires an influx of temporary workers and sharply increases noise, traffic, pollution, and often local electricity prices. And the environmental toll remains a concern long after facilities are built, due to the unprecedented 24/7 energy demands of AI data centers, which, according to one recent study, could emit the equivalent of tens of millions of tonnes of CO2 annually in the United States alone.


Regardless of these issues, large AI companies, and the engineers they hire, are going full steam ahead on giant data-center construction. So, what does it really take to build an unprecedentedly large data center?

AI Rewrites Building Design

The stereotypical data-center building rests on a reinforced concrete slab foundation. That’s paired with a steel skeleton and poured concrete wall panels. The finished building is called a “shell,” a term that implies the structure itself is a secondary concern. Meta has even used gigantic tents to throw up temporary data centers.

Still, the scale of the largest AI data centers brings unique challenges. “The biggest challenge is often what’s under the surface. Unstable, corrosive, or expansive soils can lead to delays and require serious intervention,” says Robert Haley, vice president at construction consulting firm Jacobs. Amanda Carter, a senior technical lead at Stantec, says a soil’s thermal conductivity is also important, as most electrical infrastructure is placed underground. “If the soil has high thermal resistivity, it’s going to be difficult to dissipate [heat].” Engineers may take hundreds or thousands of soil samples before construction can begin.

GPUs


Modern AI data centers often use rack-scale systems, such as the Nvidia GB200 NVL72, which occupy a single data-center rack. Each rack contains 72 GPUs, 36 CPUs, and up to 13.4 terabytes of GPU memory. The racks measure over 2.2 meters tall and weigh over one and a half tonnes, forcing AI data centers to use thicker concrete with more reinforcement to bear the load.


A single GB200 rack can use up to 120 kilowatts. If Hyperion meets its 5-gigawatt goals, the data-center campus could include over 41,000 rack-scale systems, for a total of more than 3 million GPUs. The final number of GPUs used by Hyperion is likely to be less than that, though only because future GPUs will be larger, more capable, and use more power.
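
The arithmetic behind those figures is simple to check. This sketch treats all 5 GW as rack power, so it’s an upper bound: a real campus reserves a large share of its power budget for cooling, networking, and storage.

```python
# Back-of-the-envelope check of Hyperion's rack and GPU counts,
# using the figures cited in the article.

CAMPUS_POWER_W = 5e9    # Hyperion's planned capacity: 5 GW
RACK_POWER_W = 120e3    # Nvidia GB200 NVL72: up to 120 kW per rack
GPUS_PER_RACK = 72

racks = CAMPUS_POWER_W / RACK_POWER_W
gpus = racks * GPUS_PER_RACK

print(f"{racks:,.0f} racks")  # ~41,667 racks
print(f"{gpus:,.0f} GPUs")    # ~3,000,000 GPUs
```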

Money


According to ConstructConnect, spending on data centers neared US $27 billion through July of 2025 and, according to the latest data, will tally close to $60 billion through the end of the year. Meta’s Hyperion project is a big slice of the pie, at $10 billion.

Data-center spending has become an important prop for the construction industry, which is seeing reduced demand in other areas, such as residential construction and public infrastructure. ConstructConnect’s third-quarter 2025 report stated that the quarter’s decline “would have been far more severe without an $11 billion surge in data center starts.”


There’s apparently no shortage of eligible sites, however, as both the number of data centers under construction and the money spent on them have skyrocketed. The spending has allowed companies building data centers to throw out the rule book. Prior to the AI boom, most data centers relied on tried-and-true designs that prioritized inexpensive and efficient construction. Big tech’s willingness to spend has shifted the focus to speed and scale.

The loose purse strings open the door to larger and more robust prefabricated concrete wall and floor panels. Doug Bevier, director of development at Clark Pacific, says some concrete floor panels may now span up to 23 meters and need to handle floor loads up to 3,000 kilograms per square meter, which is more than twice the load international building codes normally define for manufacturing and industry. In some cases, the concrete panels must be custom-made for a project, an expensive step that the economics of pre-AI data centers rarely justified.

Simultaneously, the time scale for projects is also compressed: Jamie McGrath, senior vice president of data-center operations at Crusoe, says the company is delivering projects in “about 12 months,” compared to 30 to 36 months before. Not all projects are proceeding at that pace, but speed is universally a priority.

That makes it difficult to coordinate the labor and materials required. Meta’s Hyperion site, located in rural Richland Parish, Louisiana, is emblematic of this challenge. As reported by NOLA.com, at least 5,000 temporary workers have flocked to the area, which has only about 20,000 permanent residents. These workers earn above-average wages and bring a short-term boost for some local businesses, such as restaurants and convenience stores. However, they have also spurred complaints from residents about traffic and construction noise and pollution.


This friction with residents includes not only these obvious impacts, but also things you might not immediately suspect, such as light pollution caused by around-the-clock schedules. Also significant are changes to local water tables and runoff, which can reduce water quality for neighbors who rely on well water. These issues have motivated a few U.S. cities to enact data-center bans.

Data Centers Often Go BYOP (bring your own power)

Meta’s Richland Parish site also highlights a problem that’s priority No. 1 for both AI data centers and their critics: power.

Data centers have always drawn large amounts of power, which nudged data-center construction to cluster in hubs where local utilities were responsive to their demands. Virginia’s electric utility, Dominion Energy, met demand with agreements to build new infrastructure, often with a focus on renewable energy.

The power demands of the largest AI data centers, though, have caught even the most responsive utilities off guard. A report from the Lawrence Berkeley National Laboratory, in California, estimated the entire U.S. data-center industry consumed an average load of roughly 8 GW of power in 2014. Today, the largest AI data-center campuses are built to handle up to a gigawatt each, and Meta’s Hyperion is projected to require 5 GW.


“Data centers are exacerbating issues for a lot of utilities,” says Abbe Ramanan, project director at the Clean Energy Group, a Vermont-based nonprofit.

Ramanan explains that utilities often use “peaker plants” to cope with extra demand. They’re usually older, less efficient fossil-fuel plants which, because of their high cost to operate and carbon output, were due for retirement. But Ramanan says increased electricity demand has kept them in service.

Meta secured power for Hyperion by negotiating with Entergy, Louisiana’s electric utility, for construction of three new gas-turbine power plants. Two will be located near the Richland Parish site, while a third will be located in southeast Louisiana.

Entergy frames the new plants as a win for the state. “A core pillar of Entergy and Meta’s agreement is that Meta pays for the full cost of the utility infrastructure,” says Daniel Kline, director of power-delivery planning and policy at Entergy. The utility expects that “customer bills will be lower than they otherwise would have been.” That would prove an exception, as a recent report from Bloomberg found electricity rates in regions with data centers are more likely to increase than in regions without.


CO2


Research published in Nature in 2025 projects that data-center emissions will range from 24 million to 44 million metric tonnes of CO2 equivalent annually through 2030 in the United States alone. While some materials used in data centers, such as concrete, lead to significant emissions, the majority of these emissions will result from the high energy demands of AI servers.

Estimating the carbon emissions of Hyperion is difficult, as the project won’t be completed until 2030. If the three new natural-gas plants planned as part of the project produce emissions typical for their type, however, they could lead to full life-cycle emissions of between 4 million and 10 million metric tonnes of CO2 annually—roughly equivalent to the annual emissions of a country like Latvia.

Concrete


Data centers are typically built from concrete, with steel used as a skeleton to reinforce and shape the concrete shell. While the foundation is often poured concrete, the walls and floors are most often built from prefabricated concrete panels that can span up to 23 meters. Floors use a reinforced T-shape, similar to a steel girder, measuring up to 1.2 meters across at its thickest point. The largest data centers include hundreds of these concrete panels.

The American Cement Association projects that the current surge in building will require 1 million tonnes of cement over the next three years, though that’s still a tiny fraction of the overall U.S. cement industry, which weighed in at roughly 103 million tonnes in 2024.


The three Entergy plants, which will generate a combined 2.26 GW, will use combined-cycle gas turbines that recapture waste heat from their exhaust. This boosts thermal efficiency to 60 percent and beyond, meaning more fuel is converted to useful energy. Simple-cycle turbines, by contrast, vent the exhaust, which lowers efficiency to around 40 percent.

Even so, total life-cycle emissions for the Hyperion plants could range from 4 million to over 10 million tonnes of CO2 each year, depending on how often the plants run and their final, as-built efficiency. On the high end, that’s as much CO2 as produced by over 2 million passenger cars. Fortunately, not all of Meta’s data centers take the same approach to power. The company has announced a plan to power Prometheus, a large data-center project in Ohio scheduled to come online before the end of 2026, with nuclear energy.
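
That multi-million-tonne range can be roughly sanity-checked from the plants’ combined capacity. The capacity factor and emissions intensity below are illustrative assumptions, not figures from Entergy or the cited research, and the result covers generation only (life-cycle totals, including upstream methane, run higher).

```python
# Rough sanity check on the emissions range for Hyperion's gas plants.
# Capacity factor and emissions intensity are assumed, illustrative values.

CAPACITY_W = 2.26e9      # combined output of the three planned plants
HOURS_PER_YEAR = 8760
CAPACITY_FACTOR = 0.8    # assumed: plants run near-continuously
KG_CO2_PER_KWH = 0.37    # typical combined-cycle gas, generation only

kwh = CAPACITY_W / 1e3 * HOURS_PER_YEAR * CAPACITY_FACTOR
tonnes = kwh * KG_CO2_PER_KWH / 1e3
print(f"~{tonnes/1e6:.1f} million tonnes CO2 per year")  # ~5.9

# Passenger-car equivalence, at roughly 4.6 tonnes CO2 per car per year
print(f"~{tonnes/4.6/1e6:.1f} million cars")             # ~1.3
```

A generation-only estimate of about 6 million tonnes lands inside the article’s 4-to-10-million-tonne life-cycle range, and the high end of that range divided by per-car emissions matches the “over 2 million passenger cars” comparison.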

But other big tech companies, spurred by the need to build data centers quickly, are taking a less efficient approach.


xAI’s Colossus 2, located in Memphis, is the most extreme example. The company trucked in dozens of temporary gas-turbine generators to power the site, which sits in a suburban neighborhood. OpenAI, meanwhile, has gas turbines capable of generating up to 300 megawatts at its new Stargate data center in Abilene, Texas, slated to open later in 2026. Both use simple-cycle turbines with a much lower efficiency rating than the combined-cycle plants Entergy will build to power Hyperion.

Demand for gas turbines is so intense, in fact, that wait times for new turbines are up to seven years. Some data centers are turning to refurbished jet engines to obtain the turbines they need.

AI Racks Tip the Scales

The demand for new, reliable power is driven by the power-hungry GPUs inside modern AI data centers.

In January of 2025, Mark Zuckerberg announced in a post on Facebook that Meta planned to end 2025 with at least 1.3 million GPUs in service. OpenAI’s Stargate data center plans to use over 450,000 Nvidia GB200 GPUs, and xAI’s Colossus 2, an expansion of Colossus, is built to accommodate over 550,000 GPUs.


GPUs, which remain by far the most popular processors for AI workloads, are bundled into human-scale monoliths of steel and silicon which, much like the data centers built to house them, are rapidly growing in weight, complexity, and power consumption.

Memory


In addition to raw compute performance, Nvidia GB200 NVL72 racks also require huge amounts of memory. An Nvidia GB200 NVL72 rack may include up to 13.4 terabytes of high-bandwidth memory, which implies a data-center campus at Hyperion’s scale will require hundreds of petabytes.
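
A rough sketch of the campus-wide total, assuming GB200 NVL72 racks throughout at the roughly 41,000-rack upper bound implied by Hyperion’s power budget:

```python
# Campus-wide high-bandwidth memory at Hyperion's scale, assuming
# GB200 NVL72 racks throughout. The rack count is an upper bound.

RACKS = 41_000
HBM_PER_RACK_TB = 13.4   # HBM per GB200 NVL72 rack

total_pb = RACKS * HBM_PER_RACK_TB / 1000
print(f"~{total_pb:,.0f} petabytes of HBM")  # ~549 PB
```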

The immense demand has sent memory prices soaring: The price of DRAM, specifically DDR5, has increased 172 percent in 2025.

Power

Hyperion is expected to use 5 gigawatts of power across 11 buildings, which works out to just under 500 megawatts per building, assuming each will be similar to its siblings. That’s enough to power roughly 4.2 million U.S. homes.
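
Those figures work out as follows, assuming an average U.S. household draw of about 1.2 kilowatts (an approximate figure derived from annual consumption data, not stated in the article):

```python
# Power per building and household equivalence for Hyperion.

CAMPUS_POWER_W = 5e9     # 5 GW planned capacity
BUILDINGS = 11           # currently planned data-center buildings
AVG_US_HOME_W = 1.2e3    # assumed ~1.2 kW average US household draw

per_building_mw = CAMPUS_POWER_W / BUILDINGS / 1e6
homes = CAMPUS_POWER_W / AVG_US_HOME_W
print(f"~{per_building_mw:.0f} MW per building")  # ~455
print(f"~{homes/1e6:.1f} million homes")          # ~4.2
```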


Just one Hyperion data center built at the Richland Parish site will consume twice as much power as xAI’s Colossus which, at the time of its completion in the summer of 2024, was among the largest data centers yet built.

Nvidia’s GB200 NVL72—a rack-scale system—is currently a leading choice for AI data centers. A single GB200 rack contains 72 GPUs, 36 CPUs, and up to 17 terabytes of memory. It measures 2.2 meters tall, tips the scales at up to 1,553 kilograms, and consumes about 120 kilowatts—as much as around 100 U.S. homes. And this, according to Nvidia, is just the beginning. The company anticipates future racks could consume up to a megawatt each.

Viktor Petik, senior vice president of infrastructure solutions at Vertiv, says the rapid change in rack-scale AI systems has forced data centers to adapt. “AI racks consume far more power and weigh more than their predecessors,” says Petik. He adds that data centers must supply racks with multiple power feeds, without taking up extra space.


The new power demands from rack-scale systems have consequences that are reflected in the design of the data center—even its footprint.

In 2022 Meta broke ground on a new data center at a campus in Temple, Texas. According to SemiAnalysis, which studies AI data centers, construction began with the intent to build the data center in an H-shaped configuration common to other Meta data centers.

LAND


Meta CEO Mark Zuckerberg kicked off the buzz around Hyperion by saying it would cover a large chunk of Manhattan. Many took that to mean Hyperion would be a single building of that size, which isn’t correct. Hyperion will actually be a cluster of data centers—11 are currently planned—with over 370,000 square meters of floor space. That’s far smaller than even New York City’s Central Park, which covers 6 percent of Manhattan.


Meta has room to grow, however. The Richland Parish site spans 14.7 million square meters in total, which is about a quarter the area of Manhattan. And the 370,000 square meters of floor space Hyperion is expected to provide doesn’t include external infrastructure, such as the three new combined-cycle gas power plants Louisiana utility Entergy is building to power the project.


Construction was paused midway in December of 2022, however, as part of a company-wide review of its data-center infrastructure. Meta decided to knock down the structure it had built and start from scratch. The reasons for this decision were never made public, but analysts believe it was due to the old design’s inability to deliver sufficient electricity to new, power-hungry AI racks. Construction resumed in 2023.

Meta’s replacement ditches the H-shaped building for simple, long, rectangular structures, each flanked by rows of gas-turbine generators. While Meta’s plans are subject to change, Hyperion is currently expected to comprise 11 rectangular data centers, each packed with hundreds of thousands of GPUs, spread across the 13.6-square-kilometer Richland Parish campus.


Cooling, and Connecting, at Scale

Nvidia’s ultradense AI GPU racks are changing data centers not only with their weight and power draw, but also with their intense cooling and bandwidth requirements.

Data centers traditionally use air cooling, but that approach has reached its limits. “Air as a cooling medium is inherently inferior,” says Poh Seng Lee, head of CoolestLAB, a cooling research group at the National University of Singapore.

Instead, going forward, GPUs will rely on liquid cooling. However, that adds a new layer of complexity. “It’s all the way to the facilities level,” says Lee. “You need pumps, which we call a coolant distribution unit. The CDU will be connected to racks using an elaborate piping network. And it needs to be designed for redundancy.” On the rack, pipes connect to cold plates mounted atop every GPU; outside the data-center shell, pipes route through evaporation cooling units. Lee says retrofitting an air-cooled data center is possible but expensive.

The networking used by AI data centers is also changing to cope with new requirements. Traditional data centers were positioned near network hubs for easy access to the global internet. AI data centers, though, are more concerned with networks of GPUs.


These connections must sustain high bandwidth with impeccable reliability. Mark Bieberich, a vice president at network infrastructure company Ciena, says its latest fiber-optic transceiver technology, WaveLogic 6, can provide up to 1.6 terabits per second of bandwidth per wavelength. A single fiber can support 48 wavelengths in total, and Ciena’s largest customers have hundreds of fiber pairs, placing total bandwidth in the thousands of terabits per second.
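
The aggregate arithmetic looks like this, with a hypothetical 200 fiber pairs standing in for the “hundreds” Bieberich describes:

```python
# Aggregate interconnect bandwidth using the WaveLogic 6 figures
# cited in the article. The fiber-pair count is an assumption.

TBPS_PER_WAVELENGTH = 1.6
WAVELENGTHS_PER_FIBER = 48
FIBER_PAIRS = 200            # assumed, standing in for "hundreds"

per_fiber = TBPS_PER_WAVELENGTH * WAVELENGTHS_PER_FIBER
total = per_fiber * FIBER_PAIRS
print(f"{per_fiber:.1f} Tb/s per fiber")                # 76.8
print(f"{total:,.0f} Tb/s across {FIBER_PAIRS} pairs")  # 15,360
```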


This is a point where the scale of Meta’s Hyperion, and other large AI data centers, can be deceptive. It seems to imply the physical size of a single data center is what matters. But rather than being a single building, Hyperion is actually a set of buildings connected by high-speed fiber-optics.

“Interconnecting data centers is absolutely essential,” says Bieberich. “You could think about it as one logical AI training facility, but with geographically distributed facilities.” Nvidia has taken to calling this “scale across,” to contrast it with the idea that data centers must “scale up” to larger singular buildings.

The Big but Hazy Future

The full scale of the challenges that face Hyperion, and other future AI data centers of similar scale, remains hazy. Nvidia has yet to introduce the rack-scale AI GPU systems these facilities will host. How much power will they demand? What type of cooling will they require? How much bandwidth must be provided? The answers can only be estimated.


In the absence of details, the gravity of AI data-center design is pulled toward one certainty: It must be big. New data-center designers are rewriting their rule book to handle power, cooling, and network infrastructure at a scale that would’ve seemed ridiculous five years ago.

This innovation is fueled by big tech’s fat wallet, which shelled out tens of billions of dollars in 2025 alone, leading to questions about whether the spending is sustainable. For the engineers in the trenches of data-center design, though, it’s viewed as an opportunity to make the impossible possible.

“I tell my engineers, this is peak. We’re being engineers. We’re being asked complicated questions,” says Stantec’s Carter. “We haven’t got to do that in a long time.”

This article appears in the April 2026 print issue.



Researchers are using ultrasound to trigger smell directly in the brain for VR



Current systems emphasize sight and sound, with some progress in haptics. Smell remains largely absent, despite its unusually strong connection to memory and emotion.


Flash Joule Heating Recovers The Good Stuff


Rare earth materials are a hot-button topic these days. They’re important for everything from electric vehicles to defence hardware, they’re valuable, and everyone wishes they had some to dig up in their backyard. Lithium, too, is a commodity nobody can get enough of, as demand for high-performance batteries grows each year.

When a material is desirable, and strategically important, we often start thinking of ways to conserve or recycle it because we just can’t get enough. In that vein, researchers have been developing a new technique to recover rare earth metals and lithium from waste streams so they can be put back to good use.

Get It Back

Enter the technique of flash joule heating. The method is relatively straightforward, in concept at least. It involves a high-energy discharge from a capacitor bank, which is passed through a sample of material to be recycled or refined. The idea is that the rapid energy discharge will vaporize some components of the sample while leaving others intact, allowing the desired material to be separated out and collected in a straightforward and economically viable manner. This approach is rather contrary to traditional techniques, which often involve large amounts of water, acids, or alkalis that can be expensive and messy to dispose of or reprocess.
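
To see why a capacitor discharge can deliver such intense heating, consider the stored energy E = ½CV² released over tens of milliseconds. The component values below are purely illustrative, not taken from the Rice work:

```python
# Illustrative capacitor-bank discharge: stored energy and the average
# power if it's released in a ~50 ms flash. Values are hypothetical.

CAPACITANCE_F = 0.06    # assumed 60 mF capacitor bank
VOLTAGE_V = 400         # assumed charge voltage
DISCHARGE_S = 0.05      # assumed flash duration, ~50 ms

energy_j = 0.5 * CAPACITANCE_F * VOLTAGE_V**2   # E = 1/2 * C * V^2
power_w = energy_j / DISCHARGE_S
print(f"{energy_j:.0f} J released")                      # 4800
print(f"{power_w/1e3:.0f} kW average during the flash")  # 96
```

Dumping kilojoules into a small sample in milliseconds yields tens of kilowatts of instantaneous heating power, which is what lets the flash selectively vaporize volatile components before heat can spread.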

A flash joule heating apparatus used to recover rare earth materials. Credit: Jeff Fitlow, Rice University

Researchers from Rice have developed this technique to recycle rare earth metals from waste magnets. Imagine all the magnets that get thrown away when things like hard drives and EV motors get trashed, and you can imagine there’s a wealth of rare earth material there just waiting to be recovered.

In this case, the high-energy discharge is applied to waste magnet material to vaporize the non-rare-earth components. The discharge is performed in the presence of chlorine gas, which chlorinates materials like iron and cobalt in the sample; the resulting volatile chlorides are removed, leaving the rare earth elements behind in solid form. Laboratory experiments were able to refine the material to 90% purity in a single step.

In the rare earth case, the undesired material is vaporized and removed by the chlorine gas while the rare earths remain behind in the solid phase. For capturing lithium from spodumene ore, it’s the opposite. Credit: research paper

As per the research paper, lifecycle analysis suggested the technique could reduce energy use by 87% compared to contemporary hydrometallurgy recycling techniques, while also reducing greenhouse gas emissions in turn and slashing operating costs by 54%.

The technique can also be applied to separate lithium from spodumene ore. Spodumene is abundant, particularly in the United States, and improved ways to process it could increase its value as a source of lithium. When processing spodumene with flash joule heating, the discharge of electric current makes the lithium in the ore available to react with chlorine gas. The rapid heating causes the vaporized lithium to form lithium chloride, which can be bled off, while other components of spodumene, like aluminium and silicon compounds, remain behind. It’s basically the opposite of the rare earth recovery method.

As outlined in the research paper, this method achieved recovery of lithium chloride with 97% purity and a recovery rate of 94% in a single step. It’s also a lot simpler than traditional extraction methods that involve long periods of evaporating brine or using acid leaching techniques. Indeed, the laboratory rig was built using an arc welder to achieve the powerful discharge. Other researchers are examining the technique too and achieving similar results, hoping that it can be a cleaner and more efficient method of recovery compared to traditional hydrometallurgy and pyrometallurgy techniques.

The lithium recovery process using flash joule heating. Credit: research paper

These methods remain at the research stage for the time being. Pilot plants, let alone commercial operations, are still a future consideration. Regardless, the early work suggests there is economic gain to be had by developing recycling plants that operate in this manner. Assuming the technique works at scale and makes financial sense, expect it to become a viable part of the recycling industry before long.



Coral raises $12.5M to automate healthcare’s administrative back office


The New York startup has built AI that reads handwritten fax forms, processes prior authorisations, and completes patient intakes in under five minutes, all without asking providers to change how they work. It has reached multiple millions in revenue in under a year and is targeting 4x growth by end of 2026.


Coral, the New York-based AI startup automating administrative workflows for specialty healthcare providers, has raised $12.5 million in a Series A led by Lightspeed and Z47.

The company was founded in 2024 by Ajay Shrihari, a robotics and AI researcher, and Aniket Mohanty, who has a background in medical image processing.

In under a year of commercial operation, Coral has reached multiple millions in annual revenue and is targeting 4x growth before the end of 2026.



The problem Coral is solving is not technological complexity; it is administrative volume. In American healthcare, every appointment generates a trail of prior authorisation requests, referral packets, insurance eligibility checks, and discharge paperwork.

Much of this flows through fax machines, which remain deeply embedded in clinical workflows despite being a technology from a previous era.


Rather than attempting to replace fax infrastructure, an approach that would require providers to rebuild systems they cannot afford to rebuild, Coral connects to existing EHR systems, fax lines, and payer portals and automates around them.

Providers do not change how they work. Coral changes what happens inside that workflow.

The company began in the durable medical equipment sector, one of the most fax-intensive corners of outpatient care, where a single order can require multiple rounds of documentation before approval.

DASCO, a home medical equipment provider, has been an early customer, describing turnaround times dropping from hours or days to minutes.


Coral then extended the same model into infusion centres, where a delayed authorisation means a missed dose, not a delayed appointment, and into specialty pharmacy.

In each new vertical, the same administrative bottleneck appeared in the same shape.

The product’s core capability is document understanding at healthcare’s specific level of messiness: handwritten fax forms, scanned insurance cards, prior authorisation templates, and payer portal screens.

Coral’s models have reached 99.7% accuracy across these document types, a threshold the company describes as the minimum viable standard for healthcare, where errors have clinical and financial consequences.

Complete patient intakes, including complex cases, now run in under five minutes. When information is missing, which is frequent in this environment, the platform coordinates with payers, patients, and referral sources to resolve the gap without requiring staff intervention.


The strongest signal in the commercial story is not the revenue figure but the payment behaviour. A portion of Coral’s customers are paying the full contract value upfront, an unusual dynamic in enterprise software, and a striking one in a sector where vendor evaluation cycles are typically slow and risk-averse.

The explanation is mechanical: when a workflow that previously took hours completes in under five minutes at high accuracy, the return on investment is immediate and visible. Commit now, stop the queue now.

Coral recently shipped AI-powered voice and text workflows that automate follow-ups with payers, patients, and referral sources, replacing calls that previously required a staff member to pick up the phone.

The next phase of product development includes an AI workflow builder that will let providers design and deploy their own administrative processes without involving IT, and a co-pilot layer that surfaces operational intelligence from the data already flowing through the platform: which payers have the highest denial rates and why, where cases are stalling in the authorisation process, which referral sources convert reliably and which do not, and what changes would improve outcomes on insurance claim resubmissions.


Rohil Bagga, investor at Lightspeed, described the company as “delivering real outcomes at scale” in an environment where legacy automation has historically failed.

Ashwin KP, investor at Z47, framed the investment thesis around the specific characteristics of healthcare administration: over a trillion dollars in annual overhead, chronically underserved by technology, and requiring deep vertical expertise to crack.

The Series A funds team growth and product development, with Coral adding engineering talent alongside people who have spent careers inside healthcare operations.



iPhone Ultra Launch Ahead: Six Big Upgrades Expected


Apple is expected to introduce its first foldable iPhone later this year, and early reports suggest it may be called the iPhone Ultra. The newest leaks from tipster Jon Prosser suggest the device could bring one of the biggest changes to the iPhone lineup in years, especially in terms of design and usability. Here are six major upgrades that the iPhone Ultra is expected to offer.

Foldable Design with a New Look

iPhone Ultra Front Design
Image: FPT

The iPhone Ultra is expected to come with a completely new foldable design. Instead of a regular smartphone shape, it may open like a book, giving users a much larger screen when unfolded. It will also have a wider design instead of the usual tall shape seen in other foldables. The outer cover screen will measure 5.3 to 5.5 inches; once unfolded, the inner screen will expand to 7.8 inches, bringing the user experience closer to that of an iPad mini.

The use of a titanium frame may help make it durable while keeping it lightweight. Another key highlight is the expected crease-free inner screen, which could improve the overall viewing experience. In terms of looks, the device may be limited to black and white color options.

As on other folding phones, Touch ID will probably make a return. It’s much easier to put a fingerprint sensor on the power button than to integrate Face ID sensors into both displays.

Software & Camera Configuration

Camera design of the iPhone Ultra (Image: FPT)

One of the key differences between the iPhone Ultra and the Pro models is the camera configuration. Unlike other flagship iPhones, the Ultra is expected to have only two rear cameras: a 48 MP primary and a 48 MP ultra-wide. Without a telephoto lens, zoom options may be limited. The dual-screen design will also require two front-facing cameras.

iOS 27 is likely to introduce new multitasking features designed for the iPhone Ultra. Expected improvements include multi-app functionality that lets users run several apps at once, and app layouts closer to what the iPad offers, particularly on the inner display. It won’t be iPadOS, but rather selected elements borrowed from that operating system.


Everything will be handled by the new A20 Pro chip, which may be built on a 2 nm manufacturing process. It is too early to judge performance, but the iPhone Ultra is expected to ship with 12 GB of RAM and Apple’s new C2 modem.

Expected Price

Apple is expected to position the iPhone Ultra as a premium product. The device is expected to start at around $1,999, making it Apple’s most expensive iPhone yet. However, since it offers both phone- and tablet-like experiences in a single device, some users may find the premium pricing justified.



Tech

ASUS Drop Zone Service Now Available in More Cities Across India


When Asus launched the Drop Zone program last year, it was seen as a commendable gesture toward making repairs less taxing for consumers. In the same vein, Asus is now expanding the initiative in India by adding 22 new stores to the network. The program, which allows users to submit laptops for servicing at ASUS Exclusive Stores instead of dedicated service centers, is being rolled out across multiple regions, including Delhi NCR, Haryana, Karnataka, Kerala, Maharashtra, Tamil Nadu, Uttarakhand, Uttar Pradesh, and West Bengal.

What is Asus Drop Zone Service?

The Drop Zone initiative is designed to simplify the repair process by allowing customers to drop off and collect their devices at nearby ASUS stores. This eliminates the need to travel to service centers, which can often be inconvenient—especially for users in tier-2 and tier-3 cities.

With this expansion, ASUS is clearly trying to address common pain points like accessibility, turnaround time, and service transparency. Customers also get multiple service options, including carry-in support for immediate consultation, on-site servicing by technicians, and the Drop Zone model for easier logistics.

ASUS says it already has a wide after-sales network in India, with over 200 service centers and on-site support covering more than 17,000 pin codes across 761 districts. The Drop Zone expansion adds another layer to this ecosystem, bringing services closer to users. The company also offers 24/7 support through calls, chat, email, and remote troubleshooting. Speaking on the matter, Arnold Su, VP, Consumer and Gaming PC, System Business Group, ASUS India, said:


“At ASUS, our focus has always been on delivering a reliable and consistent ownership experience that extends well beyond the product itself. The expansion of our Drop Zone initiative into 22 additional stores marks a significant step towards making after-sales support more accessible and transparent for our customers. Guided by our 4A framework, we remain committed to building a service ecosystem that is responsive, convenient, and aligned with evolving customer needs.”



Tech

Company discards 32GB server RAM sticks worth $20,000



At current market prices, the hardware appears valuable. Comparable SK hynix registered DDR4 modules currently sell for about $287.95 each, putting the total value at more than $20,000. However, that figure reflects today’s pricing, not what the hardware was worth when it was removed from service.


Tech

Klipsch OJAS kO-R2 Speaker Debuts at Milan Design Week 2026: Only 600 Pairs, Don’t Expect Them to Last Long


Klipsch is returning to Milan Design Week 2026 with something that goes beyond another product launch; it’s a continuation of one of the more interesting collaborations in modern hi-fi. Following the limited-run kO-R1 in 2024, Klipsch and OJAS have officially unveiled the kO-R2, a new loudspeaker created with Devon Turnbull, the artist and acoustic designer behind OJAS, as part of Klipsch’s 80th anniversary.

That matters more than the usual show-floor debut. The first kO-R1 wasn’t just a speaker; it was a statement about where heritage audio could go when handed to someone outside the traditional engineering echo chamber. Turnbull approached Klipsch’s horn-loaded DNA with a minimalist, almost gallery-first mindset, and the result landed somewhere between serious hi-fi and functional art. It sold out quickly and didn’t need a stack of Audio Science Review graphs to justify itself. Turns out art and musical enjoyment still carry more weight than rigid objectivism.

The kO-R2 builds directly on that foundation. Klipsch and OJAS describe it as a blend of minimalist design, advanced acoustic thinking, and bespoke materials, with an emphasis on form that’s meant to live as comfortably in a design exhibition as it does in a listening room. There are no performance specifications or pricing details yet, which feels intentional. This isn’t being positioned as a spec war product; it’s being framed as a continuation of an idea.

Klipsch OJAS kO-R2 in oak

And that’s the real story. At a time when much of the industry is chasing incremental upgrades and feature checklists, Klipsch is doubling down on a collaboration that prioritizes identity, experience, and cultural relevance. Bringing the kO-R2 to Milan Design Week instead of a traditional audio show makes that point clear: this is as much about design language and audience expansion as it is about sound.

Whether the kO-R2 ultimately delivers on the acoustic side will come later. For now, Klipsch and OJAS have done something more difficult; they’ve made people outside the usual audiophile bubble pay attention. 


Unveiled at Milan Design Week 2026

Set against the backdrop of the Fondazione Luigi Rovati, in partnership with USM Modular Furniture and Karimoku, Klipsch and OJAS are hosting curated, appointment-only listening sessions during Milan Design Week through April 26, 2026. Those who get access are encouraged to bring their own music, turning the kO-R2 preview into something more personal than the usual show-floor demo.

After its debut in Milan, a broader launch for the kO-R2 is expected in June 2026.

“Working with Klipsch continues to be an exploration of how we can strip audio down to its most essential, emotional core,” said Devon Turnbull. “With the kO-R2, we focused on creating something that feels immediate and human—where the technology disappears, and the listener is left with a pure, physical connection to the music.”

kO-R2 Design Concept

The kO-R2 is a two-way, sectoral horn-loaded loudspeaker positioned as the next step in the Klipsch x OJAS collaboration. It’s handcrafted in Hope, Arkansas, by the same team behind Klipsch’s legacy designs, and features an OJAS-developed multisectoral horn paired with Baltic birch cabinetry. The goal is clear: deliver the dynamic, low-distortion traits horn systems are known for, while presenting something that looks just as considered as it sounds.

Klipsch OJAS kO-R2 Loudspeaker in Hammertone Silver.

The core of the latest speaker design is the OJAS 1506 multisectoral horn, fabricated from heavy cast aluminum and finished with electrophoresis and a flat black powder coat.

The exponential horn pulls from classic Western Electric and Altec Lansing design cues, but it’s not a straight throwback. The square, isosceles trapezoidal mouth is doing real work here, controlling dispersion in both planes rather than just looking the part. The result should be more even frequency distribution and a wider, more stable listening window, which is exactly what these older horn concepts were chasing in the first place.


The kO-R2 leans into a restrained, material-first design without skimping on the hardware. It uses a high-quality compression driver, anodized aluminum binding posts, and anti-vibration feet—nothing flashy, just components that make sense for a horn-loaded design like this.

Details like the laser-engraved metal ID plate add a layer of exclusivity without turning it into a gimmick, and the five-step high-frequency attenuator is there for a reason: dialing in top-end energy to match the room and placement, which matters more with horns than most speaker types.


Calling it a “museum piece” isn’t entirely off base, but the real goal here isn’t to redefine audiophile expectations. It’s to bridge two worlds that don’t usually overlap this cleanly: serious acoustic design and industrial design that people actually want to live with.

“The kO-R2 represents a powerful intersection of heritage and forward-thinking design. Partnering with Devon allows us to honor Klipsch’s 80-year legacy while pushing into new creative territory—delivering a product that is as culturally relevant as it is acoustically exceptional,” said Vinny Bonacorsi, COO of Klipsch.


The Bottom Line 

This isn’t a typical brand crossover. Klipsch is working within its core strength—horn-loaded design—while Devon Turnbull brings a different perspective on how these systems look and live in real spaces. The kO-R2 builds on the kO-R1 with a larger, more complex horn and a move to a floorstanding design, which should translate into greater scale and output.

There are still no detailed specifications or pricing, but the context matters. The kO-R1 launched at $8,498 per pair and sold out quickly. For the kO-R2, production is expected to be limited to around 600 pairs, so availability is going to be tight from the start.

It’s aimed at a specific buyer: someone who values both the design and the underlying acoustic approach, and who is comfortable buying into the concept without a full data sheet upfront. Between the prior pricing and limited run, this won’t be a mainstream Klipsch product—and that’s the point.

Klipsch OJAS kO-R2 Loudspeaker in Red Oak veneer.

Price & Availability

Once released (expected June 2026), 600 pairs of the kO-R2 will be available worldwide in either Red Oak veneer or Hammertone Silver with a powder-coated, matte-black horn. Pricing has yet to be announced.



Tech

Who Owns Carroll Gas Stations?


Two entrepreneurs, Benson Phelps and Carroll Faye, teamed up to open a small coal and wood delivery company in Baltimore in 1907. The business saw success in its early years, expanding rapidly over its first couple of decades. Faye decided to move on to other ventures and sold his stake in the business to Phelps, but the company continued to use Faye’s first name as its brand. The Carroll Independent Fuel Company began selling oil in the 1930s under the guidance of Phelps, and it never stopped. Today, drivers can still buy fuel from the same company, although they’ll now recognize it as Carroll Motor Fuels.

The Carroll network of gas stations might have grown significantly over its century-plus of trading, but its ownership structure has remained consistent. It’s still an independent, family-owned business, with various members of the Phelps family at the helm. John Phelps serves as the company’s CEO and President, while Richard B. Phelps III holds the title of Executive Vice President alongside C. Howard Phelps. Several more Phelps family members hold leadership roles.

Carroll isn’t the only gas station chain that has remained family owned since its inception. The Love’s chain of gas stations is also still owned by members of its founding family, and it has risen to become one of America’s largest privately owned companies.


The Carroll network operates under multiple brands

Alongside its own-brand gas stations, Carroll Independent Fuel also operates stations under various other names. The East Coast chain’s network includes stations that use Sunoco branding, which is most famously associated with the NASCAR Cup Series. Other locations are branded as BP gas stations, with Carroll working with the British-owned oil company since 2006.


In 2012, Carroll Independent Fuel also acquired High’s, a Baltimore-based chain of convenience stores. In an interview with the Baltimore Business Journal, Executive Vice President Howard Phelps said that the company realized that “competition on the gasoline retail side was transitioning to convenience,” and that Carroll wanted “to go toe to toe” with rivals like Sheetz and Wawa.

The Carroll network continues to grow, with the company acquiring seven new sites in 2022. The new locations helped develop its network outside the company’s home state of Maryland, with Delaware, New Jersey, and Pennsylvania all seeing new Carroll-operated locations launched.



Tech

Fluidic Contact Lens Treats Glaucoma


We’ve always been interested in fluidic computers, which use moving fluids to perform logic operations. Now, Spectrum reports that researchers have developed an electronics-free contact lens that monitors glaucoma and can even help treat it.

The lens is made entirely of polymer and features a microfluidic sensor that can monitor eye pressure in real time. It also has pressure-activated drug reservoirs that dispense medicine when pressure exceeds a fixed threshold. Spectrum has also published a video of the device.

This isn’t the first attempt to use a contact lens against glaucoma, a disease that affects more than 80 million people. In 2016, Triggerfish took a similar approach, but its lens used electronic components, which poses problems both for manufacturing and for the people wearing them.

The device relies on 3D-printed molds to create the channels and reservoirs in the lens. A special silk sponge in the reservoirs can absorb up to 2,700 times its own weight. One sponge holds a red fluid that is forced by pressure into a serpentine microchannel. A phone app uses a neural network to convert an image of the red fluid into a pressure reading.


Two more sponges hold drugs that release at a given pressure determined by the width of the associated microchannel. This allows the possibility of increasing the dose at a higher pressure or even delivering two drugs at different pressure levels.
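To make the read-out and release logic concrete, here is a minimal sketch of the two mappings described above. The calibration constants are invented for illustration, and the real device infers pressure from an image of the fluid with a neural network rather than a linear formula.

```python
# Illustrative only: assumes a linear calibration between how far the
# red fluid advances along the serpentine channel and intraocular
# pressure. Both constants below are made up, not from the paper.

def pressure_from_fill(fill_mm: float,
                       mm_per_mmhg: float = 0.5,
                       baseline_mmhg: float = 8.0) -> float:
    """Map fluid advance in the channel (mm) to an estimated
    intraocular pressure (mmHg)."""
    return baseline_mmhg + fill_mm / mm_per_mmhg

def should_release_drug(pressure_mmhg: float,
                        threshold_mmhg: float = 21.0) -> bool:
    """A reservoir dispenses once pressure passes a fixed threshold;
    in the lens itself, channel width sets that threshold in hardware."""
    return pressure_mmhg > threshold_mmhg

reading = pressure_from_fill(fill_mm=7.5)  # 8.0 + 7.5/0.5 = 23.0 mmHg
print(reading, should_release_drug(reading))
```

With two reservoirs tuned to different thresholds, the same comparison yields the graduated dosing the researchers describe.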

It is fairly hard to hack your own contact lenses, although we’ve seen it at least once. But smart contacts are not as rare as you might think.



Tech

‘Han Solo Wants to Be Me’: Artemis II’s Victor Glover on Flying the Orion


Even if you’re 250,000 miles from Earth, sleep is important. However, for all the life-sustaining accoutrements aboard the Orion spacecraft, the capsule lacked bedrooms, leaving the four-person Artemis II crew with a truly bizarre sleeping arrangement.

“I slept really close to an air conditioning vent. And so I’d wake up and I just see this big hunk of metal,” Glover told CNET during a video call. “And it was like, ‘Oh, I’m in space. I am weightless.’”

Sleep wasn’t just a means for the astronauts to recharge; it also grounded them during their historic journey. Glover explained, “What really resonated with me is we’re also humans. It’s like camping, and this is a very important part of this journey.”


Watch this: Artemis II’s Victor Glover Chats With CNET

Artemis II was the first crewed mission to the moon in over 50 years. It followed Artemis I, a 2022 uncrewed mission that was the first for NASA’s new Space Launch System rocket and Orion spacecraft. The goal for Artemis II was to have a crew test the spacecraft, life support systems, the SLS rocket and the procedures needed for future lunar missions that will involve landing on the moon and eventually building a base there.

Glover, the Orion’s pilot, along with commander Reid Wiseman and mission specialists Christina Koch and Jeremy Hansen, made up the Artemis II crew. The mission made a lot of history. It’s the first time a woman, a Black man or a Canadian has journeyed to the moon. The four Artemis II astronauts traveled 252,756 miles from Earth, farther than any humans before them, surpassing the record set by the 1970 Apollo 13 mission.

NASA’s Orion spacecraft, photographed with a camera mounted on its solar array wings. (Image: NASA)

This wasn’t Glover’s first time in space. In 2020, with a Falcon 9 rocket for liftoff, he piloted the Crew Dragon capsule to and from the International Space Station for NASA’s SpaceX Crew-1 mission, spending over 167 days in space. But Artemis II gave Glover the opportunity to be the first to fly the Orion, a new vehicle designed for Artemis missions. For the majority of the nearly 10-day journey, Orion was on autopilot. But Glover had several opportunities to take manual control of the spacecraft to test its handling.

“It was such a treat and a joy,” Glover said about flying the Orion. “It was a test pilot’s dream to fly a new spaceship for the first time by hand.”


Even after spending time training to fly in a simulator back on Earth, he was surprised by how responsive the Orion’s hand controller was and by the clarity of the cameras used to maneuver the craft around the Interim Cryogenic Propulsion Stage, the rocket’s upper stage. He said the view from the cameras and monitors was like “looking out a window.”

Artemis II's Victor Glover looking off to the side

Artemis II astronaut and pilot Victor Glover wears an orange flight suit.

NASA

When I asked Glover if he felt like Han Solo when piloting the Orion, he retorted, “Han Solo wants to be me when he grows up!” Throughout my interview, Glover was gracious, passionate and funny.


“I get to do stuff that’s cooler than Han Solo. I mean, just the fact that it’s real, it’s better.”

While landing on the moon wasn’t in the cards for this trip, the Orion crew traveled about 4,000 miles beyond the moon, allowing them to see parts of the lunar surface no human had seen before. For comparison, Apollo missions flew about 70 miles above the moon to make landings, limiting how much of the surface they could actually see.
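That difference in viewing distance has a simple geometric consequence: the fraction of a sphere’s surface visible from a point at distance d from its center is (1 − R/d)/2. A quick back-of-the-envelope sketch, using rounded values rather than NASA’s exact figures:

```python
MOON_RADIUS_KM = 1737.4  # mean lunar radius

def visible_fraction(altitude_km: float) -> float:
    """Fraction of a sphere's surface visible from a point
    altitude_km above it: (1 - R/d) / 2, with d the distance
    from the sphere's center."""
    d = MOON_RADIUS_KM + altitude_km
    return 0.5 * (1.0 - MOON_RADIUS_KM / d)

# Apollo orbited roughly 70 miles (~110 km) up; Artemis II passed
# about 4,000 miles (~6,440 km) beyond the surface.
print(f"Apollo  (~110 km):  {visible_fraction(110):.1%}")   # about 3%
print(f"Orion (~6,440 km):  {visible_fraction(6440):.1%}")  # about 39%
```

So even before accounting for the different ground track, the higher pass puts an order of magnitude more of the moon in view at once.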

Earthset captured through the Orion spacecraft window at 6:41 p.m. EDT, April 6, 2026, during the Artemis II crew’s flyby of the moon. (Image: NASA)

The images that Glover and the crew took of the moon were stunning. Shots like the Earthset were a reminder of how beautiful our planet is and our place within the solar system. The astronauts even witnessed a total solar eclipse as they rounded the far side of the moon. But none of the photos they took compares to what they saw, according to Glover.

“I could see the curvature of the moon. Depth is just one aspect that you cannot see in the pictures. But here’s the other thing, the pictures lack scale.”

When Artemis II flew over the terminator, the crew said this boundary between day and night was “anything but a straight line,” according to NASA. (Image: NASA)

For the lunar flyby, the Orion was moving fast: 60,863 mph relative to Earth, but only 3,139 mph relative to the moon, according to NASA. The speed meant the shadows across the surface were constantly morphing into different shapes. Glover was particularly enamored with the moon’s terminator, where the light and dark sides of the moon meet. The terminator isn’t fixed and depends on the moon’s position relative to the sun. As Orion moved, it transformed into various shapes that looked like letters of the alphabet.

“People know, I fell in love with the terminator when I got to see the real one up close. I watched the terminator go from a letter C to a letter D, which means there was a point when the moon was half light, half dark. It was pointing right at me.”

The Artemis II astronauts take a selfie wearing eclipse glasses, using an iPhone 17 Pro Max. (Image: NASA)

Artemis II’s lunar flyby was a highlight of the journey for many of us on Earth, in part because we could watch it in real time on streaming services like Netflix. Nearly the entire mission was streamed live on NASA’s website and YouTube channel, making it feel like a reality show. One minute you’re watching the crew eat, work out, take photos of the moon; the next, there’s a random jar of Nutella floating by one of the cameras. I asked Glover whether it felt like he was on a TV show while on the Orion.

“It did not feel like a reality show on my end,” said Glover. “For you to see the science and hear us describing the moon, and to see us flying the spaceship by hand, and to see bedtime and bath time and teeth brush time, that’s what it’s like. The mission was all of those things.”

Glover was ecstatic to hear how I and others felt so connected to the crew during their mission. He said it was important to NASA to let the world in on everything it took to send four people a quarter of a million miles away.

“I think that maybe one of the really, most special things about this mission is how much you were able to see,” Glover said with a smile. “It makes me feel good that you felt like you were there.”


Watch this: Getting Personal With the Crew of Artemis II | Tech Today




Copyright © 2025