
Tech

We Now Know How Many People the CDC Is Monitoring for Hantavirus

The US Centers for Disease Control and Prevention is monitoring 41 people in the US for the Andes hantavirus after a cruise ship was hit with a rare outbreak, but the risk to the public remains low, according to health officials.

This includes a group of 18 passengers from the cruise ship who are now in quarantine facilities in Nebraska and Georgia. The agency is also monitoring passengers who returned home before the outbreak was identified and others who were exposed during travel, specifically on flights where a symptomatic case was present.

“Most people under monitoring are considered high-risk exposures, and CDC recommends that everyone under monitoring stay at home and avoid being around people during their 42-day monitoring period,” David Fitter, incident manager for the CDC’s hantavirus response, told reporters during a media briefing on Thursday. “We emphasize not to travel across all these groups.”

The Andes virus is a strain of hantavirus found in South America that can be transmitted from person to person. Typically, hantavirus is passed to humans when they come into contact with rodent droppings or urine. The resulting respiratory disease can cause severe difficulty breathing and carries a fatality rate of around 35 percent. As of Thursday, the World Health Organization has confirmed 11 cases of the Andes virus among passengers of the MV Hondius cruise ship, including three deaths.

A Department of Health and Human Services official confirmed to WIRED that all Americans who were on board the Hondius at any point during its journey are now back in the US.

The CDC has legal authority to issue federal quarantine and isolation orders to prevent the spread of certain communicable diseases into or within the US. Fitter said on Thursday that the CDC is not using that authority to manage all 41 of the individuals who were potentially exposed to the hantavirus.

“Our approach is based on risk and evidence,” he said. “We are working closely with passengers and public health partners to ensure monitoring and rapid access to care if symptoms develop. Our goal is to work with them and alongside them, building plans based on their specific situations to protect the health and safety of passengers and American communities.”

Individuals will be monitored for 42 days, which is the amount of time it can take for hantavirus symptoms to appear after exposure. Symptoms begin as flu-like, with fever, muscle aches, and fatigue, then rapidly progress to severe respiratory distress.

Sony Xperia 1 VIII vs Xperia 1 VII: What’s new?

Sony has just announced its latest flagship Android phone, the Xperia 1 VIII, but how does it measure up to last year's model?

We’ve compared the specs of the new Xperia 1 VIII to the VII and highlighted the key differences and updates between the two. Keep reading to see what’s really new with the Sony Xperia 1 VIII compared with the Xperia 1 VII, and to decide whether it’s worth upgrading.

For more options, visit our best Android phones, best smartphones and best camera phones guides instead.


Specs comparison table

Sony Xperia 1 VIII vs Sony Xperia 1 VII

Colours: Graphite Black, Garnet Red, Iolite Silver (256GB only), Native Gold (1TB only) vs Moss Green, Orchid Purple, Slate Black
Dimensions: 162 x 74 x 8.3mm vs 162 x 74 x 8.2mm
Display: 6.5-inch FHD+ (both)
IP Ratings: IPX5, IPX8 and IP6X (both)
Front Camera: 12MP (both)
Rear Cameras: 48MP + 48MP + 48MP vs 48MP + 48MP + 12MP
Battery: 5000mAh (both)
UK RRP: £1399 (both)
Weight: 200g vs 197g

Price and Availability

At the time of writing, the Sony Xperia 1 VIII is available for pre-order and will launch from mid-June. The handset has a starting RRP of £1399/€1499 for the 256GB iteration, which rises to an eye-watering £1849/€1999 for the 1TB Native Gold version.

Although the Sony Xperia 1 VII shares the same starting RRP of £1399/€1499, we would expect this price to drop as its successor starts to roll out.

Sony Xperia 1 VIII has a new AI Camera Assistant

Sony has unveiled a new AI Camera Assistant within the Xperia 1 VIII, which is designed to make “photography even more enjoyable”. Powered by Xperia Intelligence, Sony’s AI technology, the AI Camera Assistant will automatically recognise a scene on camera and suggest different options for your image. It does this by assessing what the subject actually is, plus the weather and lighting conditions, to provide suggestions for colour tones, lens effects and bokeh expressions.

The Xperia 1 VII also uses AI within its camera set-up with AI Camerawork, which ensures your subject always remains in focus. As part of this, there’s Posture Estimation, which anticipates human movement, while Subject Position Lock maintains a subject’s position in the frame.

Sony Xperia 1 VII camera. Image Credit (Trusted Reviews)

Xperia 1 VIII’s telephoto sensor is around four times larger than the VII’s

Speaking of photography, one of the reasons to opt for a Sony Xperia is undoubtedly its camera set-up. In fact, the Xperia 1 VI, the VII’s predecessor, has a spot on our best camera phones guide.

One of the biggest upgrades with the Xperia 1 VIII is its telephoto camera, which now sports a sensor around four times larger than the VII’s, at 1/1.56 inches. This, according to Sony, will deliver clear and detailed images “even in low-light conditions”.

Sony also explains that all lenses will benefit from RAW multi-frame processing, which expands dynamic range (HDR) and performs noise reduction in low light.

Sony Xperia 1 VIII. Image Credit (Sony)

Xperia 1 VIII’s speakers promise better overall sound quality

Both the VIII and VII are equipped with a 3.5mm headphone jack – something of a rarity in modern smartphones. The jack supports high-quality audio with wired headphones, and Sony claims it offers “exceptional sound quality inherited from Walkman”.

Sony Xperia 1 VIII headphone jack. Image Credit (Google)

However, the VIII also benefits from newly developed speaker units for further advances in stereo performance. The speakers are designed to produce deeper bass, more extended high frequencies and to create a wider and deeper soundstage too. 

Sony says that voices and instruments will be reproduced with greater clarity and richness for a more immersive and engaging audio experience. We’ll have to wait until we review the handset to determine how well the speakers really perform.

Snapdragon 8 Elite Gen 5 vs Snapdragon 8 Elite

Unsurprisingly for a 2026 Android flagship, the Xperia 1 VIII runs on Qualcomm’s Snapdragon 8 Elite Gen 5 chip. Found in many of the best Android phones, we’ve found that the Snapdragon 8 Elite Gen 5 offers brilliant everyday performance and copes admirably with more intense tasks like gaming or even video editing. In comparison, the Xperia 1 VII runs on last year’s Qualcomm flagship chip, the Snapdragon 8 Elite.

Sony Xperia 1 VII. Image Credit (Trusted Reviews)

Sony promises that the Xperia 1 VIII sees a 20% improvement in processing speed and performance. Having said that, Snapdragon 8 Elite remains a solid chip that performs well within the Xperia 1 VII, and you’re unlikely to realistically notice much of a difference in everyday use.

Even so, both handsets promise decent efficiency with a two-day battery life.

Xperia 1 VIII houses its cameras in a revamped square bump

Flip the Xperia 1 VIII and VII over and you’ll notice how different their rears are. While the VII looks somewhat reminiscent of the Samsung Galaxy S26, albeit with its three rear cameras in a raised bump, the VIII’s own trio are housed in a square bump instead.

Otherwise, both handsets are equipped with a dedicated shutter button that mimics a standalone camera and makes shooting easier.

Early Verdict

With a flagship processor, a larger telephoto sensor and a new design, the Sony Xperia 1 VIII is a promising overall upgrade over its predecessor. However, with a hefty £1399/€1499 starting price, it’s one of the more expensive options currently on the market.

With this in mind, if you’re still sporting the Xperia 1 VII then there’s really little reason to upgrade. While its design isn’t quite as sleek as its successor’s, the VII still benefits from a decent chip and the promise of two-day battery life. Plus, now that it’s been succeeded, the year-old Xperia 1 VII is likely to see a decent price drop in the coming weeks, making it a more appealing and affordable option.

Microsoft’s CTO testifies about email at the heart of Elon Musk’s allegations against the tech giant

Kevin Scott, Microsoft CTO, in Redmond in May 2025. (GeekWire File Photo / Todd Bishop)

Microsoft CTO Kevin Scott took the stand Wednesday and, for the first time, publicly addressed the internal email that Elon Musk’s lawyers have cited to support allegations that Microsoft knew OpenAI was abandoning its nonprofit mission before investing billions in the company.

That email, sent by Scott on March 7, 2018, read in part, “I wonder if the big OpenAI donors are aware of these plans? Ideologically, I can’t imagine that they funded an open effort to concentrate ML [machine learning] talent so that they could then go build a closed, for-profit thing on its back.”

Musk alleges in the suit that Sam Altman and OpenAI secured his donations to found a nonprofit AI lab and then, with Microsoft’s help, converted it into a for-profit venture that enriched its leaders.

On the stand Wednesday, Scott said he was asking whether OpenAI even had standing to pursue the commercial plans it was pitching to Microsoft, not raising bigger questions about its mission. He explained that both companies were behind Google in AI, that OpenAI had recently left Azure for Google, and that he was worried the conversations would be “a big distraction.” 

Scott said the OpenAI donor he had in mind was not Musk but rather his friend Reid Hoffman, the LinkedIn co-founder, who sits on the Microsoft board.

But later that year, Scott testified, over dinner with Altman and retired Microsoft exec Craig Mundie at Flea Street Cafe in Menlo Park, he learned a key detail: Hoffman, the donor he had wondered about, was actually investing in OpenAI’s new for-profit entity and joining the non-profit board.

Also at the dinner, Scott said he learned that OpenAI was raising a $500 million round, that Altman was leaving Y Combinator to lead the company full time, and that OpenAI had created a new “capped profit” corporate structure as part of the new funding round. Scott called that structure “surprising and interesting” — something he said he had never seen before.

The path to a deal: But Microsoft was still far from committing. Scott testified that the company had “a substantial amount of diligence we needed to do,” including technical, financial, legal, and governance. 

By June 2019, the stakes were becoming more clear. In a confidential memo at the time, filed as an exhibit in the case, Scott and Microsoft CFO Amy Hood formally asked Microsoft’s board to approve a $1 billion investment in OpenAI. Scott warned that Google had used its proprietary AI training infrastructure to pull ahead, and that Microsoft was “scrambling to replicate” the results. 

Without OpenAI, Scott wrote in an appendix to the memo, Microsoft faced “gaps in experience and talent” that would make building its own program “time-consuming and risky.” 

A key part of the strategic case was that Microsoft needed what Scott called a “frontier AI workload” on Azure — a customer pushing the platform at a scale that would reveal what infrastructure needed to be built. Google had that advantage; Microsoft did not.

The board approved the investment. Microsoft announced the deal in July 2019, the first investment in a multi-year partnership that would see the company commit a total of $13 billion to OpenAI.

Within six months of that first deal, the companies had built their first AI supercomputer together, and OpenAI used the computing horsepower to train what would become known as GPT-3.

On the stand Wednesday, Scott called the partnership a success. “I’m very proud of our infrastructure capabilities,” he said, adding that he was proud overall of what Microsoft enabled OpenAI to do.

Pushback from Musk’s team: One of Musk’s lawyers challenged elements of Scott’s account in a brief but pointed cross-examination.

For example, Scott had testified that he did not have any understanding when writing the March 2018 email of whether OpenAI was releasing its technology as open source. Musk’s lawyer showed Scott an email he had received earlier, in which Microsoft chief scientist Eric Horvitz wrote OpenAI had “been sharing their work openly, per their basic tenet.” Scott confirmed he received it. 

Musk’s lawyer also pressed Scott on whether Microsoft had conducted legal due diligence specifically for compliance with nonprofit law. Scott said he didn’t know, adding that the legal work was handled by others on Microsoft’s team.

New financial details: Also on the stand Wednesday, Microsoft corporate development leader Michael Wetter addressed the scale of Microsoft’s commitment to OpenAI. He testified that Microsoft’s total spending related to OpenAI — including its $13 billion in investment commitments, Azure infrastructure, and hosting costs — is “upwards of $100 billion” as of this fiscal year end in June. 

Wetter testified that Microsoft had generated approximately $9.5 billion in direct revenue from the partnership through March 2025. Separately, The Information reported this week that Microsoft’s total OpenAI-related revenue (including Azure server rentals, Copilot sales, and revenue-sharing payments) exceeded $30 billion between 2023 and 2025.

Under their deal announced last fall, Microsoft received a stake of roughly 27% in OpenAI, with a commitment by OpenAI to spend $250 billion on Microsoft’s Azure cloud services. 

On cross-examination by a lawyer for Musk, Wetter acknowledged that Microsoft, having at one point contributed 98% of the capital in OpenAI’s for-profit entity, held effective approval rights over major corporate transactions, a level of influence Musk’s lawyers have argued amounted to control.

Wetter said Microsoft has never rejected an approval request. 

Under the latest renegotiation of their deal, announced as the trial began, OpenAI gained the ability to serve its products on any cloud platform, ending its exclusive commitment to Azure. Amazon Web Services quickly moved to offer OpenAI’s models on its own platform. 

Microsoft’s license to OpenAI’s technology was extended through 2032 but became non-exclusive, and the companies removed a clause that could have cut Microsoft off from future models if OpenAI declared it had achieved artificial general intelligence. 

Musk’s legal case: Lawyers for the SpaceX and Tesla founder have argued that Microsoft’s approval rights gave it effective control over OpenAI’s transformation from nonprofit to for-profit, and that the company proceeded despite its own CTO flagging the potential problem in 2018.

Microsoft has maintained that it relied on OpenAI’s contractual assurances that the partnership would not violate any third-party rights. Wetter testified that Microsoft found “no conditions related to Elon Musk” in its normal process of due diligence.

Microsoft is named as a defendant in the case on allegations of aiding and abetting what Musk asserts was a breach of charitable trust by Altman and OpenAI in the for-profit conversion. 

What’s next in the suit: Testimony in the case ended around 1 p.m. today in federal court in Oakland. Closing arguments are set for Thursday, with jury deliberations expected to begin on Monday.

The jury will determine whether OpenAI breached its charitable trust and whether Altman and others were unjustly enriched. If the jury finds for Musk, the judge will determine the amount of financial damages.

Musk is seeking up to $134 billion across all defendants, though U.S. District Judge Yvonne Gonzalez Rogers has questioned the methodology behind those financial calculations. Musk, the world’s richest person, has said he would donate the proceeds to charity.

GeekWire reported on today’s proceedings via the court’s audio livestream.

The Real Losers of the Musk v. Altman Trial

Attorneys delivered closing arguments in the Musk v. Altman trial on Thursday in a final attempt to convince a judge and jury that their respective clients, Elon Musk and Sam Altman, are the most well-intentioned, truth-telling stewards of OpenAI’s founding nonprofit mission. A verdict could be delivered as soon as next week, ending a decade-long battle between two of the technology industry’s most influential entrepreneurs.

But regardless of the outcome, there is a wide set of losers in this case. Based on ample evidence, it appears that the people worst off are the employees, policymakers, and members of the public who believed in the mission of a nonprofit research lab—and supported OpenAI because of it. What seemed to take precedence for Musk and OpenAI’s other cofounders at almost every turn was building the world’s leading AI lab—even if that meant creating a multibillion-dollar for-profit company in the process.

“It’s hard to see how the public interest is being protected by either of these parties, and that is really what is ultimately at stake in a case about a nonprofit,” says Jill Horwitz, a Northwestern University law professor with expertise in nonprofits and innovation, who listened to the closing arguments. “The public interest in the nonprofit is at risk no matter who wins.”

OpenAI’s stated mission is to ensure artificial general intelligence (AGI) benefits humanity, but humanity is not a party in this case. In practice, OpenAI has spent the last decade attempting to rival multitrillion dollar companies like Google, and build AGI first. Additionally, Musk and Altman have fought tooth and nail to be the ones who control OpenAI.

“Musk and Altman are basically locked in a race to be the first to build superintelligence, and they both rightly fear what the other will do if they win. The rest of us should fear them both,” says Daniel Kokotajlo, a former OpenAI researcher who joined in 2022 and has raised concerns over the company’s safety culture. He was part of a group of former OpenAI researchers that filed an amicus brief in this case against OpenAI’s for-profit conversion, arguing that the nonprofit structure was critical in their decision to join the company.

At trial, OpenAI’s nonprofit was discussed as if it were yet another corporate investor. OpenAI’s lawyers argued that giving the nonprofit a $200 billion stake in the for-profit company is proof that OpenAI is fulfilling its mission. Public advocacy groups disagree that funding alone is sufficient.

“I am among the many people who are glad to see how many philanthropic resources the OpenAI foundation has at its disposal to do good work,” says Nathan Calvin, VP of state affairs for the AI safety nonprofit Encode, which filed an amicus brief opposing OpenAI’s restructuring earlier in this case. “But it’s worth remembering that the nonprofit also has a governance role, and that the mission of the nonprofit is not that of a typical foundation, it is specifically to ensure that AGI benefits all of humanity. Money is important for that goal and is useful all else equal, but it is not the goal in and of itself.”

Origin Story

Evidence revealed in this case suggests Altman and Musk were in agreement about OpenAI launching as a nonprofit and operating much like a typical startup. They shared the goal of beating Google DeepMind in the race to AGI. But creating OpenAI as a nonprofit turned out to be a horribly inconvenient means to winning that race.

Musk has accused Altman, OpenAI’s CEO, and Greg Brockman, its cofounder and president, of straying from the nonprofit’s founding mission. He claims the founders used his $38 million investment to turn OpenAI into an $850 billion company and make several of its cofounders billionaires.

Forced to vibe code at work, programmers say their skills are deteriorating

Coders from various companies recently told 404 Media that their initial curiosity about vibe coding has soured as they feel their skills deteriorating while technical debt mounts. Many developers who aren’t being forced to use AI are returning to coding by hand.

Cowboy Space raises $275M as it seeks 40-60 employees for new satellite and rocket hub in Seattle

(Cowboy Space Corp. Photo)

Cowboy Space Corp., a space startup building out a new satellite and rocket engineering center in Seattle, raised $275 million in a Series B funding round this week that valued the company at $2 billion.

The Bay Area-based company — formerly known as Aetherflux — was founded in 2024 by CEO Baiju Bhatt, the billionaire co-founder of the trading platform Robinhood.

Cowboy Space is building satellites that double as data centers, powered by solar energy harvested in orbit. The idea is to sidestep the two biggest bottlenecks for AI computing on Earth — the soaring demand for electricity and the scarcity of land and water needed to cool traditional data centers.

The company also builds its own rockets to launch the satellites, and has designed the rocket’s upper stage and the data center as a single unit rather than separate pieces.

Cowboy Space opened a Seattle office earlier this year with a focus on satellite design and rocket/propulsion engineering. A rep for the company told GeekWire Wednesday that they anticipate 40-60 employees in Seattle initially, and there are currently 18 positions advertised across roles including avionics, mechanical engineering, spacecraft design, and software.

Director of Satellite Engineering David Larson, a SpaceX and Amazon vet, and Head of Propulsion Warren Lamont, a Blue Origin and IonQ vet, will be leading the office. The company is not yet sharing details on the specific location for the satellite center.

The startup is competing for talent in the Seattle area with a robust aerospace community of companies big and small. They include Blue Origin, Stoke Space, Aerojet Rocketdyne, Starfish Space, Starcloud, Xplore and many more. SpaceX also produces satellites for its Starlink broadband constellation from its Redmond, Wash., facility and Amazon produces satellites for its Amazon Leo broadband satellite network in Kirkland, Wash.

The company is collaborating with NVIDIA to deploy its Space-1 Vera Rubin Modules in low Earth orbit, and plans to launch its first satellite later this year to demonstrate space-to-Earth power beaming.

Total funding is now $365 million. The latest round was led by Index Ventures, with participation from new investors IVP, Blossom Capital, and SAIC, alongside existing investors Breakthrough Energy Ventures, Construct Capital, Andreessen Horowitz, NEA, Interlagos and Bhatt.

Accelerating Chipmaking Innovation for the Energy-Efficient AI Era

This sponsored article is brought to you by Applied Materials.

At pivotal moments in history, progress has required more than individual brilliance. The most consequential breakthroughs — such as those achieved under the Human Genome Project — required a new operating paradigm: Concentrate the world’s best talent around a single mission, establish a common platform, share critical infrastructure, and collapse feedback loops. When stakes are high and timelines are compressed, sequential and siloed innovation simply cannot keep pace.

Today’s AI era is creating an engineering race with similar demands. Every company is pushing to deliver higher-performance AI systems, faster. But performance is no longer defined by compute alone. AI workloads are increasingly dominated by the movement of data: In many cases, moving bits consumes as much — or more — energy than compute itself. As a result, reducing energy per bit can extend system‑level performance alongside gains in peak compute.
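The energy-per-bit claim can be made concrete with back-of-envelope arithmetic: sustained bandwidth times energy per bit equals power. The sketch below is illustrative only; the 5 pJ/bit figure and the 1 TB/s bandwidth are assumed round numbers, not figures from Applied Materials.

```python
# Back-of-envelope: the power cost of sustained data movement.
# All figures are illustrative assumptions, not vendor specifications.

def data_movement_watts(bandwidth_bytes_per_s: float, pj_per_bit: float) -> float:
    """Power (W) needed to sustain a memory bandwidth at a given energy per bit."""
    bits_per_s = bandwidth_bytes_per_s * 8
    return bits_per_s * pj_per_bit * 1e-12  # picojoules per second -> watts

# Assumed: an accelerator streaming 1 TB/s from off-package memory at ~5 pJ/bit.
print(f"{data_movement_watts(1e12, 5.0):.0f} W just to move the bits")  # 40 W

# Halving energy per bit frees the same watts for compute at a fixed power budget.
print(f"{data_movement_watts(1e12, 2.5):.0f} W at 2.5 pJ/bit")  # 20 W
```

At these assumed numbers, data movement alone draws tens of watts, which is why shaving picojoules per bit translates directly into system-level performance headroom.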

The path to energy‑efficient AI therefore runs through system‑level engineering, spanning three tightly interconnected domains:

  • Logic, where performance per watt depends on efficient transistor switching, low‑loss power, and signal delivery through dense wiring stacks.
  • Memory, where surging bandwidth and capacity demands expose the memory wall, with processor capability advancing faster than memory access.
  • Advanced packaging, where 3D integration, chiplet architectures, and high‑density interconnects bring compute and memory closer together — enabling system designs monolithic scaling can no longer sustain.

These domains can no longer be optimized independently. Gains in logic efficiency stall without sufficient memory bandwidth. Advances in memory bandwidth fall short if packaging cannot deliver proximity within thermal and mechanical constraints. Packaging, in turn, is constrained by the precision of both front‑end device fabrication and back‑end integration processes.

In the angstrom era, the hardest problems arise at the boundaries — between compute and memory in the package, front‑end and back‑end integration, and the tightly coupled process steps needed for precise 3D fabrication. And it is precisely this boundary‑driven complexity where the traditional innovation model breaks down.

The Traditional R&D Workflow Is Too Slow for Angstrom‑Era AI

For decades, the semiconductor industry’s R&D model has resembled a relay race. Capabilities are developed in one part of the ecosystem, handed off downstream through integration and manufacturing, evaluated by chip and system designers, and only then fed back for the next iteration. That model worked when progress was dominated by relatively modular steps that could be scaled independently and simply dropped into the manufacturing flow.

But the AI timeline has upended these rules. At angstrom‑scale dimensions, the physics enforces inescapable coupling across the entire stack: materials choices shape integration schemes; integration defines design rules; design rules dictate power delivery; wiring sets thermal budgets; and thermals ultimately constrain packaging scaling. System architects simply cannot wait 10–15 years for each major semiconductor technology inflection to mature.

A long‑term perspective is essential to align materials innovation with emerging device architectures — and to develop the tools and processes required to integrate both with manufacturable precision. At Applied Materials, together with our customers, we are charting a course across the next 3–4 generations, extending as far as 10 years down the roadmap.

The angstrom era demands that we break down silos and bring together the industry’s best minds — from leading companies to leading academic institutions. If the problem is coupled, the solution must be coupled. If the timeline is compressed, the learning loop must be compressed. It’s not enough to just innovate — we must innovate how we innovate.

EPIC: A Center and Platform for High‑Velocity Co‑Innovation

This is the challenge that Applied Materials EPIC Center is designed to solve.

Representing a roughly US $5 billion investment, EPIC is the largest commitment to advanced semiconductor equipment R&D in U.S. history. When it opens in 2026, it will deliver state‑of‑the‑art cleanroom capabilities built from the ground up to shorten the path from early‑stage research to full‑scale manufacturing. But the facilities are only one component of the model. EPIC is also a platform, an operating system for high-velocity co‑innovation that revolutionizes how ideas move from the lab to the fab.

Diagram comparing the traditional and EPIC chip innovation timelines, showing a 2x faster path from lab to fab. Image Credit (Applied Materials)

The EPIC model compresses the traditional workflow. Customer engineers work side‑by‑side with Applied technologists from day one — moving beyond isolated process optimization and downstream handoffs. Within a shared, secure environment, EPIC tightly integrates atomistic modeling, test vehicles, process development, validation, and metrology feedback. Constraints that once surfaced late in development are identified and addressed early.

The result is a potentially 2x faster path that benefits the entire ecosystem under one roof:

  • Chipmakers gain earlier access to Applied’s R&D portfolio, faster learning cycles, and accelerated transfer of next‑generation technologies into high‑volume manufacturing.
  • Ecosystem partners gain earlier access to advanced manufacturing technology and collaboration opportunities that expand what is possible through materials innovation.
  • Academic institutions gain opportunities to strengthen the lab‑to‑fab pipeline and help develop future semiconductor talent.

Building on decades of co‑development, we are reinventing the innovation pipeline with our partners across logic, memory, and advanced packaging to deliver the next leap in energy‑efficient AI.

Accelerating Advanced Logic

Logic remains the engine of AI compute. In the angstrom era, however, system‑level gains are increasingly constrained by power and energy. Extending AI performance now depends on architectures that deliver more performance per watt — accelerating the move to 3D devices such as gate‑all‑around (GAA) transistors, which boost density within a compact footprint while preserving power efficiency.

These architectural shifts are unfolding at unprecedented scale, with the logic roadmap already extending beyond first‑generation GAA toward more advanced designs. One key example is GAA with backside power delivery, which relocates thick power lines to the backside of the wafer, reducing resistive losses and freeing front‑side routing for tighter logic cell integration. Another example brings adjacent GAA PMOS and NMOS transistors closer together while inserting a dielectric isolation wall between them to minimize electrical interference. Further out, complementary FETs (CFETs) push density scaling even more by stacking PMOS and NMOS devices directly atop one another.

While these architectures deliver compelling gains in performance per watt and logic density without relying solely on tighter lithography, they significantly raise integration complexity. Manufacturing a single GAA device today can involve more than 2,000 tightly interdependent process steps. At the same time, wiring stacks continue to grow taller and denser to connect these advanced logic devices. Modern leading‑edge GPUs now in development pack more than 300 billion transistors into an area little larger than a postage stamp, interconnected by over 2,000 miles of wiring.

At this level of complexity, the process steps used to create these precise 3D devices and wiring stacks cannot be optimized independently. Design and process must evolve in lockstep, and materials innovation and fabrication methods must advance alongside device architecture. EPIC’s co‑innovation model is designed to accelerate exactly this convergence — enabling logic compute to continue advancing the frontiers of AI at the pace the roadmap demands.

Powering the Memory Roadmap

At the same time, the AI computing era is fundamentally reshaping how data is generated, moved, and processed — making memory technologies, especially DRAM, central to delivering the energy‑efficient performance AI systems require. As models grow larger and more data‑hungry, the DRAM roadmap is shifting toward architectures that deliver higher density, greater bandwidth, and faster access per watt.

At the DRAM cell level, this shift is driving a transition from 6F² buried‑channel array transistors (BCAT) to more compact 4F² architectures, which orient the transistor vertically to boost density and reduce chip area. Looking beyond 4F², sustaining gains in performance per watt will require moving past what 2D scaling alone can deliver. The industry is therefore turning to 3D DRAM, stacking memory cells vertically to add capacity within a constrained footprint. As these structures grow taller and aspect ratios increase, high-mobility materials engineering in three dimensions becomes increasingly critical to performance and reliability.
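The headline gain of the 6F² to 4F² transition can be checked with simple cell-area arithmetic. In the sketch below, the feature size F is an arbitrary illustrative value (an assumption, not a roadmap number); the density ratio itself is independent of it:

```python
def cell_density(feature_nm: float, cell_factor: float) -> float:
    """Cells per square millimeter for a DRAM cell of area cell_factor * F^2."""
    cell_area_nm2 = cell_factor * feature_nm**2
    return 1e12 / cell_area_nm2  # 1 mm^2 = 1e12 nm^2

F = 14  # illustrative feature size in nm (assumption, not a vendor figure)
d6 = cell_density(F, 6)  # 6F^2 BCAT cell
d4 = cell_density(F, 4)  # 4F^2 vertical-transistor cell

print(f"6F2: {d6 / 1e6:.0f}M cells/mm^2, 4F2: {d4 / 1e6:.0f}M cells/mm^2")
print(f"density gain: {d4 / d6:.2f}x, area saved per cell: {1 - 4/6:.0%}")
```

The 1.5x density gain (a one-third reduction in cell area at the same F) is exactly why 4F² is attractive, and why going further than that requires stacking cells in the third dimension rather than shrinking F alone.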


Beyond the memory cell array, another powerful lever for DRAM scaling is shrinking the peripheral circuitry, which includes logic transistors and interconnect wiring. One emerging approach places select periphery functions beneath the DRAM array by bonding two wafers — one optimized for the DRAM cells and the other for CMOS logic — using multiple wiring layers.

In parallel, DRAM performance is being extended by leveraging logic‑proven enhancers in the memory periphery. These include mobility boosters such as embedded silicon germanium and stress films, along with wiring upgrades like improved low‑k dielectrics and advanced copper interconnects. Memory manufacturers are also transitioning periphery transistors from planar devices to FinFET architectures, following the logic roadmap to further improve I/O speed. These valuable inflections are central to EPIC’s mission — where they can be co-developed and rapidly validated for next‑generation memory systems.

Driving System Scaling With Advanced Packaging

As data movement becomes the dominant energy cost in AI systems, advanced packaging has emerged as a critical lever for improving system‑level efficiency—shortening interconnect distances, increasing bandwidth density, and reducing the power required to move data between logic and memory.

High‑bandwidth memory (HBM) marks a major inflection along this path. By stacking DRAM dies — scaling to 16 layers and beyond — and placing memory much closer to the processor, HBM enables rapid access to ever‑larger working datasets. This delivers step‑function gains in both bandwidth and energy efficiency.


More broadly, the rise of 3D packages such as HBM underscores why advanced packaging is becoming central to the AI era. Packaging now addresses system‑level constraints that logic and memory device scaling alone can no longer overcome. It also enables a move away from monolithic systems‑on‑chip toward chiplet‑based architectures, as AI workloads increasingly demand flexible designs that combine logic, memory, and specialized accelerators optimized for specific tasks.

A vital technology powering this roadmap is hybrid bonding. With interconnect pitches approaching those of on‑chip wiring, conventional bumps and microbumps run into fundamental limits in density, power, and signal integrity. Hybrid bonding removes these barriers by allowing dramatically higher interconnect and I/O density, supporting a broad range of chiplet architectures — from memory stacking to tighter compute‑memory integration.
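The pull toward hybrid bonding follows from simple geometry: for pads on a square grid, interconnect density scales with the inverse square of the pitch. The pitch values in this sketch are illustrative assumptions for microbumps versus hybrid bonds, not figures from the article:

```python
def connections_per_mm2(pitch_um: float) -> float:
    """Areal interconnect density for pads on a square grid of the given pitch."""
    return (1000 / pitch_um) ** 2  # 1000 um per mm

microbump = connections_per_mm2(40)  # illustrative microbump pitch (assumption)
hybrid = connections_per_mm2(2)      # illustrative hybrid-bond pitch (assumption)

print(f"microbump: {microbump:,.0f} connections/mm^2")
print(f"hybrid bond: {hybrid:,.0f} connections/mm^2")
print(f"gain: {hybrid / microbump:.0f}x")
```

Shrinking pitch from tens of micrometers to a few micrometers buys orders of magnitude in I/O density, which is what lets bonded interfaces begin to approach on-chip wiring densities.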

As bonded structures like HBM stacks grow larger and more complex, warpage control, die placement, stack alignment, and thermal management become first‑order challenges. EPIC tackles these and other high‑value advanced‑packaging challenges through early, parallel co‑innovation across materials, integration, and manufacturing.

Bringing It All Together

Across logic, memory, and advanced packaging, our industry faces an ambitious roadmap that promises significant gains in energy efficiency for AI systems. But realizing that potential demands breakthrough materials innovation at a time when feature sizes are shrinking, interfaces are multiplying, and process interdependencies are escalating. These challenges cannot be solved on 10–15‑year timelines under the traditional relay‑race model. We must break down silos, align earlier across the ecosystem, and parallelize learning to keep pace with AI’s demands.


In the AI era, progress will be defined by the speed at which lightbulb moments turn into manufacturing and commercialization reality. The only viable path forward is a new innovation model — and EPIC is how we are driving it.


Ireland and Northern Ireland share strong skill commonalities, finds research


The Skills Insight Note is the first in the EGFSN’s Skills Insights series for 2026.

The Expert Group on Future Skills Needs (EGFSN) recently published a new ‘Skills Insight Note’ titled Cross Border Skills and Commonalities between Ireland and Northern Ireland. The research explores the labour markets of both Northern Ireland and the Republic of Ireland, with a particular focus on cross‑border workers, sectoral employment trends, education profiles and shared skills priorities.

The research – the first in the EGFSN’s Skills Insights series for 2026 – identified strong similarities between the Republic of Ireland and Northern Ireland, including a continued reliance on critical sectors, such as manufacturing, health and education, and a shared policy focus on future‑oriented skills in areas such as digitalisation, the green economy and apprenticeships.

Welcoming the data, Minister for Enterprise, Tourism and Employment Peter Burke, TD noted the importance of gaining insight into how both jurisdictions can cooperate effectively. 


“This Skills Insight Note provides valuable analysis of the labour market links and shared challenges between Ireland and Northern Ireland. The findings underline the importance of collaboration in skills development, particularly as both economies adapt to technological and demographic change. 

“Understanding these cross‑border dynamics strengthens our ability to plan effectively for enterprise growth, employment and long‑term competitiveness.”

Commuting figures, particularly from Northern Ireland to the Republic, were also found to have grown significantly over the last 10 years. The research stated that this reflects labour market opportunities and shared economic strengths.

Minister of State with special responsibility for Trade Promotion, AI and Digital Transformation Niamh Smyth, TD said: “The findings clearly demonstrate the strong links that exist across the two jurisdictions, including shared skills priorities, sectoral strengths and growing levels of cross‑border commuting.


“This research highlights how closely connected our labour markets are and the opportunities that exist to address shared skills challenges through cooperation and coordinated policy approaches.”

Late last year, a new €9.85m cross-border project aiming to address critical public health challenges was launched in Belfast. The four-year OneHealth project is a health and life sciences partnership that will use AI and digital health approaches to tackle pressing health and agrifood challenges.

The initiative is being led by science and technology hub Catalyst in partnership with Atlantic Technology University, Queen’s University Belfast, Health Innovation Research Alliance Northern Ireland, Tyndall National Institute Cork and the University of Galway.



IEEE Society’s Pitch Sessions Link Lab With Market


The IEEE Communications Society (ComSoc)’s Research Collaboration Pitch Session initiative is proving to be a catalyst for meaningful engagement between academic researchers and industry innovators. Launched last year, the program connects promising researchers with industry leaders who can offer them funding, mentorship, and connections to bring interesting ideas closer to real-world deployment.

Rather than relying on chance encounters at conferences, the pitch sessions create a focused environment. Five academic presenters share their work with five industry representatives, known as “innovation scouts”: senior leaders primarily chosen from ComSoc’s Corporate Program partner companies such as Ericsson, Intel, Keysight, and Nokia. The curated format ensures that each idea receives dedicated attention from professionals who are seeking new concepts aligned with their organization’s priorities.

The initiative was launched in November at the IEEE Middle East Conference on Communications and Networking (MECOM) in Cairo and appeared in December at the IEEE Global Communications Conference (GLOBECOM) in Taipei, Taiwan.

AI-driven communication network

One of the most compelling outcomes came from the inaugural session in Cairo. Angela Waithaka, an IEEE student member studying biomedical engineering at Kenyatta University, in Nairobi, Kenya, presented her “AI-Driven Predictive Communication Networks for Enhanced Performance in Resource-Constrained Environments” paper. You can view her presentation along with others on IEEE.tv.


Waithaka’s research tackles a critical challenge: Next-generation communication systems increasingly rely on artificial intelligence and machine learning, yet most existing architectures consume abundant computational and energy resources, which are not always present in developing regions.

Waithaka proposed lightweight, adaptive AI/machine learning models capable of delivering predictive, reliable communication performance even under tight resource constraints.

Her vision resonated with Ruiqi “Richie” Liu, a master researcher at ZTE in China. ZTE is a global leader in integrated information and communication technology solutions. Liu says he recognized the relevance of Waithaka’s proposal to his company’s work with the International Telecommunication Union. He invited her to establish an ITU account so she could participate in the organization’s meetings on global telecommunications standardization projects, which would elevate her work to an international stage.

Simplifying data center protocols

The momentum continued at GLOBECOM. Among the presenters was Nirmala Shenoy, a professor at the Rochester Institute of Technology, in New York. Shenoy, an IEEE member, spoke on simplifying data center network protocols, highlighting the growing complexity of these critical networks, which underpin cloud services, enterprise IT, and emerging AI workloads.


Shenoy’s focus on reducing protocol complexity while maintaining scalability, resilience, and low latency caught the attention of an innovation scout from Nokia, who heads its eXtended Reality Lab in Madrid. He connected Shenoy with the key person at Nokia to discuss her research, which led her to record a video for the company detailing her approach and its potential applications.

A model for accelerating innovation

The early success stories demonstrate the power of intentional, structured engagement. By bringing researchers and industry leaders together in a format designed for discovery, ComSoc is helping accelerate innovation and expand opportunities for collaboration. The pitch sessions are not merely conference events; they are becoming a bridge between academic creativity and industry implementation.

This year, sessions will be held at the IEEE International Conference on Communications in Glasgow from 24 to 28 May, with more scheduled at the IEEE International Mediterranean Conference on Communications and Networking in Sardinia from 6 to 9 July and at GLOBECOM in Macau from 7 to 11 December.

As the program continues to grow, it could become a signature ComSoc initiative, one that strengthens the research ecosystem, supports emerging talent, and ensures that promising ideas find pathways to real-world impact.



Your next free Google account might only come with 5GB of storage


Google has quietly altered one of the most reliable promises in consumer tech: 15GB of free cloud storage. For years, signing up for a Google account meant getting 15GB of free storage, shared across Gmail, Drive, and Photos. However, that’s changed. 

New accounts are now defaulting to 5GB (the same as iCloud), with the full 15GB available only if you enter your phone number during setup. The prompt users are seeing reads: “Your account includes 5GB of storage. Now get even more storage space with your phone number.”

What exactly changed?

The policy change took effect sometime around March 18, 2026 (via 9To5Google). That’s when the company updated its support page language from definitive to conditional. Initially, the support page read “Your Google account comes with 15GB of cloud storage at no charge.”

Now, it has been updated to say “up to 15GB of cloud storage at no charge.” Google didn’t announce the change via a tweet or a blog post, as it typically does for other consumer-facing updates.

During account setup, users now see two explicit choices: link a phone number to get 15GB of storage, or keep 5GB.


Why is Google doing this?

Google wants to make sure that the 15GB of storage is offered to each user only once, not every time they create a new account. Linking the free storage to users’ phone numbers is, I’d say, a smart move, as it’s much more difficult to get a new phone number than to create a new Google account.

So, the company is positioning the change as an anti-duplication measure rather than anything else. A Google spokesperson has also confirmed to Engadget that this is a regional test, which is why some users are still able to access the 15GB of free storage without verifying their phone number.

At the same time, I’d also like to draw your attention to the timing of this change. Only recently did Google expand the available storage for AI Pro subscribers from 1TB to 5TB, and now it’s tightening limits for free users. Ultimately, we should all prepare for slimmer free storage margins.


What the jury will actually decide in the case of Elon Musk vs. Sam Altman


Nine California jurors are now deliberating over the future of OpenAI, the world-leading artificial intelligence lab.

While the trial exploring Elon Musk’s case against OpenAI’s other cofounders and Microsoft has covered territory ranging from the breakup of the founders in 2018 to Altman’s firing and rehiring in 2023, the jurors will be considering a set of fairly narrow questions.

  • Breach of charitable trust — essentially, did OpenAI and cofounders Sam Altman and Greg Brockman violate a specific agreement with Musk to use his donations to OpenAI for a specific, charitable purpose and not general use by the non-profit?
  • Unjust enrichment — did the defendants use Musk’s donations to enrich themselves through OpenAI’s for-profit arm, instead of for charitable purposes?
  • Aiding and abetting breach of charitable trust — Did Microsoft, through its interactions with OpenAI, know that Musk had specific conditions on his donations, and play a significant role in causing harm to Musk?

OpenAI has also made three arguments in its defense that the jury will weigh:

  • Statute of limitations — a legal deadline by which a lawsuit must be filed. Here, if OpenAI can prove that any harms to Musk happened before August 5, 2021 for the first count, August 5, 2022 for the second count, and November 14, 2021 for the third count, then his claims will be time-barred.
  • Unreasonable delay — Musk, by filing his lawsuit in 2024, delayed his claim in a way that made his request for damages unreasonable.
  • Unclean hands — a legal doctrine holding that Musk’s conduct related to his claims against OpenAI was unconscionable and renders them invalid.

If Musk wins out, it could mean the end of OpenAI as a for-profit company, but it’s not entirely clear what will result. Next week, the judge will begin a set of new hearings where lawyers from both sides will debate what the consequences of a verdict in favor of the plaintiffs might be. That process could be rendered moot by a negative verdict, however.

Breach of charitable trust

Musk’s attorneys say the defendants clearly understood that Musk wanted to support a non-profit that would ensure the benefits of AI to the world, and prevent it from being controlled by any one organization. In particular, they say a $10 billion investment from Microsoft in 2023 into OpenAI’s for-profit affiliate—the first to happen after the statute of limitations—was the event that turned Musk’s concern into conviction.

That deal, Musk’s lawyers say, was different from previous investments and led to OpenAI’s investors being enriched by the company’s commercial products, at the expense of the charitable mission of AI safety that Musk promoted.


OpenAI’s attorneys have asked every witness to describe specific restrictions put on Musk’s donations, and none have, including his financial adviser Jared Birchall, his chief of staff Sam Teller, or his special adviser Shivon Zilis. They say everyone involved agreed that private fundraising would be required to achieve its goals, and note that Musk himself attempted to launch an OpenAI-affiliated for-profit he would personally control, and later to merge OpenAI into his company Tesla. They also note the organization’s other donors haven’t said their charitable trust was violated.

Importantly, a forensic accountant hired by OpenAI testified that all of Musk’s donations had been used by OpenAI well before the key date of August 5, 2021. That is evidence that Musk’s donations were already used for their purpose well before he brought his lawsuit, invalidating any charitable trust that may have existed.

Mainly, they insist that the for-profit affiliate that conducts most of OpenAI’s actual activity continues to fulfill the organization’s mission, and has generated nearly $200 billion in equity value to support the non-profit foundation. Notably, Sam Altman argued that providing ChatGPT for free helps fulfill the mission of sharing the benefits of AI with the world.

Unjust enrichment

The plaintiffs point to the multibillion-dollar valuations of stakes held by OpenAI founders like Brockman and Ilya Sutskever, as well as Microsoft itself, as a sign that Musk’s donations were ultimately used for personal benefit, as opposed to supporting the mission of the charity. They argue that the work at OpenAI’s for-profit was commercially focused, while the foundation itself was left essentially dormant, without full-time employees, and, ultimately, not even in control of the for-profit.


OpenAI says all of Musk’s contributions were used by the foundation by 2020, and that equity distributions came well after he left the organization in 2018. Even beforehand, evidence shows the key players agreed that being able to compensate researchers with stock was key to developing AGI, the hypothetical form of AI capable of performing any intellectual task a human can. OpenAI executives maintain that the for-profit’s work meaningfully advanced the foundation’s mission, including safety activities. They say the non-profit board continues to control the for-profit, and instituted new governance controls following “the blip,” when Altman was fired by OpenAI’s non-profit board in 2023 for lack of candor and then rehired just days later.

Aiding and abetting

Musk’s case focused on the events of the blip, when Microsoft CEO Satya Nadella, whose company depended on OpenAI’s tech, was personally involved with helping to bring Altman back and creating a new board to govern OpenAI. They note that Microsoft executives wondered if their commercial agreement might conflict with the non-profit’s goals, and suggest that Microsoft’s commercial priorities led OpenAI away from its mission. They’ve focused attention on a clause in Microsoft’s agreement with OpenAI that gave Microsoft veto rights over major corporate decisions at OpenAI.

Microsoft’s witnesses have insisted that the company’s executives didn’t know of any specific conditions on Musk’s donations despite extensive due diligence, and never vetoed any decision by OpenAI. They note that the company’s investments and compute power allowed OpenAI to achieve its biggest triumphs.

Statute of Limitations

Musk has suggested that his skepticism of his cofounders grew over time, until the fall of 2022, when he learned of Microsoft’s plans for a new $10 billion investment (completed in 2023) and decided they had betrayed him. He wouldn’t file his lawsuit until mid-2024.


OpenAI’s attorneys argue that the terms of that deal were spelled out in a term sheet for a previous fundraising round in 2018, which Musk received and his advisers reviewed, but Musk said he didn’t read in detail. They also note numerous blog posts and other communications from over the years that show Musk could have known what OpenAI was doing well before he brought them to court, including tweets where Musk criticized the company years before the suit. Zilis, Musk’s adviser, even voted to approve these transactions as a member of the OpenAI board.

Ultimately, the OpenAI attorneys emphasize that Musk’s formal role in the organization ended in 2018 and his last donations took place in 2020.

Unreasonable delay

OpenAI’s attorneys say the real reason that Musk filed his suit was he realized that he was wrong about OpenAI, after its launch of ChatGPT revolutionized the business of artificial intelligence. They argue that OpenAI has operated under its current structure since its first Microsoft investment in 2018, and that forcing the organization to restructure eight years later is unreasonable.

Unclean hands

There is evidence that Musk was planning his own competing AI efforts while he was still the chair of OpenAI, and hired OpenAI employees to work on AI at Tesla. OpenAI’s attorneys argue that these efforts undermined OpenAI at a time when it was using Musk’s donations to pursue its mission. They noted that Zilis, the mother of three of Musk’s children, didn’t disclose her personal relationship to other OpenAI board members for years. And they argue that Musk withheld his donations in 2017 in an effort to win control of a planned for-profit affiliate of OpenAI. Finally, “Mr. Musk abandoned OpenAI for dead in 2018,” Bill Savitt, OpenAI’s lead attorney, told the jury.
