Hewlett Packard Enterprise (HPE) has patched multiple security vulnerabilities in the Aruba Networking AOS-CX operating system, including several authentication and code execution issues.
AOS-CX is a cloud-native network operating system (NOS) developed by HPE subsidiary Aruba Networks for the company’s CX-series campus and data center switch devices.
The most severe of the flaws is a critical authentication bypass vulnerability (tracked as CVE-2026-23813) that unprivileged attackers can exploit in low-complexity attacks to reset admin passwords.
“A vulnerability has been identified in the web-based management interface of AOS-CX switches that could potentially allow an unauthenticated remote actor to circumvent existing authentication controls. In some cases this could enable resetting the admin password,” HPE said.
“HPE Aruba Networking is not aware of any public discussion or exploit code targeting these specific vulnerabilities as of the release date of the advisory.”
IT admins who can’t immediately apply today’s security updates to patch vulnerable switches can take one or more of the following mitigation measures (an illustrative configuration sketch follows the list):
Restrict access to all management interfaces to a dedicated Layer 2 segment or VLAN to isolate management traffic.
Implement strict policies at Layer 3 and above to control access to management interfaces, allowing only authorized and trusted hosts.
Disable HTTP(S) interfaces on Switched Virtual Interfaces (SVIs) and routed ports wherever management access is not required.
Enforce Control Plane Access Control Lists (ACLs) to protect any REST/HTTP-enabled management interfaces, ensuring only trusted clients are allowed to connect to the HTTPS/REST endpoints.
Enable comprehensive accounting, logging, and monitoring of all management interface activities to detect and respond to unauthorized access attempts.
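To make the control-plane ACL measure concrete, here is a minimal sketch in AOS-CX-style CLI. The ACL name, the 192.0.2.0/24 management subnet, and the port choices are our own illustrative assumptions, not values from HPE’s advisory; exact syntax varies by switch model and firmware release, so verify against HPE’s AOS-CX documentation before use.

```
! Permit HTTPS/REST and SSH to the switch only from a trusted management subnet
access-list ip MGMT-ONLY
    10 permit tcp 192.0.2.0/24 any eq 443
    20 permit tcp 192.0.2.0/24 any eq 22
    30 deny any any any
exit

! Attach the ACL to the control plane so it filters traffic destined to the switch itself
apply access-list ip MGMT-ONLY control-plane vrf default
```

A filter like this complements, rather than replaces, the logging and monitoring measure above: blocked attempts against management endpoints should still be detected, not just dropped.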
HPE has yet to find publicly available proof-of-concept exploit code or evidence that attackers are abusing the vulnerabilities in the wild.
In July 2025, the company also warned of hardcoded credentials in Aruba Instant On Access Points that could allow attackers to bypass standard device authentication.
One month earlier, HPE patched eight vulnerabilities in its StoreOnce disk-based backup and deduplication solution, including another critical-severity authentication bypass and three remote code execution flaws.
HPE has over 61,000 employees worldwide, reported revenues of $30.1 billion in 2024, and provides services and products to over 55,000 enterprise customers, including 90% of Fortune 500 companies.
Though the AirPods Max 2 offer new features, a teardown of the headphones shows they’re still plagued by the same flaws as the original 2020 model.
Apple’s AirPods Max 2 gained the H2 chip, but not much else.
Apple’s AirPods Max 2 debuted on March 16, with the H2 chip as their core upgrade. With it, Apple’s high-end headphones gained capabilities like Adaptive Audio, Conversation Awareness, and gesture controls, among others. Active Noise Cancellation was improved as well. However, as explained in our review, the AirPods Max 2 are an iterative upgrade that ultimately leaves something to be desired. New features and ANC enhancements aside, Apple effectively delivered more of the same with its AirPods Max 2.
Meta has paused all its work with the data contracting firm Mercor while it investigates a major security breach that impacted the startup, two sources confirmed to WIRED. The pause is indefinite, the sources said. Other major AI labs are also reevaluating their work with Mercor as they assess the scope of the incident, according to people familiar with the matter.
Mercor is one of a few firms that OpenAI, Anthropic, and other AI labs rely on to generate training data for their models. The company hires massive networks of human contractors to generate bespoke, proprietary datasets for these labs, which are typically kept highly secret as they’re a core ingredient in the recipe to generate valuable AI models that power products like ChatGPT and Claude Code. AI labs are sensitive about this data because it can reveal to competitors—including other AI labs in the US and China—key details about the ways they train AI models. It’s unclear at this time whether the data exposed in Mercor’s breach would meaningfully help a competitor.
While OpenAI has not stopped its current projects with Mercor, it is investigating the startup’s security incident to see how its proprietary training data may have been exposed, a spokesperson for the company confirmed to WIRED. The spokesperson says that the incident in no way affects OpenAI user data, however. Anthropic did not immediately respond to WIRED’s request for comment.
Mercor confirmed the attack in an email to staff on March 31. “There was a recent security incident that affected our systems along with thousands of other organizations worldwide,” the company wrote.
A Mercor employee echoed these points in a message to contractors on Thursday, WIRED has learned. Contractors who were staffed on Meta projects cannot log hours until—and if—the project resumes, meaning they could functionally be out of work, a source familiar with the matter claims. The company is working to find additional projects for those impacted, according to internal conversations viewed by WIRED.
Mercor contractors were not told exactly why their Meta projects were being paused. In a Slack channel related to the Chordus initiative—a Meta-specific project to teach AI models to use multiple internet sources to verify their responses to user queries—a project lead told staff that Mercor was “currently reassessing the project scope.”
An attacker known as TeamPCP appears to have recently compromised two versions of the AI API tool LiteLLM. The breach exposed companies and services that incorporate LiteLLM and installed the tainted updates. There could be thousands of victims, including other major AI companies, but the breach at Mercor illustrates the sensitivity of the compromised data.
Mercor and its competitors—such as Surge, Handshake, Turing, Labelbox, and Scale AI—have developed a reputation for being incredibly secretive about the services they offer to major AI labs. It’s rare to see the CEOs of these firms speaking publicly about the specific work they offer, and they internally use codenames to describe their projects.
Adding to the confusion around the hack, a group going by the well-known name Lapsus$ claimed this week that it had breached Mercor. On a Telegram account and on a BreachForums clone, the actor offered to sell an array of alleged Mercor data, including a 200-plus GB database, nearly 1 TB of source code, and 3 TB of video and other information. But researchers say that many cybercriminal groups now periodically take up the Lapsus$ name, and that Mercor’s confirmation of the LiteLLM connection means the attacker is likely TeamPCP or an actor connected to the group.
TeamPCP appears to have compromised the two LiteLLM updates as part of an even larger supply chain hacking spree in recent months that has been gaining momentum, catapulting TeamPCP to prominence. And while launching data extortion attacks and working with ransomware groups, such as the group known as Vect, TeamPCP has also strayed into political territory, spreading a data wiping worm known as “CanisterWorm” through vulnerable cloud instances with Farsi as their default language or clocks set to Iran’s time zone.
“TeamPCP is definitely financially motivated,” says Allan Liska, an analyst for the security firm Recorded Future who specializes in ransomware. “There might be some geopolitical stuff as well, but it’s hard to determine what’s real and what’s bluster, especially with a group this new.”
Looking at the dark-web posts of the alleged Mercor data, Liska adds, “There is absolutely nothing that connects this to the original Lapsus$.”
Intel Core Ultra 270K Plus improves Adobe Premiere workflows by 15% over the 9700X
Rendering in Cinebench and Blender achieves up to 23% faster results
250K Plus outperforms its previous-generation counterpart, the 245K, by roughly 35%
Intel’s latest Core Ultra 200S Plus series has drawn attention for delivering performance that is difficult to ignore, especially compared to older Intel models and some similarly priced AMD processors.
In testing by Puget Systems, the 270K Plus and 250K Plus both increase E-core counts, boost clocks, and raise maximum memory speeds, creating a tangible improvement over prior generations.
While AMD’s Ryzen 9 X3D chips remain strong in certain workloads, the new Intel chips close gaps in many professional applications.
Performance in rendering and content creation
In CPU-based rendering in applications like Cinebench, V-Ray, and Blender, the Core Ultra 7 270K Plus demonstrates impressive results, coming within 9% of the higher-priced 9950X3D while frequently outpacing other CPUs in the same price bracket by up to 23%.
The 250K Plus also shows substantial gains, often matching or beating older high-end AMD chips, with improvements of about 35% over the 245K.
These performance improvements tie not just to additional cores but also to enhancements in memory latency and bandwidth.
In Adobe Premiere, the 270K Plus performs as well as or slightly better than previous high-end Intel models, offering a 15% advantage over the 9700X.
This trend continues across intraframe codecs (13% faster than the 245K), RAW processing (30% faster than the 9700X), and QuickSync-accelerated workflows.
After Effects shows a slightly mixed picture: while the 270K Plus handles 2D tasks efficiently, 3D and tracking workloads favor AMD’s Ryzen chips.
DaVinci Resolve shows a similar balance, with the 270K Plus leading marginally in several CPU-bound tasks while GPU-bound processes show little difference between models.
In Unreal Engine shader compilation and Visual Studio builds, AMD’s 3D V-Cache processors maintain some lead, but the 270K Plus outperforms older Intel models by up to 100% in some cases.
Compilation times in particular show major gains over the 9700X, with improvements ranging from 15% to nearly 100% depending on the test scenario.
The 250K Plus also shows strong relative performance, often outpacing CPUs that were previously considered superior at the same price point.
Tests using Llama and MLPerf benchmarks reveal modest CPU-level improvements – and while the integrated NPU could not be directly assessed, the 270K Plus consistently handles small-model inference faster than earlier Intel offerings.
This trend is consistent across content creation and professional workloads, where the new chips deliver strong performance gains without commanding a premium price.
Considering its $299 price and the improvements in memory and E-core architecture, the 270K Plus makes the 9700X, which retails at around $340, look underwhelming.
The pitch is seductive in its simplicity: AI needs more power than terrestrial grids can supply, so move the data centres into orbit, where the sun never sets and the electricity is free. SpaceX, Blue Origin, and a growing constellation of startups are now racing to make that vision real. The problem, according to the scientists and engineers who would have to make the physics work, is that the vision skips several chapters of thermodynamics, economics, and orbital mechanics that have not yet been written.
SpaceX filed with the Federal Communications Commission on 30 January for permission to launch up to one million satellites into low Earth orbit, each carrying computing hardware that would collectively form what the company described as a constellation with “unprecedented computing capacity to power advanced artificial intelligence models.” The satellites would operate at altitudes between 500 and 2,000 kilometres, in orbits designed to maximise time in sunlight, and route traffic through SpaceX’s existing Starlink network. SpaceX requested a waiver of the FCC’s standard deployment milestones, which typically require half a constellation to be operational within six years.
Seven weeks later, Blue Origin filed its own application. Project Sunrise proposes 51,600 satellites in sun-synchronous orbits between 500 and 1,800 kilometres, complemented by the previously announced TeraWave constellation of 5,408 satellites providing ultra-high-speed optical backhaul. Where SpaceX’s filing emphasised raw scale, Blue Origin’s emphasised architecture: the system would perform computation in orbit and relay results to the ground through TeraWave’s mesh network.
The startup ecosystem is moving even faster. Starcloud, formerly Lumen Orbit, raised $170 million at a $1.1 billion valuation in March, becoming the fastest unicorn in Y Combinator history just 17 months after completing the programme. The company launched its first satellite carrying an Nvidia H100 GPU in November 2025 and filed with the FCC in February for a constellation of up to 88,000 satellites. Aethero, a defence-focused startup building space-grade computers with Nvidia Orin NX chips wrapped in radiation shielding, raised $8.4 million and is testing hardware on orbit this year.
The commercial logic rests on a genuine problem. Global data centre electricity consumption reached roughly 415 terawatt-hours in 2024, and the International Energy Agency projects it could exceed 1,000 TWh by 2026, with accelerated AI servers driving 30 per cent annual growth. In Virginia alone, data centres consume 26 per cent of total electricity supply. Ireland’s share could reach 32 per cent by year’s end. The grid constraints are real, the permitting delays are real, and the political resistance to building more terrestrial capacity is real.
What is also real, scientists argue, is the physics that makes orbital computing spectacularly difficult at any meaningful scale. The most fundamental challenge is heat. In space, there is no air to carry heat away from processors, only radiative cooling, which requires vast surface areas. Dissipating just one megawatt of thermal energy while keeping electronics at a stable 20 degrees Celsius demands approximately 1,200 square metres of radiator, roughly four tennis courts. A several-hundred-megawatt data centre, the minimum threshold for commercial relevance, would require radiators thousands of times larger than anything ever deployed on the International Space Station.
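The arithmetic behind that radiator figure is easy to check from the Stefan-Boltzmann law. The short Python sketch below reproduces the estimate; the 0.9 emissivity, the two-sided panel, and the decision to ignore absorbed sunlight are our own assumptions for illustration, not numbers from any filing (absorbed sunlight would only push the required area higher).

```python
# Radiator area needed to reject 1 MW of heat at 20 C, per the Stefan-Boltzmann law.
SIGMA = 5.67e-8       # Stefan-Boltzmann constant, W / (m^2 K^4)
EMISSIVITY = 0.9      # assumed; typical of spacecraft radiator coatings
T = 293.15            # 20 degrees Celsius, in kelvin
SIDES = 2             # assume the panel radiates from both faces

flux = EMISSIVITY * SIGMA * T**4 * SIDES    # watts rejected per m^2 of panel
area = 1_000_000 / flux                     # m^2 needed to shed 1 MW

print(f"{flux:.0f} W/m^2 -> {area:.0f} m^2 per megawatt")
# ~750 W/m^2 -> ~1,300 m^2, in line with the ~1,200 m^2 cited above.
```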
Radiation presents the second structural problem. Low Earth orbit exposes unshielded chips to cosmic rays and trapped particles that induce bit flips and permanent circuit damage. Radiation hardening adds 30 to 50 per cent to hardware costs and reduces performance by 20 to 30 per cent. The alternative, triple modular redundancy, means launching three copies of every chip, three times the cooling, three times the electricity, and three times the mass. Starcloud’s approach of flying commercial GPUs with external shielding is an interesting experiment, but no one has demonstrated that it works at scale or over hardware lifetimes measured in years rather than months.
Latency is the third constraint. A million satellites spread across orbital shells from 500 to 2,000 kilometres cannot achieve the tight coupling required for frontier model training, where inter-node communication latencies must remain in the microsecond range. Low Earth orbit introduces minimum latencies of several milliseconds for inter-satellite links and 60 to 190 milliseconds for ground-to-orbit round trips, compared to 10 to 50 milliseconds for terrestrial content delivery networks. That makes orbital infrastructure potentially viable for inference workloads, not for training, which is where the overwhelming majority of AI compute demand currently sits.
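For a sense of where those floors come from, the Python sketch below (our own illustration) computes the light-travel minimum for a ground-to-orbit round trip at the constellation’s altitude bounds. It ignores routing, queueing, and inter-satellite hops entirely, which is why measured figures run far higher.

```python
# Physical floor on ground-to-orbit round-trip latency: light travel time only.
C_KM_PER_S = 299_792  # speed of light in vacuum, km/s

def round_trip_ms(altitude_km: float) -> float:
    """Round trip straight up and back, ignoring all network overhead."""
    return 2 * altitude_km / C_KM_PER_S * 1000

for alt in (500, 2000):
    print(f"{alt} km: >= {round_trip_ms(alt):.1f} ms")
# 500 km -> ~3.3 ms, 2,000 km -> ~13.3 ms: already thousands of times the
# microsecond budget that tightly coupled training interconnects demand.
```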
Then there is cost. IEEE Spectrum estimated that a one-gigawatt orbital data centre would cost upwards of $50 billion, roughly three times the cost of an equivalent terrestrial facility including five years of operation. Google has said that launch costs must fall to under $200 per kilogram before space-based computing begins to make economic sense. SpaceX’s current Starlink economics operate at roughly $1,000 to $2,000 per kilogram. Some analysts argue the true threshold for competing with terrestrial refresh economics is $20 to $30 per kilogram, a figure no credible projection places within the next two decades. The economics look even less favourable when set against the deep-tech funding landscape on the ground, where terrestrial infrastructure projects can draw on established supply chains and proven unit economics.
Even OpenAI’s Sam Altman, who explored a multibillion-dollar investment in rocket maker Stoke Space as a potential SpaceX competitor for orbital data centres, has publicly called the concept “ridiculous” for the current decade. Altman told journalists that the rough maths of launch costs relative to terrestrial power costs simply does not work yet, and he pointedly asked how anyone plans to fix a broken GPU in space.
The astronomical community adds a separate objection entirely. The vast majority of the roughly 1,000 public comments on SpaceX’s FCC filing urged the commission not to proceed. If approved, the constellation would place more satellites than visible stars in the sky for large portions of the night throughout the year, further militarising and commercialising an orbital environment that is already straining under the weight of existing megaconstellations.
None of this means orbital data centres will never exist. SpaceX’s Starship, if it achieves its cost targets, could fundamentally change the mass-to-orbit economics that currently make the concept unworkable. Starcloud’s incremental approach of flying small payloads and iterating on radiation performance is the kind of engineering pathway that occasionally produces breakthroughs. And the terrestrial grid constraints driving the interest are not going away.
But the gap between filing an FCC application for a million satellites and actually making orbital computation economically competitive with a warehouse full of GPUs in Iowa is not measured in years. It is measured in physics problems that the current pace of AI infrastructure investment cannot shortcut, no matter how many billionaires are willing to try. The question scientists are asking is not whether space data centres are theoretically possible. It is why, given the magnitude of the unsolved engineering, anyone is treating them as a near-term solution to a problem that requires near-term answers. The sky, it turns out, is not the limit. The radiator is.
Countries including Nigeria, Laos and New Zealand – and the US state of California – are all piloting their own versions of a digital ID platform, as governments across borders try to bolster security and make administration smoother.
The digital wallet makes up a key part of the Government’s Digital Public Services Plan 2030, which aims to use digital technology to make accessing public services easier and more efficient.
It provides identity management that residents will be able to use across the EU to access public and private services. The wallet can be used both offline and online, and will allow users to self-manage how their data is shared.
The ID can help obtain a marriage certificate or register for key welfare supports, and holders can also obtain a digital version of their birth certificates, driving licences and other official documents. The wallet will also be used to verify age on online platforms, amid debates in the region on a ban for social media for those under 16.
It is also expected to reduce the need to repeat the same information to different Government departments and make everyday interactions with state administration more seamless.
The EU mandates that all member states must make a digital wallet available to their citizens by the end of 2026. The Irish wallet will be developed to EU digital identity standards, the Government said.
The digital wallet will “make it simpler for people to verify their identity, apply for supports and access entitlements”, said Minister for Public Expenditure, Infrastructure, Public Service Reform and Digitalisation Jack Chambers, TD.
“The wallet is designed so that all personal data is fully protected, and the user stays in control of what information they put in the wallet and choose to share. Only the details needed for a service will be shared, and nothing more.”
Minister of State at the Department of Public Expenditure, Infrastructure, Public Service Reform and Digitalisation Frank Feighan, TD said that the wallet will be “a crucial element of the Government’s overall portfolio of digital services”.
He added: “It will be able to facilitate secure age verification capability as set out in Digital Ireland and the implementation of the Online Safety Code, under which designated platforms must have age verification measures in place to help protect, in particular, children and young people from online harm.”
Convoy co-founder and Microsoft Corporate Vice President Dan Lewis at the Seattle AI Startup Summit on April 2, 2026. (Ken Yeung Photo)
Dan Lewis’ career is hard to summarize in a sentence. He was a product manager at Microsoft, then an early employee at the Seattle AI startup Wavii, which Google later acquired. He made a stop at Amazon before ultimately boomeranging back to Microsoft, where he is today.
At this week’s Seattle AI Startup Summit, it was his experience building Convoy, the one-time unicorn trucking startup that shuttered in 2023, that he wanted to talk about.
But instead of relitigating what led to Convoy’s collapse, Lewis used his time on stage to share lessons to help entrepreneurs build a startup from the ground up.
Be deliberate about culture
Every company develops a culture, whether the founder shapes it or not, Lewis said. “The question is, are you involved in influencing what that is and helping to shape it around something you think aligns with your mission and the people you want in the company?”
Codify values only after you see what’s working
Back when Lewis was at Amazon, he asked then-CEO Jeff Bezos how the leadership principles were derived. Bezos told him that “he started writing stuff down when he first created the company, and then he realized he didn’t quite know what he was doing. So he waited a year to see what was working and what wasn’t, just to get a feel for how things were going.”
Anything that Bezos wanted to keep was codified. Lewis mirrored this approach for Convoy.
Make sure people know why, not just what
Founders shouldn’t have a culture in which workers accept decisions simply because the CEO says so. Lewis called that dynamic “demotivating,” arguing that employees who don’t understand the reasoning behind decisions can’t act independently or feel real ownership. Without that context, he said, people won’t feel like they’re truly part of the company.
Name teams after problems, not solutions
Lewis urged founders to name teams after the customer problems they’re solving, not the products they’re building. He pointed to his time at Amazon, where he built a Q&A tool called “Ask a Question, Get an Answer” for the ratings and reviews team.
The team pushed back: their mandate was to grow ratings and reviews, not launch someone else’s product. Had the team been named around a broader goal like customer or buyer confidence, Lewis said, its members would have been more open to creative approaches rather than feeling like they were “executing somebody else’s plan.”
Innovate deliberately
Invest time and energy into the areas that will really differentiate your company and “give you a chance to win.” Lewis acknowledged that it can feel uncomfortable to copy someone else’s innovations in undifferentiated areas, but sometimes it’s OK, especially when you’re not spending time on things “that don’t matter a lot.”
Storytelling is a startup superpower
Convoy co-founder Dan Lewis discusses the power of storytelling at the 2026 Seattle AI Startup Summit. (Ken Yeung Photo)
Another critical cultural value is the company’s story. Have you crafted a narrative that is interesting, something people can relate to, and want to be a part of?
“Think about for what [you’re] doing, what’s the context in the world?” Lewis said. “What is the opportunity that’s just right there in front of us? What’s the tension point as to why we can’t get that opportunity? What is holding the world back from it, and how are we going to unlock it for everyone so it makes everything better?”
When it came to Convoy, for example, he had his work cut out for him early on trying to sign on new business. “Why would my customer, who’s never worked with a technology company, because they’re shipping freight, want to take a bet?” Lewis explained. “Because they want to be part of the story. It’s interesting.”
Clarify expectations bidirectionally
Trust between founders and employees doesn’t happen by accident. Lewis recommended sitting down — perhaps over a meal — and laying out expectations from both sides before the work begins. It’s a bidirectional process, meaning that both the leader and employee must be heard.
Hire deliberately — and reluctantly
Dan Lewis offers recruiting and hiring insights at the 2026 Seattle AI Startup Summit. (Ken Yeung Photo)
When it comes to hiring, Lewis offered three tips.
First, every company wants team members who want to “show up every day, knock down walls, and make it happen.” But more established organizations also need a second type of employee: people capable of operating and improving existing systems. This creates conflict inside a large business, Lewis said, because the two cultures can’t live in harmony, nor is it possible to have “two compensation structures that manage the risk-reward.”
He argued that startups have the “pure play” advantage where there’s one culture, one risk-reward trade-off, and founders can focus on the type of person they need. In fact, Lewis thinks 80% of the workforce should possess that “wall-knocker” mentality.
Second, startups must be deliberate in hiring, applying filters throughout the candidate funnel and rating how someone introduced themselves, spoke during the first meeting, and followed up. At the end of the process, companies will “only have people that really want to be there and want to be part of this.”
Founders shouldn’t invest a lot of time trying to convince someone to join their company. If they are, Lewis said, “you’re working too hard,” and it’s “probably not the right sign for a startup.”
Lewis’ last tip: Don’t hire. He admitted that it may sound counterintuitive, but he wants founders to think that every time someone new is onboarded, “it was a failure to operate more efficiently and to innovate” in a way that wouldn’t have required bringing a new person aboard.
Instead, they should first ask whether there was an alternative way to complete the task — perhaps through AI — rather than increasing headcount.
And to be clear, Lewis isn’t advocating for the end of great hiring. Rather, he wants leaders to approach it this way: “Always consider it to be the thing that you wish you didn’t have to do. You wish you could have gotten it done without hiring that person.”
People don’t read instructions
At Convoy, Lewis said, they designed an operations system assuming people would carefully read each other’s notes during multi-day truck jobs with multiple support shifts. Most skipped the notes and started from scratch, irritating customers who had to repeat themselves.
When Lewis asked investor Henry Kravis of KKR for advice, the answer was blunt: “Stop building a system that assumes people are going to read.”
The lesson applies beyond operations. Whether it’s customers, employees, or end users, people scan for a button rather than read text. Founders should design processes and products, especially in the AI era, that work even if nobody reads the instructions.
Use data, and embrace concrete examples
Convoy co-founder Dan Lewis urges startups to back up data with concrete examples at the 2026 Seattle AI Startup Summit. (Ken Yeung Photo)
One final piece of advice from Lewis: be data-driven. Leave the jargon behind and look to the data when something’s wrong or there’s confusion and you’re talking it through with your team or a customer.
But also be specific — use clear, concrete examples, along with the exact words customers use, to clarify quickly.
Lewis closed his keynote with a note of humility. None of these lessons came easily, he acknowledged. In fact, many of them weren’t obvious to him until his experience at Convoy forced the issue. The company reached the heights of the startup world before closing its doors, but for entrepreneurs trying to build something that lasts, that hard-won experience may be exactly the point.
His talk kicked off a day of conversation at the second annual Seattle AI Startup Summit, a conference that brings together investors, founders, executives and others.
Before there was pressure-treated wood, before modern paints, there was pine tar. Everything from tool handles to wagons to ships was once made of wood preserved with pine tar, and [woodbrew] wants to show you how to make it, how to use it, and why you might put it on your skin.
It starts with, you guessed it, pine! In the first part of the video, [woodbrew] creates a skin salve with pine resin and food-safe oil. The pine resin, which is the sticky goop that dries around wounds on evergreen trees, is highly antiseptic and has been used in wound salves since the Stone Age. The process is easy: melt it in a double boiler, then mix with equal parts oil. [woodbrew] also adds a touch of beeswax to firm it up, and a little eucalyptus extract for extra germ-killing power and a nice smell to boot.
That’ll preserve your hands, but what about preserving wood? That starts at about 9 minutes in, and for that you’re going to need a lot more resin, so picking it off wounded trees like he does at the start of the video won’t work. [woodbrew] suggests starting with dead-or-dying pines, and harvesting the crooks of their branches for “fatwood” — wood with the highest resin content. He also suggests the center of stumps, again of trees that died or were severely injured before being cut down. Then it’s a matter of cooking those fine organic molecules out. This is where we burn the wood to save the wood. Well, to save other wood. Wood we didn’t burn, obviously.
The distillation process [woodbrew] uses is fairly traditional, and consists of a couple of buckets. One bucket is buried and collects the pine tar; the other, with holes in the bottom to allow the tar to drip out, is filled with fatwood and covered tightly before being surrounded by firewood, which is set alight. You could use an alternate source of heat here, but if you just cut down a pine tree for its fatwood, well, you’d have the rest of the tree to work with. Inside the fatwood bucket, the heat of the fire cooks off the volatile compounds that make pine tar, while the lack of oxygen from being closed up keeps it from burning. Burying the collection bucket keeps it from getting so hot the volatiles all boil off.
If this sounds like the process for making charcoal or woodgas, that’s because it is! He’s letting the gas fraction flare off here, but you could probably capture it, though a true gasifier breaks the tar down into gaseous compounds as well. The charcoal, of course, stays in the bucket as a bonus.
To make it usable as a wood finish, [woodbrew] mixes his homemade pine tar 50:50 with linseed oil, thinning it to a spreadable consistency that helps it penetrate deep into the wood. By filling the voids in the wood, this mixture will help keep moisture out, and the antiseptic properties of the organic soup that is pine tar will help keep fungi at bay for potentially decades to come.
The outside of the iPad hasn’t changed all that much in the few years since it was last updated, with the screen growing a barely noticeable 0.1 inches and the standard USB-C port, selfie camera, and power-button Touch ID all carrying over. Most of the changes are on the inside of the tablet, including a major processor upgrade to the A16 chip and faster storage that make this tablet much snappier and more responsive than the 2022 version. There’s twice as much storage, with 128GB as a baseline and up to 512GB on the upgraded model, so you won’t need to keep deleting apps to make room for more movies.
While it does have the A16 processor, which is also found in the iPhone 14 Pro, iPhone 15, and iPhone 15 Plus, the reduced RAM means there’s no support for Apple Intelligence. Whether that’s a benefit or a drawback will depend on how much you like or dislike AI. Beyond the lack of Apple Intelligence, you’re really only making a compromise when it comes to the screen, which isn’t laminated, so the Apple Pencil doesn’t feel quite as sharp as it does on other iPads, and it isn’t nano-textured, so glare and bright rooms may be more of an issue.
For most folks, the 2025 A16 iPad will be more than enough tablet for streaming, web browsing, and even some light gaming. You can head over to Amazon to pick up the iPad in either Silver or Blue at the discounted $300 price, with similar discounts on the 256GB and 512GB models too, but availability by color varies as you climb up the storage ladder. If you’re interested in what the other, more premium iPads offer, make sure to check out our guide that covers the entire lineup.
This year marks the 80th anniversary of ENIAC, the first general-purpose digital computer. The computer was built during World War II to speed up ballistics calculations, but its contributions to computing extend well beyond military applications.
Two of ENIAC’s key architects—John W. Mauchly, its co-inventor, and Kathleen “Kay” McNulty, one of the six original programmers—married a few years after its completion and raised seven children together. Mauchly and McNulty’s grandchild Naomi Most delivered a talk as part of a celebration in honor of ENIAC’s anniversary on 15 February, which was held online and in-person at the American Helicopter Museum in West Chester, Pa. The following is adapted from that presentation.
There was a library at my grandparents’ farmhouse that felt like it went on forever. September light through the windows, beech leaves rustling outside on the stone porch, the sounds of cousins and aunts and uncles somewhere in the house. And in the corner of that library, an IBM personal computer.
When I spent summers there as a child, I didn’t yet know that the computer was closely tied to my family’s story.
My grandparents are known for their contributions to creating the Electronic Numerical Integrator and Computer, or ENIAC. But both were interested in more than just crunching numbers: My grandfather wanted to predict the weather. My grandmother wanted to be a good storyteller.
In Irish, the first language my grandmother Kathleen “Kay” McNulty ever spoke, a word existed to describe both of these impulses: ríomh.
I began to learn the Irish language myself five years ago, and I was struck by how certain words and phrases had multiple meanings. According to renowned Irish cultural historian Manchán Magan, from whom I took lessons, the word ríomh has at different times been used to mean to compute, but also to weave, to narrate, or to compose a poem. That one word can tell the story of ENIAC, a machine with wires woven like thread that was built to compute, make predictions, and search for a signal in the noise.
John Mauchly’s Weather-Prediction Ambitions
Before working on ENIAC, John Mauchly spent years collecting rainfall data across the United States. His favorite pastime was meteorology, and he wanted to find patterns in storm systems to predict the weather.
The Army, however, funded ENIAC to make simpler predictions: calculating ballistic trajectory tables. Start there, co-inventors J. Presper Eckert and Mauchly realized, and perhaps the weather would soon be computable.
Co-inventors John Mauchly [left] and J. Presper Eckert look at a portion of ENIAC on 25 November 1966. Hulton Archive/Getty Images
Weather is a system unfolding through time, and a model of a storm is a story about how that system might unfold. There’s an old Irish saying related to this idea: Is maith an scéalaí an aimsir. Literally, “weather is a good storyteller.” But aimsir also means time. So the usual translation of this phrase into English becomes “time will tell.”
Mauchly wanted to ríomh an aimsire—to weave the weather into pattern, to compute the storm, to narrate the chaos. He realized that complex systems don’t reveal their full purpose at conception. They reveal it through aimsir—through weather, through time, through use.
ENIAC’s First Programmers Were Weavers
Kathleen “Kay” McNulty was born on 12 February 1921, in Creeslough, Ireland, on the night her father—an IRA training officer—was arrested and imprisoned in Derry Gaol.
Family oral history holds that her people were weavers. She spoke only Irish until her family reached Philadelphia when she was 4 years old, entering American school the following year knowing virtually no English. She graduated in 1942 from Chestnut Hill College with a mathematics degree, was recruited to compute artillery firing tables by hand for the U.S. Army, and was then selected—along with five other women—to program ENIAC.
They had no manual. They had only blueprints.
McNulty and her colleagues learned ENIAC and its quirks the way you learn a loom: by touch, by memory, by routing threads of electricity into patterns. They developed embodied knowledge the designers could only approximate. They could narrow a malfunction to a specific failed vacuum tube before any technician could locate it.
McNulty and Mauchly are also credited with conceiving the subroutine, the sequence of instructions that can be repeatedly recalled to perform a task, now essential in any programming. The subroutine was not in ENIAC’s blueprints, nor in the funding proposal. The concept emerged as highly determined people extended their imagination into the machine’s affordances.
The engineers designed the loom. Weavers discovered its true capabilities.
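For readers who take the subroutine for granted, here is a minimal modern illustration in Python; the trajectory arithmetic is our own invention, standing in only as flavor for the firing-table math of the era.

```python
# A subroutine: one sequence of instructions, defined once, recalled repeatedly.
def step(position: float, velocity: float, dt: float) -> tuple[float, float]:
    """Advance a falling projectile by one time slice (illustrative physics only)."""
    GRAVITY = -9.81  # m/s^2
    return position + velocity * dt, velocity + GRAVITY * dt

pos, vel = 0.0, 100.0
for _ in range(1000):          # the same instructions, reused every iteration
    pos, vel = step(pos, vel, dt=0.01)
```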
In 1950, four years after ENIAC was switched on, Mauchly’s dream was realized as it was used in the world’s first computer-assisted weather forecast. That was made possible after Klara von Neumann and Nick Metropolis reassembled and upgraded the ENIAC with a small amount of digital program memory. The programmers who transformed the math into operational code for the ENIAC were Norma Gilbarg, Ellen-Kristine Eliassen, and Margaret Smagorinsky. Their names are not as well-known as they should be.
Before programming ENIAC, Kay McNulty [left] was recruited by the U.S. Army to compute artillery firing tables. Here, she and two other women, Alyse Snyder [center] and Sis Stump, operate a mechanical analog computer designed to solve differential equations in the basement of the University of Pennsylvania’s Moore School of Electrical Engineering. University of Pennsylvania
Kay McNulty, Family Storyteller
Kay married John Mauchly in 1948, describing him as “the greatest delight of my life. He was so intelligent and had so many ideas…. He was not only lovable, he was loving.” She spent the rest of her life ensuring he, Eckert, and the ENIAC programmers would be recognized.
When she died in 2006, I came to her funeral in shock, not fully knowing what I’d lost. As she drifted away, it was said, she had been reciting her prayers in Irish. Word of this quickly made it over to Creeslough, in County Donegal, and awaited me when I visited to honor her memory with the dedication of a plaque right there in the center of town.
In her own memoir, she wrote: “If I am remembered at all, I would like to be remembered as my family storyteller.”
In Irish, the word for computer is ríomhaire. One who ríomhs. One who weaves, computes, and tells. My grandfather wanted to tell the story of the weather through computing. My grandmother wanted to be remembered as a storyteller. The language of her childhood already had a word that contained both of those ambitions.
Computers as Narrative Engines
When it was built, ENIAC looked like the back room of a textile production house. Panels. Switchboards. A room full of wires. Thread.
Thread does not tell you what it will become. We tend to think of computing as calculation—discrete and deterministic. But a model is a structured story about how something behaves.
Weather models, ballistic tables, economic forecasts, neural networks: These are all narrative engines, systems that take raw inputs and produce accounts of how the world might unfold. In complex systems, when parts are woven together through use, new structures arise that no one specified in advance.
Like ENIAC, the machines we are building now—the large models, the autonomous systems—are not merely calculators. They are looms.
Their most important properties will not be specified in advance. They will emerge through use, through the people who learn how to weave with them.
Who doesn’t love a good round of FOMO? From dot-com to Web 2.0, virtual reality to blockchain, the tech industry has had its share of being too afraid to miss out on a trend.
The AI bubble is the big daddy of them all. Its first offspring — the rush to lock down power for data centers — is now begetting a mad dash to secure natural gas supplies and equipment. If FOMOs could have babies, then the AI bubble is already having grandkids.
Microsoft said on Tuesday that it’s working with Chevron and Engine No. 1 to build a natural gas power plant in West Texas that could grow to produce 5 gigawatts of electricity. This week Google confirmed that it’s working with Crusoe to build a 933 MW natural gas power plant in North Texas. And last week, Meta announced that it was adding another seven natural gas power plants to its Hyperion data center in Louisiana, bringing the site to 7.46 GW of capacity — enough to power the entire state of South Dakota.
Are we missing anyone?
The recent investments are concentrated in the southern U.S., home to some of the largest natural gas deposits in the world. Recently, the U.S. Geological Survey estimated that there’s enough in one region to supply energy to the entire United States for 10 months by itself. Every data center operator seems to want a part of it.
The scramble for natural gas has led to a shortage of turbines for the power plants, with prices likely to rise 195% by the end of this year relative to 2019 prices, according to Wood Mackenzie. The equipment accounts for 20% to 30% of the cost of a power plant. Companies won’t be able to place new orders until 2028, and it’s taking six years to get turbines delivered, the consultancy notes.
That means tech companies are betting that the AI fever won’t break, that AI will continue to need exponential amounts of power, and that natural gas generation will be necessary for success in the AI era.
They may come to regret that third assumption.
Natural gas supplies in the U.S. are plentiful, and because shipping the fuel isn’t cheap, the country remains somewhat insulated from the turmoil in the Middle East. But supplies aren’t unlimited, and recently, growth in production in the big three regions, which are responsible for three-quarters of all U.S. shale gas production, has slowed considerably.
It’s not clear how insulated tech companies are from price swings since none of them have disclosed specific terms of their agreements. A lot will depend on how firm the price is in those contracts.
Even if the contracted prices are as firm as can be, the companies could still face repercussions.
Because natural gas generates about 40% of the electricity in the U.S., according to the Energy Information Administration, electricity prices are closely tied to natural gas prices. Tech companies might be able to shield themselves from scrutiny for a bit by moving their gas power plants behind the meter — by skipping the grid and connecting them directly to their data centers. But natural gas isn’t an unlimited resource, and if their ambitions grow too big, even the behind-the-meter operations could drive up power prices for everyone. We’ve all seen how that’s played out.
It won’t just be regular households getting upset either. Other industries, including those that remain much more dependent on natural gas and can’t yet turn to renewables, might balk at data centers grabbing so much of the resource. Powering a data center with wind, solar, and batteries is easy. Running a petrochemical plant? Not so much.
Then there’s the weather. One cold winter could change the calculus by driving up demand among households. Wellheads might freeze off, crimping supplies dramatically, as happened in Texas in 2021. When gas runs short, suppliers will face a choice: keep the AI data centers running or let people heat their homes?
By snapping up natural gas supplies and moving behind-the-meter, tech companies can claim that they’re “bringing their own power” and not straining the electrical grid. But in reality, they’re just shifting their use from one grid to another, the natural gas grid. The AI rush has illustrated just how physically constrained the digital world remains. Does it make sense for them to bet big on a finite resource? Tech companies might regret falling for the FOMO.