
Tech

Apple could soon launch a clamshell-style foldable iPhone to rival Samsung’s Flip


The year 2026 is shaping up to be a stacked one for Apple’s portfolio, but one of the most eagerly awaited products is the upcoming foldable iPhone. Heavily rumored to debut this fall alongside the iPhone 18 series, it would mark Apple’s first entry into the foldable phone segment, one where Samsung currently reigns supreme.

A surprise shift

But it seems Apple is working on an even more ambitious idea, one that is more pocketable and, possibly, more pocket-friendly as well. We’re talking about a clamshell-style foldable iPhone, one that would challenge the likes of Samsung’s Galaxy Z Flip 7 and Motorola’s Razr series.

“Now there’s another foldable device under consideration inside Apple labs (and it won’t come as a shock given what Motorola and Samsung Electronics Co. have already done): a square, clamshell-style foldable phone,” says a report by Bloomberg. This is the first time we are hearing about such a device, though the outlet warns that it could be canned or delayed.

So far, Samsung, Motorola, and Chinese brands such as Honor and Xiaomi have put clamshell-style foldable phones on the shelves. Facing intense competition, Samsung even added a more affordable “Fan Edition,” aka FE, model to its Galaxy Z Flip series.

What to expect?

Motorola, for its part, offers its clamshell foldable phones in multiple flavors across different price points. Huawei, meanwhile, has even experimented with a vertically folding, yet pretty pocketable, format called the Pura X.


Now, the report of Apple potentially making a clamshell-style foldable phone is going to send shockwaves through the market. However, a lot hinges on the success of Apple’s initial book-style foldable phone set to arrive later this year.

Apple is already said to have solved the crease problem on its upcoming phone, and it would be interesting to see whether that display innovation eventually appears on a clamshell-type folding iPhone, too.

When exactly does the “iPhone Flip” come out? That remains unknown as the project is still in the early stages of development. But it certainly looks like Apple’s hardware team is going back to its experimenting ways.


Tech

A man allegedly threw a Molotov cocktail at Sam Altman’s house


A 20-year-old man was arrested by the San Francisco Police Department after allegedly throwing a Molotov cocktail at OpenAI CEO Sam Altman’s house, The New York Times reports.

In a statement shared on X, SFPD wrote that it responded to a request for a fire investigation in the North Beach neighborhood of San Francisco around 7:12 AM ET / 4:12 AM PT. “At the scene, officers learned that an unknown male subject threw an incendiary destructive device at a home, causing a fire at an exterior gate.” After the man fled on foot, police found and arrested him around an hour later while responding to a business’ complaint about an “unknown male subject threatening to burn down the building.” That business turned out to be OpenAI’s headquarters, and the subject happened to be the same man who threw the Molotov at Altman’s house.

“Early this morning, someone threw a Molotov cocktail at Sam Altman’s home and also made threats at our San Francisco headquarters. Thankfully, no one was hurt,” an OpenAI spokesperson confirmed in a statement to Wired. “We deeply appreciate how quickly SFPD responded and the support from the city in helping keep our employees safe. The individual is in custody, and we’re assisting law enforcement with their investigation.”

As it’s become more commonplace, artificial intelligence has also become more divisive. While more and more people continue to use AI tools, public reaction to the encroachment of the technology, whether in gaming or customer service, is increasingly negative. Altman’s warnings of AI’s impact on employment, and a recent New Yorker investigation digging into his allegedly manipulative leadership style at OpenAI, have also raised questions about the CEO’s prominent role as a steward of the technology.



Tech

Microsoft Begins Removing Copilot Branding From Windows 11 Apps


Microsoft has started stripping Copilot branding out of Notepad in Windows 11, replacing the old Copilot menu with a more generic “writing tools” label. The AI features themselves aren’t going away, but Microsoft seems to be backing off the heavy-handed Copilot branding and extra entry points. Windows Central reports: As promised, Microsoft is now beginning its effort to reduce and remove Copilot branding across Windows 11, with the latest Notepad update for Insiders outright removing the Copilot icon and phrasing. The AI menu is now simply called “writing tools” and maintains the same functionality as before. Microsoft has also removed references to AI in Notepad’s Settings area; the ability to turn these AI-powered writing tools on or off is now listed under “Advanced features.”

This change is present in the latest preview build of Notepad which is now rolling out to all Windows Insiders. The app version is 11.2512.28.0, and you’ll know you have it if you see the Copilot icon replaced with a pen icon instead. […] For Notepad, it appears Microsoft has opted to replace the Copilot menu with something more generic. It’s still the same functionally, but it’s no longer leaning on the tainted Copilot brand. Of course, you can still easily turn off all AI features in Notepad if you don’t want them. The Verge reports that the “unnecessary Copilot buttons” are also disappearing from the Snipping Tool, Photos, and Widgets.


Tech

SiFive raises $400m Series G at $3.65bn valuation in final round before IPO


In short: SiFive, the RISC-V chip IP firm founded by the Berkeley engineers who created the open-source instruction set architecture, raised $400 million in an oversubscribed Series G on April 9, 2026, at a valuation of $3.65 billion. The round was led by Atreides Management and backed by Nvidia, Apollo Global Management, D1 Capital Partners, Point72 Turion, T. Rowe Price Investment Management, Capital Group, Prosperity7 Ventures, and Sutter Hill Ventures. CEO Patrick Little described it as the company’s final private funding round before an initial public offering.

Open source, closed competition

RISC-V (pronounced “risk five”) is an open-source instruction set architecture, the foundational specification governing how a processor interprets and executes instructions, developed at the University of California, Berkeley, from 2010 onwards. Unlike the proprietary architectures maintained by Arm Holdings and Intel, RISC-V is free to implement, extend, and commercialise without per-unit royalties or usage restrictions. SiFive was founded in 2015 by three of the project’s principal architects: Krste Asanović, Andrew Waterman, and Yunsup Lee, working alongside David Patterson, a Turing Award winner and co-author of the standard text on computer architecture. The company’s business model is structurally similar to Arm’s: it designs CPU intellectual property and licences that IP to customers who integrate it into their own silicon, rather than fabricating chips itself. The critical difference is that SiFive’s designs sit on an architecture that no single company controls.

That independence became more commercially valuable in March 2026, when Arm launched its AGI CPU, its first in-house silicon product in its 35-year history, with Meta and OpenAI as debut customers. The move repositioned Arm from a neutral IP licensor into a company with direct hardware ambitions, creating the kind of vertical conflict that has historically pushed technology buyers toward open-standard alternatives, and generating fresh urgency for a competitor that owes no allegiance to any proprietary architecture owner. Intel attempted a different route into the space: in 2021 the chipmaker offered more than $2 billion to acquire SiFive outright, a deal that collapsed over valuation disagreements. Intel has since joined Elon Musk’s Terafab as a foundry partner in April 2026, committing its 18A process node to a $25 billion AI compute facility backed by Tesla, SpaceX, and xAI, a strategic reorientation that leaves the RISC-V IP licensing position without Intel as a would-be acquirer or rival.

The Series G: who invested, and why

The $400 million Series G was led by Atreides Management, a Boston-based investment firm managed by Gavin Baker, who built his reputation running Fidelity’s OTC Portfolio before founding Atreides in 2019. New participants include Nvidia, Apollo Global Management, D1 Capital Partners, Point72 Turion, and T. Rowe Price Investment Management. Existing shareholders Prosperity7 Ventures, Capital Group, and Sutter Hill Ventures also participated. The round closed oversubscribed and lifts SiFive’s total valuation to $3.65 billion, up from the $2.5 billion set at the Series F in March 2022. Nvidia’s presence on the cap table is a technical statement as well as a financial one: in January 2026 SiFive announced it is integrating NVLink Fusion into its high-performance data centre platform, enabling RISC-V-based CPUs to connect directly to Nvidia GPUs via a coherent, high-bandwidth interconnect that reduces latency and improves system utilisation for large-scale AI inference. That compatibility positions SiFive’s CPU IP to work alongside the Vera Rubin platform Nvidia announced at GTC 2026, the company’s next-generation GPU architecture targeting agentic AI workloads.


The broader investment context is one of accelerating hyperscale demand for custom silicon. Amazon committed $50 billion to its Trainium chip programme in its April 2026 shareholder letter, positioning in-house AI silicon as a strategic infrastructure necessity rather than an optional enhancement. The deal between Google, Anthropic, and Broadcom for custom AI compute represents a parallel approach, using purpose-built ASICs to reduce dependence on commodity processors across hyperscale inference workloads. SiFive’s pitch is that it offers hyperscale customers a third path: RISC-V CPU IP that is fully customisable, architecturally independent, and built on an open standard that no single acquirer can lock down. “Hyperscale customers have made it very clear that it is time to accelerate the availability of open standard alternatives for the data centre,” said CEO Patrick Little. “Their consistent ask is for customisable CPU solutions in IP form, that will enable them to meaningfully differentiate their data centre compute solutions.”

What the capital will build

SiFive has outlined three areas of deployment for the Series G capital. Advanced research and development takes the largest share, focused on expanding the roadmap of high-performance scalar, vector, and matrix RISC-V CPU IP, accelerator cores, and system IP targeting data centre deployments. A second allocation covers software ecosystem development, including existing efforts to port CUDA, Red Hat Enterprise Linux, and Ubuntu to RISC-V, work that is critical to making the architecture practically deployable in production data centres where software compatibility is as important as raw performance. The third allocation supports customer enablement: the direct engineering collaboration that helps hyperscale clients and system vendors integrate SiFive IP into their own silicon programmes. Little framed the company’s open-standard positioning as a structural advantage that compounds over time: “RISC-V was created by our founders to be similar to other open standards, driven and continually improved by collaboration and cross-pollination across a broad community of innovators. This ensures choice and flexibility for customers, and ultimately benefits consumers.” He argued that the market is becoming more receptive to open-standard alternatives precisely as Arm moves further into selling its own branded hardware.

Ten billion cores and the IPO signal

SiFive reported record growth in 2025, with its IP featured in more than 500 semiconductor designs and more than 10 billion RISC-V cores shipped to date across consumer electronics, automotive systems, and data centre processors. The company has framed the data centre segment as a potential $100 billion-plus addressable market, driven by the agentic AI infrastructure buildout that has prompted every major hyperscaler to commit tens of billions of dollars annually to compute expansion. Patrick Little told Reuters that the April 2026 fundraise is the company’s final private round before an IPO, though no exchange or pricing timeline has been confirmed. The signal carries weight: a valuation of $3.65 billion and a roster of investors that includes a major GPU manufacturer, a bulge-bracket alternative asset manager, and two prominent long-only asset managers suggests SiFive is preparing for the kind of institutional scrutiny that accompanies a public filing. As AI chip investment reached record levels in 2025, with capital flowing to custom silicon programmes at every major cloud provider, SiFive’s timing places it squarely at the centre of a market transition it has been building toward for a decade.


Tech

Encrypted Emails Are Now Available for Some Gmail Phone App Enterprise Customers


We all love encryption. If you use Gmail in an enterprise setting, especially if your work includes sensitive information, you probably love it even more. Certain Gmail app users on iOS and Android phones can now send and receive encrypted emails within the app itself — no add-ons necessary.

Previously, Gmail users could only send emails via end-to-end encryption (E2EE) on their desktops. Google’s announcement said there is “no need to download extra apps or use mail portals.” Customers can simply compose and read encrypted emails on the Gmail app itself on their iOS and Android phones.

An example of an encrypted email in the Gmail app, with Additional Encryption toggled on. (Image: Google)

But not all Gmail users will be able to use the new feature. It’s only available for Enterprise Plus subscribers with the Assured Controls or Assured Controls Plus add-on. Enterprise Plus, one of several subscription plans within Google Workspace, is intended for large businesses and other organizations and offers higher data security and client-side encryption, which the less expensive Enterprise Standard lacks.

Assured Controls and Assured Controls Plus are designed to increase digital sovereignty, data residency and compliance.



Google said the feature is designed to allow users to “engage with your organization’s most sensitive data from anywhere on their mobile devices while ensuring data remains compliant.”

With the new feature, Gmail app users can send encrypted emails to anyone, even if they aren’t using Gmail. If the recipient is using the Gmail app, the encrypted email will appear like any other email in their inbox. If the recipient is not using the Gmail app, they can still read the encrypted email and reply to it on their own browser — with the entire conversation remaining encrypted.

An example of an email sent from the Gmail app to a recipient without the Gmail app. (Image: Google)

For example, say a Gmail app customer sends an encrypted message to someone using an iPhone with the native iPhone email app. That person using the iPhone will still be able to read the encrypted email and then answer back with an encrypted message.

Enterprise Plus customers can use the new feature now, whether they are on the Rapid Release or Scheduled Release domains. To encrypt an email, tap the lock icon and select Additional encryption, then compose your message.

Business and organization administrators must enable the Android and iOS clients in the CSE admin interface in the Admin Console to grant access to their Gmail users.

Proton is an alternative for businesses and consumers

Proton Workspace, an enterprise solution that launched last month, also offers end-to-end email encryption, with the added benefit of being based in Europe (Switzerland), which does not have to comply with the US CLOUD Act and, thus, does not have to hand over data to the US government.


For the everyday consumer, Proton Mail has end-to-end email encryption and is available for free or in paid plans, some of which include bundled privacy and security apps, like a VPN and a password manager.


Tech

Mems Photonics Chip Shrinks Quantum Computer Control Limits


By many estimates, quantum computers will need millions of qubits to realize their potential applications in cybersecurity, drug development, and other industries. The problem is that, for a certain kind of qubit, simultaneously controlling millions of them means controlling millions of laser beams.

That’s exactly the challenge that was faced by scientists working on the MITRE Quantum Moonshot project, which brought together scientists from MITRE, MIT, the University of Colorado at Boulder, and Sandia National Laboratories. The solution they developed came in the form of an image projection technology that they realized could also be the fix for a host of other challenges in augmented reality, biomedical imaging, and elsewhere. The device is a one-square-millimeter photonic chip capable of projecting the Mona Lisa onto an area smaller than the size of two human egg cells.

“When we started, we certainly never would have anticipated that we would be making a technology that might revolutionize imaging,” says Matt Eichenfield, one of the leaders of the Quantum Moonshot project, a collaborative research effort focused on developing a scalable diamond-based quantum computer, and a professor of quantum engineering at the University of Colorado at Boulder. Each second, their chip is capable of projecting 68.6 million individual spots of light—called scannable pixels to differentiate them from physical pixels. That’s more than fifty times the capability of previous technology, such as micro-electromechanical systems (MEMS) micromirror arrays.
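For a sense of what that throughput buys, here is a back-of-envelope calculation (my own illustration, not the researchers’) of the frame rates that 68.6 million scannable pixels per second would support if every pixel of a raster-scanned frame costs one scannable pixel:

```python
# Back-of-envelope: upper bound on frame rate at a given raster
# resolution, assuming one scannable pixel per frame pixel.
SPOTS_PER_SECOND = 68.6e6  # figure reported for the new chip

def max_frame_rate(width, height, spots_per_second=SPOTS_PER_SECOND):
    """Upper bound on achievable frames per second for a width x height scan."""
    return spots_per_second / (width * height)

for w, h in [(128, 128), (256, 256), (1920, 1080)]:
    print(f"{w}x{h}: {max_frame_rate(w, h):,.0f} fps")
# 256x256 comes out to roughly 1,047 fps; full HD to about 33 fps.
```

The resolutions above are arbitrary examples; the point is that the reported spot rate is comfortably in video territory even at fairly fine rasters.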

“We have now made a scannable pixel that is at the absolute limit of what diffraction allows,” says Henry Wen, a visiting researcher at MIT and a photonics engineer at QuEra Computing.


The chip’s distinguishing feature is an array of tiny micro-scale cantilevers, which curve away from the plane of the chip in response to voltage and act as miniature “ski-jumps” for light. Light is channeled along the length of each cantilever via a waveguide, and exits at its tip. The cantilevers contain a thin layer of aluminum nitride, a piezoelectric which expands or contracts under voltage, thus moving the micromachine up and down and enabling the array to scan beams of light over a two-dimensional area.

Despite the magnitude of the team’s achievement, Eichenfield says that the process of engineering the cantilevers was “pretty smooth.” Each cantilever is composed of a stack of several submicrometer layers of material and curls approximately 90 degrees out of the plane at rest. To achieve such a high curvature, the team took advantage of differences in the contraction and expansion of individual layers caused by physical stresses in the material resulting from the fabrication process. The materials are first deposited flat onto the chip. Then, a layer in the chip below the cantilever is removed, allowing the material stresses to take effect, releasing the cantilever from the chip and allowing it to curl out. The top layer of each cantilever also features a series of silicon dioxide bars running perpendicular to the waveguide, which keep the cantilever from curling along its width while also improving its length-wise curvature.

A micro-cantilever wiggles and waggles to project light in the right place. (Matt Saha, Y. Henry Wen, et al.)

What was more of a challenge than engineering the chip itself was figuring out the details of actually making the chip project images and videos. Working out the process of synchronizing and timing the cantilevers’ motion and light beams to generate the right colors at the right time was a substantial effort, according to Andy Greenspon, a researcher at MITRE who also worked on the project. Now, the team has successfully projected a variety of videos from a single cantilever, including clips from the movie A Charlie Brown Christmas.


The chip projected a roughly 125-micrometer image of the Mona Lisa. (Matt Saha, Y. Henry Wen, et al.)

Because the chip can project so many more spots in any given time interval than any previous beam scanner, it could also be used to control many more qubits in quantum computers. The Quantum Moonshot program’s mission is to build a quantum computer that can be scaled to millions of qubits, so it clearly needs a scalable way of controlling each one, explains Wen. Instead of using one laser per qubit, the team realized that not every qubit needs to be controlled at every given moment. The chip’s ability to move light beams over a two-dimensional area would allow them to control all of the qubits with many fewer lasers.

Another process that Wen thinks the chip could improve is scanning objects for 3D printing. Today, that typically involves using a single laser to scan over the entire surface of an object. The new chip, however, could potentially employ thousands of laser beams. “I think now you can take a process that would have taken hours and maybe bring it down to minutes,” says Wen.

Wen is also excited to explore the potential of different cantilever shapes. By changing the orientations of the bars perpendicular to the waveguide, the team has been able to make the cantilevers curl into helixes. Wen says that such unusual shapes could be useful in making a lab-on-a-chip for cell biology or drug development. “A lot of this stuff is imaging, scanning a laser across something, either to image it or to stimulate some response. And so we could have one of these ski jumps curl not just up, but actually curl back around, and then move around and scan over a sample,” Wen explains. “If you can imagine a structure that will be useful for you, we should try it.”


Tech

Nearly 4,000 US industrial devices exposed to Iranian cyberattacks


The attack surface targeted by Iranian-linked hackers in cyberattacks against U.S. critical infrastructure networks includes thousands of Internet-exposed programmable logic controllers (PLCs) manufactured by Rockwell Automation.

According to a joint advisory issued by multiple U.S. federal agencies on Tuesday, Iranian state-backed hacking groups have been targeting Rockwell Automation/Allen-Bradley PLC devices since March 2026, causing operational disruptions and financial losses.

“Iranian-affiliated APT targeting campaigns against U.S. organizations have recently escalated, likely in response to hostilities between Iran, and the United States and Israel,” the authoring agencies warned.


“The FBI identified that this activity resulted in the extraction of the device’s project file and data manipulation on HMI and SCADA displays.”

As cybersecurity firm Censys reported one day later, three-quarters of more than 5,200 such industrial control systems found exposed online globally are from the United States.


“Censys data identifies 5,219 internet-exposed hosts globally responding to EtherNet/IP (EIP) and self-identifying as Rockwell Automation/Allen-Bradley devices,” Censys said.

“The United States accounts for 74.6% of global exposure (3,891 hosts), with a disproportionate share on cellular carrier ASNs indicative of field-deployed devices on cellular modems.”

Internet-exposed Rockwell/Allen-Bradley PLCs (Censys)

To defend against these ongoing attacks, network defenders are advised to secure PLCs using a firewall or disconnect them from the Internet, scan logs for signs of malicious activity, and check for suspicious traffic on OT ports (especially when it originates from overseas hosting providers).

Admins should also enforce multifactor authentication (MFA) for access to OT networks, keep all PLC devices up to date, and disable unused services and authentication methods.
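As one concrete illustration of the firewall advice above, the sketch below restricts the standard EtherNet/IP ports (TCP 44818 for explicit messaging, UDP 2222 for implicit I/O) to an internal engineering subnet. This is a config fragment only: the subnet is a placeholder, and a production ruleset for a real OT network would need review, not copy-paste.

```shell
# Sketch: allow EtherNet/IP traffic only from an internal engineering
# subnet (10.20.0.0/24 is a placeholder), then drop everything else.
iptables -A INPUT -p tcp --dport 44818 -s 10.20.0.0/24 -j ACCEPT
iptables -A INPUT -p udp --dport 2222  -s 10.20.0.0/24 -j ACCEPT  # implicit I/O
iptables -A INPUT -p tcp --dport 44818 -j DROP
iptables -A INPUT -p udp --dport 2222  -j DROP
```

Ideally the PLCs would not be reachable from the Internet at all; rules like these are a stopgap for devices that cannot be fully segmented, such as the cellular-modem deployments Censys flagged.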

This ongoing campaign follows similar attacks from nearly three years ago, when a threat group affiliated with the Iranian Government’s Islamic Revolutionary Guard Corps (IRGC) and tracked as CyberAv3ngers targeted vulnerabilities in U.S.-based Unitronics operational technology (OT) systems.


CyberAv3ngers hackers compromised at least 75 Unitronics PLC devices in multiple waves of cyberattacks between November 2023 and January 2024, with half of those in Water and Wastewater Systems critical infrastructure networks across the United States.

More recently, the Handala hacktivist group (linked to Iran’s Ministry of Intelligence and Security) wiped approximately 80,000 devices from the network of U.S. medical giant Stryker, including employees’ mobile devices and company-managed personal computers.



Tech

Weakest Engineer In the Room: Turn Fear Into Fuel


This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Parsity and delivered to your inbox for free!

The Worst Engineer in the Room

My salary doubled. My confidence tanked.

That’s what happened when I had just joined a five-person startup in San Francisco in my third year as a software engineer. Two of the founders had been recognized in Forbes 30 Under 30. The team was exceptional by any measure.

On my first day, someone made a joke about Dijkstra’s algorithm. Everyone laughed. I smiled along, then looked it up afterward so I could understand why it was funny. Dijkstra’s algorithm finds the shortest path between two points—the math underlying GPS navigation. It’s a foundational concept in virtually every formal computer science curriculum. I had never encountered it.
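For readers who, like me back then, have never run into it: a minimal sketch of Dijkstra’s algorithm in Python, using a priority queue. The graph and its edge weights are made up for illustration.

```python
import heapq

def dijkstra(graph, start):
    """Shortest-path distances from start to every reachable node.

    graph: dict mapping node -> list of (neighbor, edge_weight) pairs,
    with non-negative weights (a requirement of Dijkstra's algorithm).
    """
    dist = {start: 0}
    heap = [(0, start)]  # priority queue of (distance, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale entry; a shorter path was already found
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# A tiny made-up road network; weights are travel times.
roads = {
    "A": [("B", 4), ("C", 1)],
    "C": [("B", 2), ("D", 7)],
    "B": [("D", 3)],
    "D": [],
}
print(dijkstra(roads, "A"))  # {'A': 0, 'B': 3, 'C': 1, 'D': 6}
```

The “joke-worthy” insight is the greedy step: always settle the closest unsettled node next, which is exactly how a GPS unit can commit to part of a route before exploring the whole map.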


That moment reflected a broader pattern. Conversations about system design and tradeoffs often felt just out of reach. I could follow parts of them, but not enough to contribute meaningfully.

I was mostly self-taught. Wide coverage, shallow roots. The engineers around me had roots. You could feel it in how they reasoned through problems, how they talked about tradeoffs, how they debugged with patience instead of pure panic.

The Advice That Sounds Good Until You’re Living It

You’ve heard the phrase: “If you’re the smartest person in the room, you’re in the wrong room.”

It sounds aspirational. What nobody tells you is what it actually feels like to be in that room. It feels like barely following system design conversations. Like nodding along to discussions you can only partially decode. Like shipping solutions through trial and error and hoping nobody looks too closely.


Being the weakest engineer in the room is genuinely uncomfortable. It surfaces every gap. And if you’re not careful, it pushes you in exactly the wrong direction.

My instinct was to make myself smaller. On a team of five, every voice mattered. I stopped offering mine. I rushed toward working solutions without real understanding, hoping velocity would compensate for depth.

I was working harder and, at the same time, I was not improving.

The turning point came when one of the most senior engineers left. Before departing, he told me it was difficult to work with me because I lacked foundational programming knowledge, listing out the concepts he saw me struggle with.

Advertisement

For the first time, what had felt like vague inadequacy became something specific.

What the Cliché Misses

Proximity to stronger engineers is not sufficient on its own. You won’t absorb their skill through osmosis. The engineers who thrive when they’re outmatched are not the ones who wait for confidence to arrive. They treat the discomfort as diagnostic information.

What can they answer that I can’t? What do they see in a system that I’m missing?

I defined a clear picture of the engineer I wanted to become and compared it to where I was. I wrote down what I did not know. I identified how I would close each gap with books, tutorials and small projects. I asked for recommendations from the same engineer who gave me the hard feedback.


I figured out the gaps. Then the bridges. Then I worked through each of them.

Over time, conversations became clearer. Debugging became more systematic. I started contributing meaningfully rather than just executing tasks.

The Other Room Nobody Warns You About

There’s a less-obvious version of this same problem: when you’re the strongest engineer in the room.

It can feel rewarding. Less friction, more validation. But there’s also less growth. When you’re at the ceiling, there’s no external pressure to raise your own floor. The feedback loops that sharpen judgment go quiet. Some engineers spend years there without noticing. They’re good. They’re comfortable. They stop getting better.


Both rooms carry risk. One threatens your confidence. The other threatens your trajectory.

Being the weakest engineer in a strong room is an advantage, but only if you treat it like one. It gives you a clear benchmark. But the room doesn’t do the work for you. You have to name the gaps, build a plan, and follow through.

And if you ever find yourself in the other room, where you’re clearly the strongest, pay attention to how long you’ve been there.

Both rooms are trying to tell you something.


—Brian

Not every engineer has a doctorate, but Ph.D. engineers are an essential part of the workforce, researching and designing tomorrow’s high-tech products and systems. In the United States, early signs are emerging that Ph.D. programs in electrical engineering and related fields may be shrinking. Political and economic uncertainty means some universities are now seeing smaller applicant pools and graduate cohorts.

Read more here.

Last November, three professors at Auburn University in Alabama hosted a gathering at a coffee shop to confront students’ concerns about AI. The event, which they call an “AI Café,” was meant to create an environment “where scholars engage their communities in genuine dialogue about AI. Not to lecture about technical capabilities, but to listen, learn, and co-create a vision for AI that serves the public interest.” In a guest article, they share what they learned at the event and tips for starting your own AI Café.


Read more here.

Inference, the process of running a trained AI model on new data, is increasingly becoming a focus in the world of AI engineering. The growth of open LLMs means that more engineers can now tweak the models to perform better at inference. Given this trend, a recent issue of the Substack “The Pragmatic Engineer” does a deep dive on inference engineering—what it is, when it’s needed, and how to do it.

Read more here.



Tech

The future of insurance is AI, so why the hesitation?


InsTech.ie’s CEO Gary Leyden explores the opportunities that AI presents to the Irish insurance industry.

The Irish insurance industry is in the midst of a shift from talking about innovation to proving it in practice. Ireland hosts the European operations of many of the world’s largest insurers and technology multinationals, alongside a strong base of academic research and a regulatory environment that is internationally respected.

The challenge is not ambition or capability. It is execution. Ireland has the capital, the regulatory credibility and the operational footprint, yet too much innovation still stalls at pilot stage. Ideas are tested, discussed, extended and deferred, rather than deployed at scale. In a market facing rising climate volatility and cost pressure, delay is no longer neutral. It compounds risk.

The cost of that delay is already being felt. Climate-related losses are rising sharply, with global insured losses from natural catastrophes reaching approximately $137bn in 2024, according to Swiss Re research. In Ireland, repeated flooding events are accelerating risk faster than traditional planning, pricing and product development cycles can adapt. Delayed innovation is not simply cautious; it is expensive. It increases exposure to loss by leaving insurers reliant on slower processes, older models and systems that cannot be tested early or adapt quickly, driving higher volatility, slower response and greater downstream costs when shocks materialise.


This is where Ireland’s Digital Sandbox for Insurance comes into focus. Similar models exist internationally, including regulatory sandboxes operated by authorities such as the UK Financial Conduct Authority. Ireland’s Digital Sandbox, however, is not a regulatory sandbox. It is a market-led testing environment that allows firms to trial new systems safely before full-scale implementation – complementing, not replacing, regulatory supervision.

It removes the main excuse insurers use for not testing new technology: operational risk. It narrows the gap between technical promise and commercial adoption.

The friction lies between innovators and incumbents – start-ups lack access to real insurance environments, while insurers have no safe way to trial early-stage technology. The result has been extended discussion with limited deployment, a pattern that a structured testing environment is designed to change. This is often referred to as 'proof-of-concept purgatory', where innovation ambition hits a wall of big-corporate conservatism.

Proof of delivery is already visible across Ireland’s insurtech ecosystem. Dimply is modernising customer engagement and distribution. Inaza is enhancing underwriting performance through advanced data and automation. Blink Parametric has deployed real-time parametric solutions across global markets. Docosoft is streamlining regulatory and operational workflows for major insurers. These firms show that Irish innovation can scale. A sandbox accelerates that trajectory, helping earlier-stage companies prove readiness faster and enabling insurers to move from evaluation to adoption with confidence.
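To make the parametric idea concrete, here is a minimal sketch of how a parametric trigger works in principle. The policy terms and numbers below are hypothetical, not Blink Parametric's actual product: payout is keyed to a measured index crossing a threshold, with no claims-adjustment step.

```python
# Hypothetical parametric policy: pay a fixed rate for each hour of
# measured delay beyond a trigger threshold, up to a cap. No loss
# assessment is needed, only the measured parameter.

def parametric_payout(measured_delay_hours, threshold_hours=3.0,
                      payout_per_hour=50.0, cap=500.0):
    """Return the payout owed for a given measured delay."""
    if measured_delay_hours < threshold_hours:
        return 0.0                       # trigger not reached: no payout
    excess = measured_delay_hours - threshold_hours
    return min(excess * payout_per_hour, cap)

print(parametric_payout(2.0))   # 0.0   (below trigger)
print(parametric_payout(5.0))   # 100.0 (2 excess hours x 50)
print(parametric_payout(20.0))  # 500.0 (capped)
```

The appeal for insurers is that the entire claims decision reduces to a deterministic function of a trusted data feed, which is exactly the kind of logic a sandbox can validate end-to-end before launch.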


Reaching ‘tier one’ insurers

This is not simply about adopting new tools, but about changing decision velocity by shortening procurement cycles and moving from evaluation to deployment. Without that shift, infrastructure alone will not move the dial. Insurance, however, carries additional structural friction, from legacy systems and data constraints to complex procurement and compliance obligations, which can slow adoption even when technology performs well.

Breaking into ‘tier one’ insurers remains the single biggest barrier to scaling Irish insurtech – not because the technology fails, but because procurement cycles, risk aversion and internal complexity slow decision-making to a crawl. Building infrastructure that unlocks that access would be transformative for the more than 100 Irish insurtechs in the national cluster, while also signalling internationally that Ireland is serious about competing in an industry undergoing profound change.

A tier-one insurer recently used the secure sandbox to test AI-powered fraud detection in conditions that mirrored real investigations but without exposing customer data. The company’s fraud investigations were distressingly slow, manual and resource-intensive, barely keeping up with rising claim volumes. Why does this matter? Because global insurance fraud costs exceed $25bn annually. Insurers know AI could help detect fraud better and faster.
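As a rough illustration of the kind of automated screening such a pilot might start from (the insurer's actual system is not described in the article), even a simple statistical outlier check over claim amounts can triage cases for human review:

```python
# Hedged sketch, not the insurer's real pipeline: flag claims whose
# amount deviates sharply from the mean, as candidates for review.

from statistics import mean, stdev

def flag_outlier_claims(claims, z_threshold=2.0):
    """Return ids of claims more than z_threshold standard deviations
    above the mean amount."""
    amounts = [c["amount"] for c in claims]
    mu, sigma = mean(amounts), stdev(amounts)
    return [c["id"] for c in claims
            if sigma > 0 and (c["amount"] - mu) / sigma > z_threshold]

claims = [{"id": i, "amount": 1000.0} for i in range(20)]
claims.append({"id": 99, "amount": 250000.0})  # suspicious spike
print(flag_outlier_claims(claims))  # [99]
```

A production fraud model would use far richer features than amount alone, but the sandbox's value is precisely that logic like this can be exercised against realistic data without exposing real customers.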

The uncomfortable truth is that the problem has never been about the technology. The real issue has been organisations' willingness to open their minds to the future and accept that the old ways simply cannot keep up. By using the sandbox environment, the insurer reduced its proof-of-concept phase from 12-18 months to just eight weeks. That's the difference between dial-up internet and fibre broadband. If AI testing can happen in two months, executives should be asking why they would still choose to move at the pace of another era.


Ireland likes to describe itself as a global hub for innovation, but a hub is only a hub if decisions actually happen there. The reality is that Ireland already runs major operations for global insurers, handles complex regulatory engagement and holds deep expertise, yet it is still too often treated as the place where strategy is executed rather than where it is shaped.

If Ireland is trusted to run the systems, manage the risk and operate the infrastructure, then it should also be trusted to run the experimentation. The Digital Sandbox removes the usual excuses. The question now is whether all stakeholders – insurers, policymakers and industry – will act like leaders and move with the times.


By Gary Leyden


Gary Leyden is CEO of InsTech.ie, where he leads the development of Ireland’s national insurtech ecosystem, working across industry and start-ups to embed innovation in regulated sectors. He focuses on building the conditions for innovation to scale within insurance, leading initiatives such as Ireland’s Digital Sandbox for Insurance, supporting a nationwide network of over 120 insurtech companies developing AI-driven solutions.




Tech

CoreWeave signs multi-year Anthropic deal as nine of ten top AI model providers join its platform


In short: CoreWeave announced a multi-year agreement with Anthropic on April 10, 2026, giving the Claude maker access to Nvidia GPU capacity across US data centres for production-scale AI workloads. Financial terms were not disclosed. The deal arrives one day after CoreWeave announced a $21 billion expansion of its Meta partnership, and adds Anthropic to a customer roster that now covers nine of the ten leading AI model providers. CoreWeave generated $5.13 billion in revenue in 2025 and is guiding for more than $12 billion in 2026, backed by a contracted backlog exceeding $66 billion.

Ex-crypto miner becomes AI’s landlord

CoreWeave was founded in 2017 as Atlantic Crypto, an Ethereum mining operation that bought Nvidia graphics processing units in bulk to mine cryptocurrency and rent spare GPU capacity to other miners. When crypto margins compressed in 2019, the company renamed itself CoreWeave and pivoted to GPU-on-demand cloud services for general computing purposes. The timing proved transformative: the AI model training boom that began in earnest in 2023 turned CoreWeave’s stockpile of Nvidia hardware into one of the most strategically valuable infrastructure positions in technology.

The company went public on Nasdaq under the ticker CRWV on March 28, 2025, at $40 per share, raising $1.5 billion and valuing it at approximately $23 billion. CoreWeave operates 32 data centres with more than 250,000 GPUs and 1.3 gigawatts of contracted power capacity. Its 2025 revenue of $5.13 billion represented a 168 per cent increase year-on-year, and management has guided for more than $12 billion in 2026 revenue against a contracted backlog that now exceeds $66 billion.

The company’s rapid growth has come with a significant concentration risk: Microsoft accounted for approximately 67 per cent of CoreWeave’s 2025 revenue, a dependence that investors and analysts flagged in the run-up to the IPO. Microsoft’s push to develop its own AI models adds a further strategic variable, raising the question of how much of Microsoft’s compute demand will eventually shift toward in-house infrastructure rather than third-party GPU cloud rental. The Anthropic deal, arriving the day after a $21 billion Meta expansion, represents CoreWeave’s most visible effort to build a diversified customer base that reduces its dependence on any single hyperscaler.

What Anthropic is paying for

Anthropic’s compute strategy has grown more complex alongside its revenue. The company’s annualised revenue run rate surpassed $30 billion in early April 2026, more than three times the $9 billion figure it recorded at the end of 2025. That rate of acceleration, driven by enterprise Claude adoption and the breakout growth of Claude Code, has required Anthropic to expand its infrastructure commitments across multiple chip architectures simultaneously.

Its primary training workloads run on Amazon Web Services Trainium hardware via Project Rainier, a large-scale cluster spanning hundreds of thousands of AI chips across multiple US data centres. Three days before the CoreWeave announcement, Anthropic’s deal with Google and Broadcom for multi-gigawatt TPU capacity secured access to approximately 3.5 gigawatts of next-generation tensor processing unit compute expected to come online in 2027. The CoreWeave deal fills a third lane: Nvidia GPU capacity for production inference workloads, running at the scale and latency performance that enterprise Claude deployments require. Anthropic’s $100 million commitment to its Claude partner network earlier this year signalled the company’s intent to expand the ecosystem of developers and enterprises building on Claude, and that ecosystem expansion is now directly driving the compute procurement decisions behind deals like this one.


CoreWeave co-founder and CEO Michael Intrator framed the deal in terms that go beyond raw infrastructure capacity. “AI is no longer just about infrastructure, it’s about the platforms that turn models into real-world impact,” he said. “We’re excited to work with Anthropic at the centre of where models are put to work and performance in production shows up. It’s exactly the kind of real-world deployment of AI that CoreWeave was built for.” Anthropic will initially deploy compute under a phased infrastructure rollout, with the option to expand the arrangement over time. The specific Nvidia chip architectures involved have not been publicly disclosed, though CoreWeave’s estate spans current and next-generation Nvidia GPU generations. Nvidia’s Vera Rubin GPUs, unveiled at GTC 2026, represent the next major architecture in CoreWeave’s deployment roadmap, with volume shipments expected in the second half of 2026.


Nine of ten, two deals in 48 hours

The Anthropic agreement means that nine of the ten leading AI model providers now use CoreWeave’s platform, a market penetration figure the company cited in its press release. The customer roster built alongside Microsoft includes Meta, OpenAI, Mistral, Cohere, IBM, and Nvidia itself, as well as a sub-leasing arrangement through which Microsoft supplies some CoreWeave capacity to third-party clients. The Meta relationship deepened significantly on April 9, 2026, one day before the Anthropic announcement: Meta committed an additional $21 billion to CoreWeave for dedicated AI cloud capacity running from 2027 through December 2032, bringing the total value of the two companies’ infrastructure relationship to approximately $35 billion. CoreWeave also expanded its agreement with OpenAI by up to $6.5 billion earlier in 2026. The two announcements in 48 hours, covering Meta and Anthropic, illustrate how CoreWeave is converting its infrastructure position into long-duration contracted revenue rather than spot-market GPU rentals. CoreWeave raised $8.5 billion in a GPU-backed debt facility in March 2026, with the Meta relationship used as collateral. The Anthropic deal, while undisclosed in value, will contribute to a backlog that analysts are watching as the primary indicator of the company’s long-term revenue predictability.

The infrastructure tells a story about dependence

The same day CoreWeave announced the Anthropic agreement, reports emerged that Anthropic is exploring the design of its own custom AI chips — a move that would, if realised, eventually reduce its dependence on the Nvidia-powered infrastructure that CoreWeave provides. The irony is deliberate: Anthropic’s current infrastructure commitments across AWS, Google Cloud, and now CoreWeave reflect a company that is simultaneously expanding compute dependency in the short term and exploring the routes to architectural independence in the long term. That tension is not unique to Anthropic. Meta, OpenAI, and Google have all invested heavily in custom silicon programmes while continuing to rent third-party Nvidia capacity, because the timelines for custom chip maturity and the demand curve for AI compute do not align closely enough to allow a clean transition. CoreWeave’s position as the GPU landlord of choice for the AI industry is therefore both a statement about the current moment and a structural bet that Nvidia-native cloud capacity will remain competitively necessary for at least the duration of the contracts now being signed. As AI infrastructure spending accelerated through 2025, the GPU cloud market began to look less like a transitional gap-filler and more like a permanent layer of the AI stack, and CoreWeave, with two deals in two days, is the clearest evidence of that shift.



Tech

Keychron shares 3D keyboard blueprints on GitHub, opening hardware to modders



Keychron’s devices have long supported the open-source QMK and VIA firmware platforms, allowing users to customize firmware behavior. However, the addition of editable hardware files takes that openness a step further.

