How businesses can unlock the true value of modern log management

Without logs, it would be almost impossible to keep modern applications, cloud platforms, or customer-facing services running efficiently. Some might argue that logs are one of the most critical but least celebrated sources of truth in the digital era.

At its core, log management is about turning raw system logs — unprocessed, detailed records of a system’s activities, including server actions, user interactions, and error messages — into actionable insights.
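
To make that concrete, here is a minimal, hypothetical sketch in Python (the log format and field names are invented, not tied to any particular product) that turns raw lines into structured records which can then be filtered and aggregated:

    import re
    from collections import Counter

    # Hypothetical log format: "<ISO timestamp> <LEVEL> <service> <message>"
    LOG_PATTERN = re.compile(
        r"^(?P<timestamp>\S+)\s+(?P<level>[A-Z]+)\s+(?P<service>\S+)\s+(?P<message>.*)$"
    )

    def parse_line(line):
        """Turn one raw log line into a structured record, or None if it doesn't match."""
        match = LOG_PATTERN.match(line.strip())
        return match.groupdict() if match else None

    def error_counts_by_service(lines):
        """Aggregate parsed records into a per-service count of ERROR entries."""
        counts = Counter()
        for line in lines:
            record = parse_line(line)
            if record and record["level"] == "ERROR":
                counts[record["service"]] += 1
        return counts

    sample = [
        "2024-05-01T12:00:00Z ERROR checkout Payment gateway timeout",
        "2024-05-01T12:00:01Z INFO auth User login succeeded",
        "2024-05-01T12:00:02Z ERROR checkout Payment gateway timeout",
    ]
    print(error_counts_by_service(sample))  # Counter({'checkout': 2})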

Payments platform BridgePay confirms ransomware attack behind outage

A major U.S. payment gateway and solutions provider says a ransomware attack has knocked key systems offline, triggering a widespread outage affecting multiple services.

The incident began on Friday and quickly escalated into a nationwide disruption across BridgePay’s platform.

Ransomware confirmed within hours of outage

BridgePay Network Solutions confirmed late Friday that the incident disrupting its payment gateway was caused by ransomware.

In an update posted Feb. 6, the company said it has engaged federal law enforcement, including the FBI and U.S. Secret Service, along with external forensic and recovery teams.

“Initial forensic findings indicate that no payment card data has been compromised,” the company said, adding that any accessed files were encrypted and that there is currently “no evidence of usable data exposure.”

BleepingComputer has contacted BridgePay with questions about the ransomware group involved, which BridgePay has not yet named.

Merchants report cash-only payments

Around the same time BridgePay disclosed the incident, some U.S. merchants and organizations began telling customers they could only accept cash due to a nationwide card-processing outage.

One restaurant said its “credit card processing company had a cyber security breach” and that card payments were unavailable nationwide.

Restaurant says it can only take cash during a point-of-sale outage

The city government of Palm Bay, Florida, announced:

“BridgePay Network Solutions, our third-party credit card processing vendor, is experiencing a nationwide service disruption. As a result, the City’s online billing payment portal is currently unavailable. We do not have an estimated restoration time.”

As such, the city government suggests that customers make utility payments by cash, card, or check in person or, in limited cases, by calling the office.

Other organizations, including Lightspeed Commerce, ThriftTrac, and the City of Frisco, Texas, have reported service impacts from the BridgePay incident.

Payment gateway services hit hard

BridgePay’s status page showed major outages across core production systems, including:

  • BridgePay Gateway API (BridgeComm)
  • PayGuardian Cloud API
  • MyBridgePay virtual terminal and reporting
  • Hosted payment pages
  • PathwayLink gateway and boarding portals

Early warning signs appeared around 3:29 a.m., when monitoring detected degraded performance across multiple services, beginning with the “Gateway.Itstgate.com – virtual terminal, reporting, API” systems.

The intermittent service degradation eventually cascaded into a full system outage.

Within hours, the company disclosed the incident was cybersecurity-related and later confirmed it was ransomware.

The breadth of affected systems suggests widespread disruption for merchants and payment integrators relying on the platform for card processing.

As of the latest update, BridgePay said recovery could take time and is being handled “in a secure and responsible manner,” while the company continues its forensic investigation.

The incident adds to a growing wave of ransomware attacks targeting payment infrastructure, where outages can quickly ripple through real-world commerce when transaction pipelines go down.

T-Mobile layoffs: Telecom giant cuts 393 jobs across Washington state, including VP roles

T-Mobile is laying off 393 workers in Washington as part of a new round of cuts, according to a filing with the state Employment Security Department released Monday morning.

More than 200 different job titles are impacted, according to the filing, including analysts, engineers and technicians, as well as directors and managers.

The cuts targeted nearly 210 senior- and director-level employees, plus seven employees with vice president or senior vice president titles. They include a senior VP of talent and four VP of legal affairs roles.

Affected employees worked at the company’s Bellevue headquarters; data centers in Bellevue and East Wenatchee; and at stores and other facilities in Bothell, Bellingham, Woodinville, Spokane Valley and elsewhere.

“As the next step in our evolution, we’re making some changes while continuing to hire to ensure we have the right focus, structure and momentum to keep changing the industry through innovation and a long-standing focus on customers,” the company said in an emailed statement. It added that it was responding to “a dynamic market.”

Affected employees were given 60 days’ notice, and the departures are expected to take effect April 2.

A WARN filing submitted to the state and signed by Monica Frohock, senior director of the Magenta Service Center, attributed the cuts to “changing business needs.”

“These facilities are not being closed,” the notice stated. “The layoffs are not due to relocation or contracting out employer operations or employee positions, but it is possible that some work currently done by these employees may at some point be done by others.”

T-Mobile employed about 70,000 people as of Dec. 31, 2024. The company has nearly 8,000 workers in the Seattle region, according to LinkedIn.

The cuts come as the Seattle area is being hit by thousands of tech-related layoffs, including job losses at Amazon, Expedia, Meta, Zillow and other companies.

T-Mobile, the largest U.S. telecom company by market capitalization, laid off 121 workers in August 2025. In November, former Chief Operating Officer Srini Gopalan replaced longtime leader Mike Sievert as CEO.

T-Mobile’s stock is down nearly 20% over the past 12 months. The company reported revenue of $18.2 billion in the third quarter, up 9% year-over-year, and added 1 million new postpaid phone customers.

Verizon, another telecom giant, laid off approximately 165 employees in Washington in November.

Editor’s note: Story updated to include emailed comments from T-Mobile.

Teaching Machines to Spot Human Errors in Math Assignments

When completing math problems, students often have to show their work. It’s a method teachers use to catch errors in thinking, to make sure students are grasping mathematical concepts correctly.

New AI projects in development aim to automate that process. The idea is to train machines to catch and predict the errors students make when studying math, to better enable teachers to correct student misconceptions in real time.

Developers can now, for the first time, build these algorithms into products that help teachers without requiring them to understand machine learning, says Sarah Johnson, CEO of Teaching Lab, which provides professional development to teachers.

Some of these efforts trace back to the U.K.-based edtech platform Eedi Labs, which has held a series of coding competitions since 2020 intended to explore ways to use AI to boost math performance. The latest was held earlier this year, and it tried to use AI to capture misconceptions from multiple choice questions and accompanying student explanations. It relied upon Eedi Labs’ data but was run by The Learning Agency, an education consultancy firm in the U.S. A joint project with Vanderbilt University — and using Kaggle, a data science platform — the competition received support from the Gates Foundation and the Walton Family Foundation, and coding teams competed for $55,000 in awards.

The latest competition achieved “impressive” accuracy in predicting student misconceptions in math, according to Eedi Labs.

Researchers and edtech developers hope this kind of breakthrough can help bring useful AI applications into math classrooms — which have lagged behind in AI adoption, even as English instructors have had to rethink their writing assignments to account for student AI use. Some people have argued that, so far, there has been a conceptual problem with “mathbots.”

Perhaps training algorithms to identify common student math misconceptions could lead to the development of sophisticated tools to help teachers target instruction.

But is that enough to improve students’ declining math scores?

Solving the (Math) Problem

So far, the deluge of money pouring into artificial intelligence is unrelenting. Despite fears that the economy is in an “AI bubble”, edtech leaders hope that smart, research-backed uses of the technology will deliver gains for students.

In the early days of generative AI, people thought you could get good results by just hooking up an education platform to a large language model, says Johnson, of Teaching Lab. All these chatbot wrappers popped up, promising that teachers could create the best lesson plans using ChatGPT in their learning management systems.

But that’s not true, she says. You need to focus on applications of the technology that are trained on education-specific data to actually help classroom teachers, she adds.

That’s where Eedi Labs is trying to make a difference.

Currently, Eedi Labs sells an AI tutoring service for math. The model, which the company calls “human in the loop,” has human tutors check messages automatically generated by its platform before they are sent to students, and make edits when necessary.

Plus, through efforts like its recent competition, leaders of the platform think they can train machines to catch and predict the errors students make when studying math, further expediting learning.

But training machine learning algorithms to identify common math misconceptions a student holds isn’t all that easy.

Cutting Edge?

Whether these attempts to use AI to map student misconceptions prove useful depends on what computer scientists call “ground truth,” the quality of the data used to train the algorithms in the first place. That means it depends on the quality of the multiple choice math problem questions, and also of the misconceptions that those questions reveal, says Jim Malamut, a postdoctoral researcher at Stanford Graduate School of Education. Malamut is not affiliated with Eedi Labs or with The Learning Agency’s competition.

The approach in the latest competition is not groundbreaking, he argues.

The dataset used in this year’s misconceptions contest had teams sorting through student answers from multiple choice questions with brief rationales from students. For the company, it’s an advancement, since previous versions of the technology relied on multiple choice questions alone.
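
Neither the competition’s data nor its winning models are reproduced here, but a minimal sketch of the general technique, predicting a misconception label from a student’s free-text rationale, might look like the following scikit-learn pipeline with invented example rationales and labels:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented training examples: student rationales paired with misconception labels.
    rationales = [
        "0.25 is bigger than 0.3 because 25 is bigger than 3",
        "I added the tops and the bottoms, so 1/2 + 1/3 = 2/5",
        "0.7 is bigger than 0.12 because 7 tenths is more than 12 hundredths",
        "1/4 + 1/4 = 2/8 since you add numerators and denominators",
    ]
    labels = [
        "longer-decimal-is-larger",
        "adds-numerators-and-denominators",
        "none",
        "adds-numerators-and-denominators",
    ]

    # TF-IDF features over the free-text rationale, then a linear classifier.
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(rationales, labels)

    # Predict a label for a new, unseen rationale.
    print(model.predict(["I think 0.48 beats 0.6 because 48 is more than 6"]))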

Still, Malamut describes the use of multiple choice questions as “curious” because he believes the competition chose to work with a “simplistic format” when the tools they are testing are better-suited to discern patterns in more complex and open-ended answers from students. That is, after all, an advantage of large language models, Malamut says. In education, psychometricians and other researchers relied on multiple choice questions for a long time because they are easier to scale, but with AI that shouldn’t be as much of a barrier, Malamut argues.

Pushed by declining U.S. scores on international assessments, the country has shifted over the last decade-plus toward “Next-Generation Assessments,” which aim to test conceptual skills. It’s part of a larger shift by researchers toward the idea of “assessment for learning,” which holds that assessment tools should emphasize information that’s useful for teaching rather than what’s convenient for researchers to measure, according to Malamut.

Yet the competition relies on questions that clearly predate that trend, Malamut says, in a way that might not meet the moment.

For example, some questions asked students to figure out which decimal was the largest, which sheds very little light on conceptual understanding. Instead, current research suggests that it’s better to have students write a decimal number using base 10 blocks or to point to missing decimals on a marked number line. Historically, these sorts of questions couldn’t be used in a large-scale assessment because they are too open-ended, Malamut says. But applying AI to current thinking around education research is precisely where AI could add the most value, Malamut adds.

But for the company developing these technologies, “holistic solutions” are important.

Eedi Labs blends multiple choice questions, adaptive assessments and open responses for a comprehensive diagnosis, says cofounder Simon Woodhead. This latest competition was the first to incorporate student responses, enabling deeper analysis, he adds.

But there’s a trade-off between the time it takes to give students these assessments and the insights they give teachers, Woodhead says. So the Eedi team thinks that a system that uses multiple choice questions is useful for scanning student comprehension inside a classroom. With just a device at the front of the class, a teacher can home in on misconceptions quickly, Woodhead says. Student explanations and adaptive assessments, in contrast, help with deeper analysis of misconceptions. Blending these gives teachers the most benefit, Woodhead argues. And the success of this latest competition convinced the company to further explore using student responses, Woodhead adds.

Still, some think the questions used in the competition were not fine-tuned enough.

Woodhead notes that the competition relied on broader definitions of what counts as a “misconception” than Eedi Labs usually does. Nonetheless, the company was impressed by the accuracy of the AI predictions in the competition, he says.

Others are less sure that it really captures student misunderstandings.

Education researchers now know far more than they used to about the kinds of questions that get to the core of student thinking and reveal the misconceptions students may hold, Malamut says. But many of the questions in the contest’s dataset don’t accomplish this well, he says. Even though the questions included multiple choice options and short answers, the contest could have used better-formed questions, Malamut thinks. There are ways to ask questions that bring out student ideas. Rather than asking students to answer a question about fractions, you could ask them to critique someone else’s reasoning. For example: “Jim added these fractions in this way, showing his work like this. Do you agree with him? Why or why not? Where did he make a mistake?”

Whether or not the approach has found its final form, there is growing interest in these attempts to use AI, and that interest comes with money for exploring new tools.

From Computer Back to Human

The Trump administration is betting big on AI as a strategy for education, making federal dollars available. Some education researchers are enthusiastic, too, boosted by $26 million in funding from Digital Promise intended to help narrow the distance between best practices in education and AI.

These approaches are early, and the tools still need to be built and tested. Nevertheless, some argue it’s already paying off.

A randomized controlled trial conducted by Eedi Labs and Google DeepMind found that math tutoring that incorporated Eedi’s AI platform boosted student learning in 11- and 12-year-olds in the U.K. The study focused on the company’s “human in the loop” approach — using human-supervised AI tutoring — currently used in some classrooms. Within the U.S., the platform is used by 4,955 students across 39 K-12 schools, colleges and tutoring networks. Eedi Labs says it is conducting another randomized controlled trial in 2026 with Imagine Learning in the U.S.

Others have embraced a similar approach. Teaching Lab, for example, is actively involved in work on AI for classroom use; Johnson told EdSurge that the organization is testing a model also based on data borrowed from Eedi and a company called Anet, and that the project is currently being piloted with students.

Several of these efforts require sharing tech insights and data. That runs counter to many companies’ typical practices for protecting intellectual property, according to the Eedi Labs CEO. But he thinks the practice will pay off. “We are very keen to be at the cutting edge, that means engaging with researchers, and we see sharing some data as a really great way to do this,” he wrote in an email.

Still, everyone seems to agree that once the algorithms are trained, turning them into classroom success is another challenge.

What might that look like?

The data infrastructure can be built into products that let teachers modify curriculum based on the context of the classroom, Johnson says. If you can connect the infrastructure to student data and allow it to make inferences, it could provide teachers with useful advice, she adds.

Meg Benner, managing director of The Learning Agency, the organization that ran the misconceptions contest, suggests that this could be used to feed teachers information about which misconceptions their students hold, or even to trigger a chatbot-style lesson helping them overcome those misconceptions.

It’s an interesting research project, says Johnson, of Teaching Lab. But once this model is fully built, it will still need to be tested to see if refined diagnosis actually leads to better interventions in front of teachers and students, she adds.

Some are skeptical that the products companies build from these models will enhance learning all that much. After all, having a chatbot-style tutoring system conclude that students are using additive reasoning when multiplicative reasoning is required may not transform math instruction. Indeed, some research has shown that students don’t respond well to chatbots. For instance, the famous 5 percent problem revealed that only the top students usually see results from most digital math programs. Instead, teachers have to handle misconceptions as they come up, some argue. That means giving students an experience or conversation that exposes the limits of old ideas and the power of clear thinking. The challenge, then, is figuring out how to get the insights from the computer and machine analysis back out to the students.

But others think that the moment is exciting, even if there’s some hype.

“I’m cautiously optimistic,” says Malamut, the postdoctoral researcher at Stanford. Formative assessments and diagnostic tools exist now, but they are not automated, he says. True, the assessment data that’s easy to collect isn’t always the most helpful to teachers. But if used correctly, AI tools could possibly close that gap.

Atomically Thin Materials Significantly Shrink Qubits

Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality.

IBM’s superconducting qubit road map calls for a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor are feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to find a better path toward scalability.

Now researchers at MIT have been able both to reduce the size of the qubits and to do so in a way that reduces the interference between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.

“We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”

The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.

Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).

Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. Photo: Nathan Fiske/MIT

In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another.

As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.
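
A rough back-of-the-envelope comparison, using assumed representative values rather than figures from the MIT work (about 100 femtofarads of target capacitance, a 5-nanometer hBN layer with relative permittivity near 3.8), suggests why a thin parallel-plate dielectric shrinks the footprint so dramatically:

    import math

    EPS0 = 8.854e-12      # vacuum permittivity, F/m
    C_TARGET = 100e-15    # assumed target qubit capacitance, ~100 fF

    # Parallel-plate estimate with a thin hBN dielectric (assumed values).
    eps_r_hbn = 3.8       # approximate out-of-plane relative permittivity of hBN
    thickness = 5e-9      # assumed dielectric thickness, 5 nm

    # A = C * d / (eps0 * eps_r)
    area = C_TARGET * thickness / (EPS0 * eps_r_hbn)
    side = math.sqrt(area)

    print(f"required plate area: {area * 1e12:.1f} square micrometers")  # ~15
    print(f"square plate side: {side * 1e6:.1f} micrometers")            # ~3.9
    print("versus roughly 10,000 square micrometers for a 100 x 100 micrometer coplanar pad")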

In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.

“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics.

On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.

While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.

“What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.”

This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.

“The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.

Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.

How AI Will Change Chip Design

The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.

Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version.

But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.

How is AI currently being used to design the next generation of chips?

Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.

Heather Gorr. Photo: MathWorks

Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.

What are the benefits of using AI for chip design?

Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.
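
As a toy illustration of that surrogate-model workflow (this is not MathWorks code, and the "expensive" model here is just a stand-in function), one can fit a cheap regressor to a handful of costly evaluations and then sweep parameters against the surrogate:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def expensive_physics_model(x):
        """Stand-in for a slow physics-based simulation of some figure of merit."""
        return np.sin(3 * x) + 0.5 * x ** 2

    # Run the "expensive" model at a small number of design points.
    train_x = np.linspace(0.0, 2.0, 15).reshape(-1, 1)
    train_y = expensive_physics_model(train_x).ravel()

    # Fit a cheap surrogate to those samples.
    surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
    surrogate.fit(train_x, train_y)

    # Sweep many candidate parameters against the surrogate instead of the simulator.
    sweep_x = np.linspace(0.0, 2.0, 2000).reshape(-1, 1)
    predictions = surrogate.predict(sweep_x)
    best = sweep_x[np.argmin(predictions)][0]
    print(f"surrogate suggests a minimum near x = {best:.3f}")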

So it’s like having a digital twin in a sense?

Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you can tweak and tune, trying different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.

So, it’s going to be more efficient and, as you said, cheaper?

Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.

We’ve talked about the benefits. How about the drawbacks?

Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years.

Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together.

One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.

How can engineers use AI to better prepare and extract insights from hardware or sensor data?

Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.
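
A small example of the preprocessing Gorr mentions, resampling irregular sensor readings onto a common clock and then inspecting the frequency domain, sketched here in Python rather than MATLAB and using made-up data:

    import numpy as np

    # Made-up, irregularly sampled sensor signal with a 50 Hz component plus noise.
    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0.0, 1.0, 800))          # irregular timestamps, seconds
    x = np.sin(2 * np.pi * 50 * t) + 0.2 * rng.standard_normal(t.size)

    # Resample onto a uniform 1 kHz grid so multiple sensors can share a time base.
    fs = 1000.0
    t_uniform = np.arange(0.0, 1.0, 1.0 / fs)
    x_uniform = np.interp(t_uniform, t, x)

    # Frequency-domain view: the dominant peak should sit near 50 Hz.
    spectrum = np.abs(np.fft.rfft(x_uniform - x_uniform.mean()))
    freqs = np.fft.rfftfreq(t_uniform.size, d=1.0 / fs)
    print(f"dominant frequency: {freqs[spectrum.argmax()]:.1f} Hz")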

One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.

What should engineers and designers consider when using AI for chip design?

Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.

How do you think AI will affect chip designers’ jobs?

Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.

How do you envision the future of AI and chip design?

Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.

Nvidia rival Cerebras raises $1bn at $23bn valuation

Cerebras raised $1.1bn in a previous round last September at an $8.1bn post-money valuation.

Cerebras Systems, the AI chipmaker aiming to rival Nvidia, has raised $1bn in a Series H round led by Tiger Global with participation from AMD. The raise values the company at around $23bn, nearly triple the valuation it secured a little over four months ago.

Other backers in this round include Benchmark; Fidelity Management & Research Company; Atreides Management; Alpha Wave Global; Altimeter; Coatue; and 1789 Capital, among others.

The new round comes after Cerebras raised $1.1bn last September at an $8.1bn post-money valuation backed by several of the same investors.

Just days later, the company withdrew from a planned initial public offering (IPO) without providing an official reason. At the time of the IPO filing in 2024, there was criticism around its heavy reliance on a single United Arab Emirates-based customer, the Microsoft-backed G42.

Cerebras said it still intends to go public as soon as possible.

The recent raise better positions the company to compete with global AI chip leader Nvidia. Cerebras claims that it builds the “fastest AI infrastructure in the world,” and company CEO Andrew Feldman has gone on record saying that his hardware runs AI models multiple times faster than Nvidia’s.

Cerebras is behind WSE-3, touted as the “largest” AI chip ever built, with 19 times more transistors and 28 times more compute than the Nvidia B200, according to the company.

The company has a close connection with OpenAI, according to statements made by both Feldman and OpenAI chief Sam Altman – who happens to be an early investor in the chipmaker. Last month, the two announced a partnership to deploy 750MW of Cerebras’s wafer-scale systems to make OpenAI’s chatbots faster.

OpenAI – a voracious user of Nvidia’s AI technology – has been in search of alternatives,  although that’s not to say that OpenAI is backing down from using Nvidia technology in the future.

Last year, OpenAI drew up a 6GW agreement with AMD to power its AI infrastructure. The first 1GW deployment of AMD Instinct MI450 GPUs is set to begin in the second half of 2026.

At the time of the announcement, Altman said that the deal was “incremental” to OpenAI’s work with Nvidia. “We plan to increase our Nvidia purchasing over time”, he added.

Medtronic and University of Galway open device prototype hub

The facility is part of a five-year, €5m signature innovation partnership between Medtronic and the university.

US and Irish medical device company Medtronic and the University of Galway have launched their Medical Device Prototype Hub, a specialist facility designed to support the medtech ecosystem, STEM engagement and research.

Development of the hub, which belongs to the university’s new Technology Services Directorate, is part of a five-year, €5m signature innovation partnership between Medtronic and the university. 

Professor David Burn, the president of the university, said: “The launch of the Medical Device Prototype Hub at University of Galway marks a hugely significant milestone in our signature partnership with Medtronic, but it also sends a strong message to all those in the sector and all those who are driving innovation.

“University of Galway is creating the ecosystem in which our partners in research and innovation can thrive. We look forward to celebrating the breakthroughs and successes that this initiative enables.”

The Medical Device Prototype Hub forms part of the Institute for Health Discovery and Innovation, which was established at the university in 2024.

It will be further supported via collaborations with government agencies and industry leaders, aiming to create a collaborative environment that promotes innovation and regional growth in life sciences and medical technologies. 

The university said that the hub has a range of expert staff to facilitate concept creation, development and manufacturing of innovative medical device prototypes.

It offers a suite of services to support early-stage medical device innovation – for example, virtual and physical prototyping – that enables rapid design iteration through computer aided design, modelling and simulation.  

“The Technology Services Directorate brings together key research facilities that support fundamental research at University of Galway,” said Aoife Duffy, the head of the directorate. 

“It aims to advance our research excellence by bringing together state-of-the-art core facilities and making strategic decisions on infrastructure and investment. The new prototype hub significantly enhances the innovation pathway available for the university research community and wider, and we look forward to working with Medtronic on this partnership.” 

Ronan Rogers, senior R&D director at Medtronic, added: “Today’s launch of the Medical Device Prototype Hub represents an exciting next step in our long‑standing partnership with University of Galway. Medtronic has deep roots in the west of Ireland, and this facility strengthens a shared commitment to advancing research, accelerating innovation and developing the next generation of medical technologies. 

“We are proud to invest in an ecosystem that not only drives technological progress but also supports talent development. This hub will unlock new avenues for discovery and accelerate the path from promising ideas to real‑world medical solutions for patients.”

Just last week (27 January), two University of Galway projects won proof-of-concept grants from the European Research Council. One of the winning Galway projects is called Concept-AM and is being led by Prof Ted Vaughan, who is also involved with the new hub.

Concept-AM aims to advance software that enables engineers to design lighter, stronger and more efficient components optimised for 3D printing across biomedical, automotive and aerospace applications, creating complex and lightweight parts with less material waste.

Shokz OpenFit Pro, Nex Playground, Sony A7 V and more

We’re starting to hit our stride in 2026. Now that February is here, our reviews team is flush with new devices to test, which means you’ve got a lot to catch up on if you haven’t been following along. Read on for a roundup of the most compelling new gear we’ve tested recently from gaming, PCs, cameras and more.

Nex Playground

The Nex Playground brings motion-tracked games to the entire family. Consider it the best of the Xbox Kinect in a tiny box.

Pros
  • Fun core titles
  • Solid motion-tracking
  • Well-designed hardware and UI
  • Large library of games
  • Works offline
Cons
  • Requires an ongoing subscription to access most games
  • Needs large open space for play

If you still have a fondness for the Xbox Kinect, the Nex Playground might be right up your alley. Senior reporter Devindra Hardawar recently put the tiny box through its paces and found an active gaming experience that’s fun for the whole family. “While I have some concerns about the company’s subscription model, Nex has accomplished a rare feat: It developed a simple box that makes it easy for your entire family to jump into genuinely innovative games and experiences,” he wrote.

MSI’s Prestige 14 Flip AI+

MSI’s Prestige 14 Flip AI+ is a remarkably powerful ultraportable, thanks to Intel’s Panther Lake chips. But it’s held back by a clunky trackpad and weak keyboard.

Pros
  • Excellent CPU performance
  • Solid gaming support
  • Bold OLED screen
  • Tons of ports
  • Relatively affordable
Cons
  • Awful mechanical trackpad
  • Dull-feeling keyboard
  • Display is limited to 60Hz

Devindra also tested MSI’s latest laptop, the powerful Prestige 14 Flip AI+. While the machine got high marks for its performance, display and connectivity, he noted that the overall experience is hindered by a subpar keyboard and a truly awful trackpad. “As one of the earliest Panther Lake laptops on the market, the $1,299 Prestige 14 Flip AI+ is a solid machine, if you’re willing to overlook its touchpad flaws,” he explained. “More than anything though, the Prestige 14 makes me excited to see what other PC makers offer with Intel’s new chips.”

Shokz OpenFit Pro

Finally, a set of open earbuds that actually sound good and provide noticeable ambient noise reduction.

Pros
  • Effective noise reduction
  • Comfy fit
  • Great sound for open earbuds
  • Dolby Atmos support
Cons
  • Sound quality varies with ear shape
  • Over ear hook isn’t for everyone
  • Noise reduction isn’t as effective as ANC

Fresh off of its Best of CES selection, I conducted a full review of the OpenFit Pro earbuds from Shokz. I continue to be impressed by the earbuds’ ability to reduce ambient noise while keeping your ears open. And the overall sound quality is excellent for a product that sits outside of your ears.

Sony A7 V

With a new partially-stacked 33MP sensor, Sony’s A7 V offers speed, autofocus accuracy and the best image quality in its class.

Pros
  • Fast shooting speeds
  • Quick and accurate autofocus
  • Outstanding photo quality
  • Good video stabilization
Cons
  • Video lags behind rivals
  • Uncomfortable to hold for long periods

Contributing reporter Steve Dent has been busy testing cameras to start the year. This week he added the Sony A7 V to the list, noting the excellent photo quality and accurate autofocus. “The A7 V is an incredible camera for photography, with speeds, autofocus accuracy and image quality ahead of rivals, including the Canon R6 III, Panasonic S1 II and Nikon Z6 III,” he said. “However, Sony isn’t keeping up with those models for video.”

Apple AirTag (2026)

Apple has improved its Bluetooth tracker in practically every way, making it louder and extending its detection range.

Pros
  • Precise Finding is far more useful
  • Louder and easier to hear
  • Same price as the original AirTag
Cons
  • Still lacks a keyring hole
  • Apple’s AirTag accessories are too expensive

Our first Editors’ Choice device of 2026 is Apple’s updated AirTag. All of the upgrades lead to a better overall item tracker, according to UK bureau chief Mat Smith. “There’s no doubt the second-gen AirTags are improved, and thankfully, upgrading to the new capabilities doesn’t come at too steep a cost,” he concluded.

Daily Deal: The Ultimate AWS Data Master Class Bundle

from the good-deals-on-cool-stuff dept

The Ultimate AWS Data Master Class Bundle has 9 courses to get you up to speed on Amazon Web Services. The courses cover AWS, DevOps, Kubernetes Mesosphere DC/OS, AWS Redshift, and more. It’s on sale for $40.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Filed Under: daily deal

OpenAI launches centralized agent platform as enterprises push for multi-vendor flexibility

OpenAI launched Frontier, a platform for building and governing enterprise AI agents, as companies increasingly question whether to commit to single-vendor systems or maintain multi-model flexibility.

The platform offers integrated tools for agent execution, evaluation, and governance in one place. But Frontier also reflects OpenAI’s push into enterprise AI at a moment when organizations are actively moving toward multi-vendor architectures — creating tension between OpenAI’s centralized approach and what enterprises say they want.

Tatyana Mamut, CEO of the agent observability company Wayfound, told VentureBeat that enterprises don’t want to be locked into a single vendor or platform because AI strategies are ever-evolving. 

“They’re not ready to fully commit. Everybody I talk to knows that eventually they’ll move to a one-size-fits-all solution, but right now, things are moving too fast for us to commit,” Mamut said. “This is the reason why most AI contracts are not traditional SaaS contracts; nobody is signing multi-year contracts anymore because if something great comes out next month, I need to be able to pivot, and I can’t be locked in.”

How Frontier compares to AWS Bedrock

OpenAI is not the first to offer an end-to-end platform for building, prototyping, testing, deploying, and monitoring agents. AWS launched Bedrock AgentCore with the idea that there will be enterprise customers who don’t want to assemble an extensive collection of tools and platforms for their agentic AI projects. 

However, AWS offers a significant advantage: access to multiple LLMs for building agents. Enterprises can choose a hybrid system in which an agent selects the best LLM for each task. OpenAI has not made it clear if it will open Frontier to models and tools from other vendors.
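
As a purely illustrative sketch of that routing idea (this is not the Bedrock or Frontier API; the model names and the invoke stub are invented), a hybrid setup can be as simple as a task-to-model lookup placed in front of whatever client a platform exposes:

    from typing import Callable, Dict

    # Hypothetical model identifiers; a real deployment would use the IDs its platform exposes.
    ROUTING_TABLE: Dict[str, str] = {
        "summarization": "vendor-a/fast-small-model",
        "code_generation": "vendor-b/code-model",
        "complex_reasoning": "vendor-c/frontier-model",
    }
    DEFAULT_MODEL = "vendor-a/fast-small-model"

    def route(task_type: str) -> str:
        """Pick a model for the task; fall back to a cheap default for unknown task types."""
        return ROUTING_TABLE.get(task_type, DEFAULT_MODEL)

    def run_task(task_type: str, prompt: str, invoke: Callable[[str, str], str]) -> str:
        """Dispatch the prompt to whichever model the router selects.

        `invoke` stands in for the platform's actual client call and is injected
        so the routing logic stays vendor-neutral.
        """
        return invoke(route(task_type), prompt)

    # Stubbed invoke function, for demonstration only.
    def fake_invoke(model_id: str, prompt: str) -> str:
        return f"[{model_id}] would answer: {prompt!r}"

    print(run_task("code_generation", "Write a function that reverses a string", fake_invoke))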

OpenAI did not say whether Frontier users can bring any third-party tools they already use to the platform, and it didn’t comment on why it chose to release Frontier now when enterprises are considering more hybrid systems.

But the company is working with companies including Clay, Abridge, Harvey, Decagon, Ambience, and Sierra to design solutions within Frontier. 

What is Frontier

Frontier is a single platform that offers access to different enterprise-grade tools from OpenAI. The company told VentureBeat that Frontier will not replace offerings such as the Agents SDK, AgentKit, or its suite of APIs. 

OpenAI said Frontier helps bring context, agent execution, and evaluation into a single platform rather than multiple systems and tools.

“Frontier gives agents the same skills people need to succeed at work: shared context, onboarding, hands-on learning with feedback, and clear permissions and boundaries. That’s how teams move beyond isolated use cases to AI co-workers that work across the business,” OpenAI said in a blog post.

Users can connect their data sources, CRM tools, and other internal applications directly to Frontier, effectively creating a semantic layer that normalizes permissions and retrieval logic for agents built on the platform to pull information from. Frontier has an agent execution environment, which can run on local environments, cloud infrastructures, or “OpenAI-hosted runtimes without forcing teams to reinvent how work gets done.”

Built-in evaluation structures, security, and governance dashboards allow teams to monitor agent behavior and performance. These give organizations visibility into their agents’ success rates, accuracy, and latency. OpenAI said Frontier incorporates its enterprise-grade data security layer, including the option for companies to choose where to store their data at rest.
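
Frontier’s dashboards and APIs are not documented in detail here, so the following is only a hypothetical sketch of the bookkeeping such a governance layer performs, recording success and latency per agent run; every name in it is invented:

    import time
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class RunRecord:
        agent_name: str
        succeeded: bool
        latency_s: float

    @dataclass
    class AgentMonitor:
        """Toy governance layer: wraps agent calls and aggregates basic metrics."""
        records: List[RunRecord] = field(default_factory=list)

        def run(self, agent_name: str, agent_fn: Callable[[str], str], task: str) -> str:
            start = time.perf_counter()
            try:
                result = agent_fn(task)
                self.records.append(RunRecord(agent_name, True, time.perf_counter() - start))
                return result
            except Exception:
                self.records.append(RunRecord(agent_name, False, time.perf_counter() - start))
                raise

        def summary(self) -> Dict[str, Dict[str, float]]:
            out: Dict[str, Dict[str, float]] = {}
            for name in {r.agent_name for r in self.records}:
                runs = [r for r in self.records if r.agent_name == name]
                out[name] = {
                    "success_rate": sum(r.succeeded for r in runs) / len(runs),
                    "avg_latency_s": sum(r.latency_s for r in runs) / len(runs),
                }
            return out

    # Demonstration with a stand-in "agent" that just echoes its task.
    monitor = AgentMonitor()
    monitor.run("invoice-triage", lambda task: f"processed: {task}", "invoice #1234")
    print(monitor.summary())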

Frontier launched with a small group of initial customers, including HP, Intuit, Oracle, State Farm, Thermo Fisher, and Uber.

Security and governance concerns

Frontier is available only to a select group of customers with wider availability coming soon. Enterprise providers are already weighing what the platform needs to address.

Ellen Boehm, senior vice president for IoT and AI Identity Innovation at Keyfactor, told VentureBeat that companies will still need to focus their agents on security and identity. 

“Agent platforms like OpenAI’s Frontier model are critical for democratizing AI adoption beyond the enterprise,” she said. “This levels the playing field — startups get enterprise-grade capabilities without enterprise-scale infrastructure, which means more innovation and healthier competition across the market. But accessible doesn’t mean you skip the fundamentals.” 

Salesforce AI executive vice president and GM Madhav Thattai, who is overseeing an agent builder and library platform at his company, noted that no matter the platform, enterprises need to focus agents on value.

“What we’re finding is that to build an agent that actually does something at scale that creates real ROI is pretty challenging,” Thattai said. “The true business value for enterprises doesn’t reside in the AI model alone — it’s in the ‘last mile.’”

“That is the software layer that translates raw technology into trusted, autonomous execution. To traverse this last mile, agents must be able to reason through complexity and operate on trusted business data, which is exactly where we are focusing.” 
