Andrew Ng: Unbiggen AI – IEEE Spectrum

Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.


Ng’s current efforts are focused on his company
Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.



The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?


Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.

When you say you want a foundation model for computer vision, what do you mean by that?

Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.

What needs to happen for someone to build a foundation model for video?


Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.

Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.


It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.


Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.


I remember when my students and I published the first
NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.

I expect they’re both convinced now.


Ng: I think so, yes.

Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”


How do you define data-centric AI, and why do you consider it a movement?


Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.

When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make them a systematic engineering discipline.

The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.

You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?


Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.

When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?

Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.



For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
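Ng doesn't describe Landing AI's tooling in detail, but the core of such a consistency check can be sketched in a few lines of Python: group annotations by item and surface every item whose annotators disagree. All item IDs and labels below are invented for illustration.

```python
from collections import defaultdict

def flag_inconsistent(annotations):
    """Return items that carry more than one distinct label.

    annotations: list of (item_id, annotator, label) tuples.
    The flagged subset is what you'd queue up for relabeling.
    """
    labels_per_item = defaultdict(set)
    for item_id, _annotator, label in annotations:
        labels_per_item[item_id].add(label)
    return {item: labels for item, labels in labels_per_item.items()
            if len(labels) > 1}

annotations = [
    ("img_001", "alice", "scratch"),
    ("img_001", "bob",   "scratch"),   # agreement: fine
    ("img_002", "alice", "dent"),
    ("img_002", "bob",   "scratch"),   # disagreement: flag for review
]
flagged = {k: sorted(v) for k, v in flag_inconsistent(annotations).items()}
print(flagged)  # {'img_002': ['dent', 'scratch']}
```

In practice the grouping key would be richer than an exact item ID (near-duplicate images, for instance), but the workflow is the same: flag, review, relabel, retrain.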

Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?

Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.

One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.


When you talk about engineering the data, what do you mean exactly?

Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.

For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
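That diagnosis is a slice-based error analysis. A minimal sketch, assuming each evaluation example carries a metadata tag (here, the background-noise condition; the tags and numbers are illustrative):

```python
from collections import defaultdict

def error_rate_by_slice(examples):
    """Compute the error rate for each metadata tag.

    examples: list of (tag, correct) pairs.
    The worst slice tells you where targeted data collection pays off.
    """
    totals, errors = defaultdict(int), defaultdict(int)
    for tag, correct in examples:
        totals[tag] += 1
        if not correct:
            errors[tag] += 1
    return {tag: errors[tag] / totals[tag] for tag in totals}

examples = [
    ("clean_audio", True), ("clean_audio", True), ("clean_audio", False),
    ("car_noise", False), ("car_noise", False), ("car_noise", True),
]
rates = error_rate_by_slice(examples)
worst = max(rates, key=rates.get)
print(worst, round(rates[worst], 2))  # car_noise 0.67
```

Collecting more car-noise recordings then targets exactly the slice that is failing, rather than the whole distribution.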



What about using synthetic data, is that often a good solution?

Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.

Do you mean that synthetic data would allow you to try the model on more data sets?

Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.
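Ng doesn't give an implementation, but the targeting logic he describes (spend the generation budget only on the classes that error analysis flags as weak) can be sketched as follows; the class names, recall threshold, and budget split are all illustrative:

```python
from collections import defaultdict

def per_class_recall(preds):
    """preds: list of (true_label, predicted_label) pairs."""
    hits, totals = defaultdict(int), defaultdict(int)
    for true, pred in preds:
        totals[true] += 1
        if pred == true:
            hits[true] += 1
    return {c: hits[c] / totals[c] for c in totals}

def synthesis_budget(recalls, total_budget=100, threshold=0.8):
    """Split the synthetic-data budget evenly over weak classes only."""
    weak = [c for c, r in recalls.items() if r < threshold]
    return {c: total_budget // len(weak) for c in weak} if weak else {}

# Model does fine on scratches (9/10) but poorly on pit marks (5/10).
preds = ([("scratch", "scratch")] * 9 + [("scratch", "dent")]
         + [("pit_mark", "pit_mark")] * 5 + [("pit_mark", "scratch")] * 5)
budget = synthesis_budget(per_class_recall(preds))
print(budget)  # {'pit_mark': 100}: generate pit-mark images only
```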



Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.


To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?


Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.

One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, and when and how to improve the labeling of data so the performance of the model improves. Our training and software support them all the way through deploying the trained model to an edge device in the factory.

How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?

Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.
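The interview doesn't describe Landing AI's drift tooling, but the simplest version of such a flag compares a recent batch of some monitored feature (say, mean image brightness) against a reference distribution logged at commissioning time. A minimal sketch; production systems would use heavier tests such as Kolmogorov-Smirnov or population stability index:

```python
import statistics

def drift_flag(reference, recent, z_threshold=3.0):
    """Flag drift when the recent batch mean strays more than
    z_threshold standard errors from the reference mean."""
    mu = statistics.mean(reference)
    sigma = statistics.stdev(reference)
    standard_error = sigma / len(recent) ** 0.5
    z = abs(statistics.mean(recent) - mu) / standard_error
    return z > z_threshold

# Brightness values logged when the line was commissioned.
reference = [0.50, 0.52, 0.48, 0.51, 0.49, 0.50, 0.53, 0.47]
stable  = [0.49, 0.51, 0.50, 0.52]   # normal operation
shifted = [0.70, 0.72, 0.69, 0.71]   # lighting changed on the line
print(drift_flag(reference, stable), drift_flag(reference, shifted))  # False True
```

When the flag fires, the customer relabels or collects fresh data, retrains, and redeploys, which is exactly the "3 a.m." self-service loop Ng describes.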


In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?

So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.

Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.

Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?


Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.


This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”

This British Car Combined Two Aircraft Engines For Nearly 1000 HP In The ’20s





Carl Benz patented his squat, three-wheeled Benz Patent Motor Car (Model no. 1) in 1886, and it didn’t take long for humanity’s obsession with automobiles to take hold. In 40 short years, we went from a German one-cylinder four-stroke engine producing just 0.75 hp to a four-wheeled, British-made bullet powered by two 22.4-liter V12 Matabele airplane engines, each producing 435 hp. The combo isn’t a big deal now, admittedly, with half a dozen production cars packing 1,000 horses or more, but it was certainly impressive for the 1920s.

This behemoth, known as the Sunbeam 1,000 HP, was nearly 24 feet long and weighed 4 tons, yet it was the first car to go faster than 200 mph — exactly what it was made to do. Henry Segrave was at the wheel of the Sunbeam, sometimes referred to as “The Slug” or “Mystery,” when he broke that 200-mph barrier on March 29, 1927. Segrave and The Slug achieved that milestone on the hard white sands of Daytona Beach, Florida, which had hosted record-breaking speed trials since racing began there in 1902.


The Sunbeam’s achievement came about 20 years after the first-ever 100-mph run, which took place on July 21, 1904. That year, Frenchman Louis Emile Rigolly hit 103.561 mph on a beach in Ostend, Belgium.


This was not your ordinary Slug

Sunbeam driver Henry Segrave had previously set a Land Speed Record almost exactly a year earlier, hitting 152.33 mph while driving a 4.0-liter Sunbeam Tiger, so he was very familiar with the need for speed. This new, more powerful Sunbeam 1000 was the brainchild of chief engineer and designer Louis Coatalen, who decided to place the two Matabele airplane engines in line. 

Both of the massive V12s had double overhead camshafts and 48 valves. The one sitting up front was mated to a custom-built three-speed gearbox, while the rear engine was connected to the back wheels via chain sprockets. Segrave was nestled tightly in between the beast’s metallic hearts, which had a wild past all of their own.

Both Matabele engines were built in 1918 and destined for World War I airplanes, but were never used. Two years later, they (along with two other engines) were dropped into a 39-foot single-step hydroplane (the Maple Leaf V) and used for powerboat racing. The following year, they were transferred to the 34-foot Maple Leaf VII and used again, although the boat sank on its first run. Both engines were recovered and sent back to the U.K., where they sat around until being used in the Sunbeam.


Fittingly, the slug-like body of the Sunbeam resembled an upside-down boat in many ways, an intentional decision to improve aerodynamics. Additionally, it had a flat underbelly, with the idea that it would help the car slide along the beach if it lost a wheel, thus avoiding a major catastrophe.


The British beast comes back to life

Louis Coatalen developed the engine placement and internal workings, while Captain JA “Jack” Irving built the Mystery using a chassis from John Thompson Motor Pressings, steel forgings from Vickers, a set of special Hartford shock absorbers, and a braking system from Dewandre Vacuum. When driver Henry Segrave heard the beast roar for the first time, the car reportedly shook the Sunbeam Moorfield facility in Wolverhampton so hard that it convinced Segrave it couldn’t be driven. But drive the monster he did, achieving an average speed of 203.79 mph at Daytona Beach.

Records are made to be broken, and this one fell less than a year later when Malcolm Campbell drove another Sunbeam, known as the Blue Bird, to 206.956 mph at Daytona on February 19, 1928, making the Blue Bird one of the many cars to hold the title of fastest in the world over the years. With its glory faded, the Sunbeam 1000 was parked and nearly forgotten for a time. Once rediscovered, it bounced around until it was eventually purchased by the Montagu Motor Museum in the United Kingdom (the forerunner to the National Motor Museum) in 1970.

A total refurbishment began in 2024, aiming to finish by March 2027, so it could be sent to Daytona Beach for the 100th anniversary of its land speed record. The fully rebuilt rear engine was fired up for the first time in 90 years in front of onlookers at the National Motor Museum in September 2025. Only time will tell whether the team behind the restoration can cross the finish line in Daytona in 2027.





CISA warns feds to patch iOS flaws exploited in crypto-theft attacks


The U.S. Cybersecurity and Infrastructure Security Agency (CISA) ordered federal agencies to patch three iOS security flaws targeted in cyberespionage and crypto-theft attacks using the Coruna exploit kit.

As Google Threat Intelligence Group (GTIG) researchers revealed earlier this week, Coruna uses multiple exploit chains targeting 23 iOS vulnerabilities, many of which were deployed in zero-day attacks.

However, the exploits will not work on recent versions of iOS and will be blocked if the target is using private browsing or has enabled Apple’s Lockdown Mode anti-spyware protection feature.

Coruna provides threat actors with Pointer Authentication Code (PAC) bypass, sandbox escape, and PPL (Page Protection Layer) bypass capabilities, and enables them to gain WebKit remote code execution and escalate permissions to Kernel privileges on vulnerable devices.


GTIG observed the exploit kit being used by multiple threat actors last year, including a surveillance vendor customer, a suspected Russian state-backed hacking group (UNC6353), and a financially motivated Chinese threat actor (UNC6691).

The latter deployed it on fake gambling and crypto websites and used it to deliver a malware payload designed to steal infected victims’ cryptocurrency wallets.

Coruna attacks timeline (GTIG)

Mobile security firm iVerify also said that Coruna is an example of “sophisticated spyware-grade capabilities” that migrated “from commercial surveillance vendors into the hands of nation-state actors and, ultimately, mass-scale criminal operations.”

On Thursday, CISA added three of the 23 Coruna vulnerabilities to its catalog of Known Exploited Vulnerabilities, ordering Federal Civilian Executive Branch (FCEB) agencies to secure their devices by March 26, as mandated by the Binding Operational Directive (BOD) 22-01.

“Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable,” CISA warned.


“These types of vulnerabilities are frequent attack vectors for malicious cyber actors and pose significant risks to the federal enterprise.”

Although BOD 22-01 applies only to federal agencies, CISA urged all organizations, including private sector companies, to prioritize patching these flaws to secure their devices against attacks as soon as possible.



Oracle to cut ‘thousands’ of jobs, reports Bloomberg


Oracle will cut thousands of jobs to funnel funds into its major AI data centre expansion efforts, according to Bloomberg.

The cuts will affect divisions across the company and may come as soon as this month, the publication said. Some of the cuts may target roles that AI has made less necessary.

The latest data shows that Oracle employs around 162,000 people globally, with around 900 based in Ireland.


Last September, the company revealed plans for its largest-ever restructuring, set to cost up to $1.6bn. At the time, Oracle’s Irish arm sent a collective redundancy notification to the Government.

SiliconRepublic.com has contacted Oracle for details on the latest layoffs and their effects in Ireland.

Oracle is one of the world’s largest cloud operators, having cemented itself as a leading AI infrastructure provider tapped by major cloud users, such as OpenAI.

OpenAI has promised Oracle $300bn for its compute power, but, as TechCrunch highlights, much of the promised spending is speculative and highly dependent on the companies’ growth.


Plus, data compiled by Bloomberg shows that Oracle will have negative cash flow on account of the data centre buildout until 2030. The massive AI expenditures turned Oracle’s cash flow negative last year for the first time since 1992, the publication noted.

Early last month, Oracle said it plans to raise up to $50bn through debt and equity sales to build additional cloud capacity.

The Larry Ellison-led company is also pouring money into OpenAI as part of the major $500bn AI infrastructure build-out called Stargate, while a close relationship with the US government helped it secure a 15pc stake in the new TikTok USDS entity, as well as control over the platform’s algorithm.

Oracle enjoyed strong investor support in the initial years of the AI boom, which boosted the company’s stock 61pc in 2024 and 20pc in 2025. That support briefly made Ellison the world’s richest man in September last year.


However, investors have been wary of massive AI spending in recent months, sending Oracle shares down 54pc since September.

Several Big Tech firms have laid off employees over the past year, including Microsoft, which axed thousands, Block, which is cutting around 40pc of its workforce, and Amazon, which has cut more than 30,000 jobs since October.


Larry Ellison, 2010. Image: Ilan Costica, via Wikimedia Commons (CC BY-SA 3.0)


GoPro Lit Hero Review: a tiny action cam, with too many compromises


GoPro Lit Hero: two-minute review

GoPro is a name that’s synonymous with the action cam market, with the brand having largely been responsible for the explosion in popularity of such cameras over the past two decades. The brand has come a long way since its first Hero camera, a 35mm film-compatible wearable model released in 2004.


LangChain’s CEO argues that better models alone won’t get your AI agent to production

As models get smarter and more capable, the “harnesses” around them must also evolve.

This “harness engineering” is an extension of context engineering, says LangChain co-founder and CEO Harrison Chase in a new VentureBeat Beyond the Pilot podcast episode. Whereas traditional AI harnesses have tended to keep models from running in loops and calling tools, harnesses built specifically for AI agents let them interact more independently and carry out long-running tasks effectively.

Chase also weighed in on OpenAI’s acquisition of OpenClaw, arguing that its viral success came down to a willingness to “let it rip” in ways that no major lab would — and questioning whether the acquisition actually gets OpenAI closer to a safe enterprise version of the product.

“The trend in harnesses is to actually give the large language model (LLM) itself more control over context engineering, letting it decide what it sees and what it doesn’t see,” Chase says. “Now, this idea of a long-running, more autonomous assistant is viable.”


Tracking progress and maintaining coherence

While the concept of allowing LLMs to run in a loop and call tools seems relatively simple, it’s difficult to pull off reliably, Chase noted. For a while, models were “below the threshold of usefulness” and simply couldn’t run in a loop, so devs used graphs and wrote chains to get around that. Chase pointed to AutoGPT — once the fastest-growing GitHub project ever — as a cautionary example: same architecture as today’s top agents, but the models weren’t good enough yet to run reliably in a loop, so it faded fast.

But as LLMs keep improving, teams can construct environments where models can run in loops and plan over longer horizons, and they can continually improve these harnesses. Previously, “you couldn’t really make improvements to the harness because you couldn’t actually run the model in a harness,” Chase said.

LangChain’s answer to this is Deep Agents, a customizable general-purpose harness.


Built on LangChain and LangGraph, it has planning capabilities, a virtual filesystem, context and token management, code execution, and skills and memory functions. Further, it can delegate tasks to subagents; these are specialized with different tools and configurations and can work in parallel. Context is also isolated, meaning subagent work doesn’t clutter the main agent’s context, and large subtask context is compressed into a single result for token efficiency.

All of these agents have access to file systems, Chase explained, and can essentially create to-do lists that they can execute on and track over time.

“When it goes on to the next step, and it goes on to step two or step three or step four out of a 200-step process, it has a way to track its progress and keep that coherence,” Chase said. “It comes down to letting the LLM write its thoughts down as it goes along, essentially.”

He emphasized that harnesses should be designed so that models can maintain coherence over longer tasks, and be “amenable” to models deciding when to compact context at points they determine are “advantageous.”
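Deep Agents' internals aren't shown in the interview, but the write-it-down pattern Chase describes can be sketched independently of any particular model: the plan lives in a virtual file outside the context window, so progress survives context compaction. The `call_llm` stub below stands in for a real model call, and all names are invented for illustration.

```python
def call_llm(prompt: str) -> str:
    """Stub for a real model call; always reports the step as complete."""
    return "done"

class TodoAgent:
    """Minimal sketch of the 'write your thoughts down' pattern:
    the plan lives in a virtual file, so the agent can be at step 4
    of a 200-step process and still know exactly where it is."""

    def __init__(self, steps):
        # Virtual filesystem: one to-do file of [status, step] pairs.
        self.fs = {"todo.md": [["pending", s] for s in steps]}

    def next_pending(self):
        for i, (status, step) in enumerate(self.fs["todo.md"]):
            if status == "pending":
                return i, step
        return None, None

    def run_one_turn(self):
        i, step = self.next_pending()
        if step is None:
            return "all done"
        status = call_llm(f"Complete this step: {step}")
        self.fs["todo.md"][i][0] = status  # progress persists in the file
        return step

agent = TodoAgent(["inspect logs", "write fix", "run tests"])
print(agent.run_one_turn())  # inspect logs
print(agent.run_one_turn())  # write fix
```

Because the to-do file, not the context window, is the source of truth, the agent can be compacted or resumed at any point and still pick up at the first pending step.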

Advertisement

Also, giving agents access to code interpreters and Bash tools increases flexibility. And providing agents with skills, as opposed to just tools loaded up front, allows them to load information when they need it. “So rather than hard code everything into one big system prompt,” Chase explained, “you could have a smaller system prompt: ‘This is the core foundation, but if I need to do X, let me read the skill for X. If I need to do Y, let me read the skill for Y.’”
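A minimal sketch of that skills idea: a small core prompt, plus instruction files loaded only when a task needs them. The skill names and contents here are invented for illustration.

```python
SKILLS = {
    # Each skill is a self-contained instruction file, loaded on demand
    # instead of being packed into one giant system prompt up front.
    "deploy": "To deploy: build the image, push it, then roll the service.",
    "triage": "To triage: reproduce, bisect, then file a minimal case.",
}

CORE_PROMPT = "You are a build assistant. Load a skill when a task needs it."

def build_prompt(task: str, needed_skills: list[str]) -> str:
    """Assemble the small core prompt plus only the skills this task needs."""
    loaded = [SKILLS[name] for name in needed_skills if name in SKILLS]
    return "\n\n".join([CORE_PROMPT, *loaded, f"Task: {task}"])

prompt = build_prompt("ship v2.1", ["deploy"])
print("roll the service" in prompt, "triage" in prompt)  # True False
```

The payoff is token economy: unused skills never enter the context, so the model sees only the instructions relevant to the task at hand.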

Essentially, context engineering is a “really fancy” way of saying: What is the LLM seeing? Because that’s different from what developers see, he noted. When human devs can analyze agent traces, they can put themselves in the AI’s “mindset” and answer questions like: What is the system prompt? How is it created? Is it static or is it populated? What tools does the agent have? When it makes a tool call, and gets a response back, how is that presented?

“When agents mess up, they mess up because they don’t have the right context; when they succeed, they succeed because they have the right context,” Chase said. “I think of context engineering as bringing the right information in the right format to the LLM at the right time.”

Listen to the podcast to hear more about:

  • How LangChain built its stack: LangGraph as the core pillar, LangChain at the center, Deep Agents on top.

  • Why code sandboxes will be the next big thing.

  • How a different type of UX will evolve as agents run at longer intervals (or continuously).

  • Why traces and observability are core to building an agent that actually works.

You can also listen and subscribe to Beyond the Pilot on Spotify, Apple or wherever you get your podcasts.

Source link


Tech

Iceland Foods Finally Surrenders In Trademark Fight With Iceland, The Country

Published

on

from the who’s-the-moron-in-a-hurry-here dept

The ten-year war over the Iceland name is over, and Iceland, the country, has come out the victor.

If you don’t know what I’m talking about, be prepared to listen to a whole bunch of stupid. In 2016, we wrote about Iceland Foods, a UK grocer, which had somehow convinced the EU to give it a trademark for “Iceland” and which then went about bullying other companies and opposing trademarks for any that included the name of that country. One of the entities that Iceland Foods found itself in a trademark opposition with was Iceland, as in the country, when it attempted to trademark “Inspired by Iceland.” The Icelandic government didn’t take too kindly to that appropriation of its own name and petitioned to cancel the Iceland Foods trademark, which is exactly what happened. Rather than put an end to this absurdity, Iceland Foods appealed that decision, lost, then appealed it again, lost again, appealed a third time, only to lose there as well.

From there, Iceland Foods had but one final option for appealing all of these perfectly sane rulings, which would be to take this before the Court of Justice of the EU. And, while that would obviously be crazy, everything I’d seen to date led me to believe the grocer would do just that.

But sanity seems to finally be on the menu, I guess. Iceland Foods has publicly announced that it is ending the fight and surrendering.

Executive chairman Richard Walker revealed the supermarket would drop the legal dispute, which centres on the right to use the phrase Iceland in the EU, following its third legal loss in July 2025.

Iceland had one fourth and final route of appeal, via the Court of Justice of the European Union, but Walker told the Financial Times it would instead use the “couple of hundred grand” it would save in legal fees to give a “rapprochement discount” to Icelandic shoppers.

Yeah, that’s how this should have been approached from the jump, folks. And this actually goes back even further: a broad geographic mark consisting of the name of a sovereign nation never should have been granted to a private entity to begin with.

But that’s all over now. Iceland Foods’ trademark is invalidated. Iceland once more is free from being bullied over its own name, as would be other companies from the island nation. Iceland Foods can keep on operating as it always has, sans the ability to bully others with this ridiculous mark. Walker himself said as much, in a very frustrating manner.

“We lost for a third time. We’re going to throw in the towel,” Walker told the FT. “It’s actually fine — we don’t have to change our name.”

Exactly. You never had to. That was never in question. The only question is whether you got to keep your laughable trademark and bully others over it.

Instead, the grocer wasted everyone’s time, and who knows how much of its own money, trying to wage this silly war.

Filed Under: cjeu, iceland, iceland iceland iceland, trademark, uk

Companies: iceland foods

Source link


Tech

Reverse Engineering The PROM For The SGI O2

Published

on

The SGI O2 was SGI’s last-ditch attempt at a low-end MIPS-based workstation back in 1996, and correspondingly didn’t use the hottest parts of the time, nor did it offer much of an upgrade path. None of which is a concern to hobbyists who are more than happy to work around any hardware and software limitations to e.g. install much faster CPUs. While quite a few CPU upgrades were possible with just some BGA chip reworking skills, installing the 900 MHz RM7900 would require some PROM hacking, which [mattst88] recently took a crack at.

The initial work on upgrading SGI O2 systems was done in the early 2000s, with [Joe Page] and [Ian Mapleson] running into the issue that these higher frequency MIPS CPUs required a custom IP32 PROM image, for which they figured that they’d need either SGI’s help or do some tricky reverse-engineering. Since SGI is no longer around, [mattst88] decided to take up the torch.

After downloading a 512 kB binary dump of the last version of the O2’s PROM, he set to work reverse-engineering it, starting by disassembling the file. A big part of understanding MIPS PROM code is understanding how the MIPS architecture works, including its boot process, so much of what followed was a crash course on the subject.

With that knowledge it was much easier to properly direct the Capstone disassembler and begin the arduous process of making sense of the blob of data and code. The resulting source files now reassemble into bit-identical ROM files, which makes it likely that modifying it to support different CPUs is now possible with just a bit more work.
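The "bit-identical" claim is the kind of thing that can be checked mechanically: hash the original PROM dump and the reassembled binary, and compare digests. A minimal sketch (the file names are illustrative, not from the project):

```python
# Sketch of a bit-identical check: compare SHA-256 digests of the original
# PROM dump and the reassembled ROM image.
import hashlib
from pathlib import Path

def sha256_of(path):
    """Stream the file through SHA-256 so large dumps don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def bit_identical(original, rebuilt):
    """True when the rebuilt image matches the original dump byte for byte."""
    return sha256_of(original) == sha256_of(rebuilt)
```

A matching digest is what makes further modification safe: any divergence introduced while porting the source to new CPUs shows up immediately against the known-good baseline.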

For those who want to play along, [mattst88] has made his ip32prom-decompiler project available on GitHub.

Thanks to [adistuder] for the tip.


Top image: Silicon Graphics 1600SW LCD display and O2 workstation. (Source: Wikimedia)

Source link


Tech

Apple is adding a warning against AI music content

Published

on

Apple Music is introducing a new way to flag AI-generated music. However, it’s relying on the music industry itself to disclose it.

As reported by Music Business Worldwide, the streaming service has launched Transparency Tags, a new metadata system that allows record labels and distributors to mark when artificial intelligence has been used in different parts of a release.

The tags can be applied immediately, and will eventually become a requirement when partners deliver new content to the platform.

Rather than analysing songs itself, Apple is placing the responsibility on the supply chain. Labels and distributors will decide whether a track or release qualifies as AI-generated, and will apply the tags during the delivery process, similar to how genres or credits are currently submitted.

The system covers four areas of a release. Artwork tags flag when AI is used to create album artwork or other visuals. Track tags indicate that AI helped generate the sound recording itself. Composition tags apply when lyrics or other songwriting elements are created using AI. Meanwhile, Music Video tags identify AI-generated visuals tied to releases.
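The four tag areas could plausibly travel as metadata alongside a release, much like genres or credits. A hypothetical sketch of what distributor-side validation might look like; the field names are invented for illustration, since Apple's actual delivery schema isn't described in the article:

```python
# Hypothetical sketch of distributor-supplied AI transparency metadata,
# covering the four areas the article names. Field names are invented.
AI_TAG_AREAS = {"artwork", "track", "composition", "music_video"}

def validate_tags(release):
    """Check that every declared AI tag names one of the four known areas."""
    tags = release.get("ai_transparency_tags", [])
    unknown = [t for t in tags if t not in AI_TAG_AREAS]
    if unknown:
        raise ValueError(f"unknown tag area(s): {unknown}")
    return sorted(tags)

release = {
    "title": "Example Single",
    # Distributor self-reports: AI-generated cover art and AI-assisted lyrics.
    "ai_transparency_tags": ["artwork", "composition"],
}
```

Note that validation like this only checks the shape of the declaration, not its truth, which is exactly the self-reporting gap discussed below.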

Apple says the goal is to give the industry better visibility into how generative AI is being used in music production. In a note to industry partners, the company described the tags as a “first step” toward building clearer policies and best practices around AI-created content.

The approach stands in contrast to how some rivals are tackling the issue. Streaming platform Deezer, for example, has built its own AI detection system, which scans uploads automatically rather than relying on labels to self-report.

That difference matters given how quickly AI-generated music is growing. Deezer said earlier this year that it now receives more than 60,000 fully AI-generated tracks every day. Synthetic music now accounts for roughly 39% of all uploads to the platform.

The company also claims most of that content is tied to streaming fraud rather than artistic experimentation. According to Deezer, up to 85% of streams on AI-generated tracks were fraudulent in 2025. Those plays were removed from the royalty pool.

Apple’s Transparency Tags don’t currently include a visible enforcement mechanism or verification system, which means the accuracy of the labels will largely depend on the honesty of the distributors supplying the music.

For now, though, Apple’s move signals that AI disclosure is quickly becoming the next battleground for music streaming platforms.

Source link


Tech

UL and IMR to design Ireland’s first 3D-printed liquid rocket engine

Published

on

The partnership news comes with official acceptance into the prestigious UK-based Race2Space 2026 International Propulsion competition.

The University of Limerick (UL) Aeronautical Society High-Powered Rocketry Team (ULAS HiPR) has announced a partnership with UL and Irish Manufacturing Research (IMR) to design and produce the first additive manufactured (3D-printed) liquid rocket engine in the Republic of Ireland, called the Lúin of Celtchar.

The engine is a high-performance 2 kilonewton, water-cooled, IPA/nitrous oxide bi-propellant system, which has been designed entirely by the ULAS HiPR student team and is now being manufactured at IMR’s Advanced Manufacturing Lab in Mullingar using metal additive manufacturing. It will be returned to UL for precision machining and assembly. 

Established in 2022, ULAS HiPR has more than 100 members, bringing together students from a range of disciplines, such as aeronautical, mechanical, software and design engineering, all of whom have an interest in designing, manufacturing and launching powerful rockets.

The team has enjoyed some success, having represented Ireland internationally at prestigious competitions, including Mach-24 and Euroc, the European Rocketry Challenge. Alongside the announcement of the partnership, ULAS HiPR has also officially been accepted into the UK-based Race2Space 2026 International Propulsion competition.

This is, according to ULAS HiPR, “a major milestone in advancing Irish student-led space propulsion capabilities”.

Speaking on the announcement, Jay Looney, the co-head of ULAS HiPR, said: “The acceptance of our project to Race2Space marks a defining moment not only for ULAS HiPR, but for Ireland’s student space community. 

“The selection of the first additively manufactured liquid rocket engine in the Republic of Ireland into the competition validates the technical ambition of our student team, and the strength of collaboration between Irish university students with industry. It demonstrates that world-class propulsion innovation can now be designed, manufactured and tested entirely here in Ireland.”

Mark Hartnett, a design for manufacturing senior technologist at IMR, added: “At IMR, supporting ambitious student teams like ULAS HiPR reflects our commitment to strengthening Ireland’s advanced manufacturing ecosystem and enabling the next generation of aerospace innovators. 

“These are vital platforms for advancing cutting-edge technologies and building Ireland’s future engineering capability, and this ULAS HiPR propulsion project demonstrates how emerging technologies can move rapidly from concept to high-performance hardware.”

In late February, Silicon Republic attended the official launch of Ireland’s first European Space Agency Phi-Lab, which is headquartered at IMR in Mullingar and run in collaboration with the AMBER Centre at Trinity College Dublin.

One of 10 European Phi-Labs, it is designed to be Ireland’s national platform for space technology development and to anchor the country’s ambitions within Europe and the world’s rapidly expanding space economy.


Source link


Tech

Humanity Heating Planet Faster Than Ever Before, Study Finds

Published

on

An anonymous reader shares a report from The Guardian: Humanity is heating the planet faster than ever before, a study has found. Climate breakdown is occurring more rapidly, with the heating rate almost doubling, according to research that excludes the effect of natural factors behind the latest scorching temperatures. It found global heating accelerated from a steady rate of less than 0.2C per decade between 1970 and 2015 to about 0.35C per decade over the past 10 years. The rate is higher than scientists have seen since they started systematically taking the Earth’s temperature in 1880.

“If the warming rate of the past 10 years continues, it would lead to a long-term exceedance of the 1.5C (2.7F) limit of the Paris agreement before 2030,” said Stefan Rahmstorf, a scientist at the Potsdam Institute for Climate Impact Research and co-author of the study. […] The researchers applied a noise-reduction method to filter out the estimated effect of nonhuman factors in five major datasets that scientists have compiled to gauge the Earth’s temperature. In each of them, they found an acceleration in global heating emerged in 2013 or 2014. The findings have been published in the journal Geophysical Research Letters.
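The "almost doubling" figure, and the plausibility of Rahmstorf's pre-2030 warning, follow from simple arithmetic on the two rates quoted above. Note that the ~1.3 °C warming-to-date baseline below is an added assumption for illustration, not a figure from the article:

```python
# Back-of-envelope check on the study's numbers. The two rates are from the
# article; the warming-to-date baseline is an assumption, not from the article.
old_rate = 0.2    # °C per decade, 1970-2015
new_rate = 0.35   # °C per decade, past 10 years

ratio = new_rate / old_rate  # 1.75, i.e. "almost doubling"

# Hypothetical extrapolation: years from an assumed ~1.3 °C of current warming
# to the 1.5 °C Paris limit, at the new per-year rate.
years_to_1p5 = (1.5 - 1.3) / (new_rate / 10)  # under 6 years
```

Under that assumed baseline, the new rate would indeed carry warming past 1.5 °C before 2030, consistent with Rahmstorf's statement.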

Source link

