
Microsoft’s new AI training method eliminates bloated system prompts without sacrificing model performance

In building LLM applications, enterprises often have to create very long system prompts to adjust the model’s behavior for their applications. These prompts contain company knowledge, preferences, and application-specific instructions. At enterprise scale, these contexts can push inference latency past acceptable thresholds and drive per-query costs up significantly. 

On-Policy Context Distillation (OPCD), a new training framework proposed by researchers at Microsoft, helps bake the knowledge and preferences of applications directly into a model. OPCD uses the model’s own responses during training, which avoids some of the pitfalls of other training techniques. This improves the abilities of models for bespoke applications while preserving their general capabilities. 

Why long system prompts become a liability

In-context learning allows developers to update a model’s behavior at inference time without modifying its underlying parameters. Updating parameters is typically a slow and expensive process. However, in-context knowledge is transient. This knowledge does not carry across different conversations with the model, meaning you have to feed the model the exact same massive set of instructions or documents every time. For an enterprise application, this might mean repeatedly pasting company policies, customer tickets, or dense technical manuals into the prompt. This eventually slows down the model, drives up costs, and can confuse the system.

“Enterprises often use long system prompts to enforce safety constraints (e.g., hate speech detection) or to provide domain-specific expertise (e.g., medical knowledge),” said Tianzhu Ye, co-author of the paper and researcher at Microsoft Research Asia, in comments provided to VentureBeat. “However, lengthy prompts significantly increase computational overhead and latency at inference time.”

The main idea behind context distillation is to train a model to internalize the information that you repeatedly insert into the context. Like other distillation techniques, it follows a teacher-student paradigm. The teacher is an AI model that receives the massive, detailed prompt. Because it has all the instructions and reference documents, it generates highly tailored responses. The student is a model being trained that only sees the main question and doesn’t have access to the full context. Its goal is simply to observe the teacher’s responses and learn to mimic its behavior.

Through this training process, the student model effectively compresses the complex instructions from the teacher’s prompt directly into its parameters. For an enterprise, the primary value happens at inference time. Because the student model has internalized the context, you can deploy it in your application without needing to paste in the lengthy instructions again. This makes the model significantly faster to run, with far less computational overhead.
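The teacher-student mechanics can be sketched with toy next-token distributions standing in for real models (an illustrative reduction, not the paper's implementation): the student's distribution is pulled toward the teacher's by gradient descent on the standard forward-KL distillation objective.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def forward_kl(p_teacher, q_student):
    # D_KL(teacher || student): the objective of classic context distillation.
    return sum(p * math.log(p / q) for p, q in zip(p_teacher, q_student) if p > 0)

def distill_step(student_logits, p_teacher, lr=0.5):
    # For a softmax-parameterized student, the gradient of forward KL
    # with respect to each logit is simply (q_k - p_k).
    q = softmax(student_logits)
    return [z - lr * (qk - pk) for z, qk, pk in zip(student_logits, q, p_teacher)]

# The teacher, which sees the full system prompt, prefers token 0;
# the student starts out uniform and learns to mimic it.
p_teacher = [0.7, 0.2, 0.1]
student_logits = [0.0, 0.0, 0.0]
for _ in range(300):
    student_logits = distill_step(student_logits, p_teacher)
```

After training, the toy student reproduces the teacher's preferences without ever seeing the teacher's prompt, which is the effect context distillation aims for at the scale of full LLMs.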

However, classic context distillation relies on a flawed training method called “off-policy training,” where the model is trained on fixed datasets that were collected before the training process. This is problematic in several ways. During training, the student is only exposed to ground-truth data and teacher-generated answers, creating what Ye calls “exposure bias.” In production, the model must come up with its own token sequences to reach those answers. Because it never practiced making its own decisions or recovering from its own mistakes during training, it can easily derail when operating independently. It’s like showing a student videos of a professional driver and expecting them to learn driving without trial and error.

Another problem is the “forward Kullback-Leibler (KL) divergence” minimization measure used to train the model. Under this method, the model is graded on how similar its answers are to the teacher, which encourages “mode-covering” behavior, Ye says. The student model is often smaller or lacks the rich context the teacher had, meaning it simply lacks the capacity to perfectly replicate the teacher’s complex reasoning. Because the student is forced to try and cover all those possibilities anyway, its underlying guesses become overly broad and unfocused.

In real-world applications, this can result in hallucinations, where the AI gets confused and confidently makes things up because it is trying to mimic a depth of knowledge it does not actually possess. It also means that the model cannot generalize well to new tasks.

How OPCD fixes the teacher-student problem

To fix the critical issues with the old teacher-student dynamic, the Microsoft researchers introduced On-Policy Context Distillation (OPCD). The most important shift in OPCD is that the student model learns from its own generation trajectories as opposed to a static dataset (which is why it is called “on-policy”). Instead of passively studying a dataset of the teacher’s perfect outputs, the student is given a task without seeing the massive instruction prompt and has to generate an answer entirely on its own.

As the student generates its answer, the teacher acts as a live instructor. The teacher has access to the full, customized prompt and evaluates the student’s output. At every step along the student’s generation, the system compares the student’s token distribution against what the context-aware teacher would do.

OPCD uses “reverse KL divergence” to grade the student. “By minimizing reverse KL divergence, it promotes ‘mode-seeking’ behavior. It focuses on high-probability regions of the student’s distribution,” Ye said. “It suppresses tokens that the student considers unlikely, even if the teacher’s belief assigned them high probability. This alignment helps the student correct its own mistakes and avoid the broad, hallucinatory distributions of standard distillation.”
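The contrast between the two objectives can be checked numerically with toy distributions (the numbers are illustrative, not from the paper): against a bimodal teacher, forward KL rewards a blurred "covering" student, while reverse KL rewards one that commits to a single mode.

```python
import math

def kl(p, q):
    # D_KL(p || q) for discrete distributions over the same support.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A bimodal teacher; a limited-capacity student can only model one peak.
teacher = [0.49, 0.49, 0.02]
seeking = [0.96, 0.02, 0.02]   # commits to a single mode
covering = [0.34, 0.33, 0.33]  # spreads probability to cover everything

forward_seek, forward_cover = kl(teacher, seeking), kl(teacher, covering)
reverse_seek, reverse_cover = kl(seeking, teacher), kl(covering, teacher)
```

Forward KL scores the blurry covering student better, which is exactly the over-broad behavior Ye describes; reverse KL prefers the mode-seeking student.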

Because the student model actively practices making its own decisions and learns to correct its own mistakes during training, it behaves more reliably when deployed in a live application. It successfully bakes complex business rules, safety constraints, or specialized knowledge directly into its permanent memory.

What OPCD delivers: The benchmark results

The researchers tested OPCD in two key areas: experiential knowledge distillation and system prompt distillation. For experiential knowledge distillation, the researchers wanted to see if an LLM could learn from its own past successes and permanently adopt those lessons. They tested this on models of various sizes, using mathematical reasoning problems.

First, the model solved problems and was asked to write down general rules it learned from its successes. Then, using OPCD, the researchers baked those written lessons directly into the model’s parameters. The results showed that the models improved dramatically without needing the learned experience pasted into their prompts anymore. On complex math problems, an 8-billion-parameter model improved from a 75.0% baseline to 80.9%. And on the Frozen Lake navigation game, a small 1.7-billion-parameter model initially had a success rate of 6.3%; after OPCD baked in the learned experience, its accuracy jumped to 38.3%.

The second set of experiments focused on long system prompts. Enterprises often use massive system prompts to enforce strict behavioral guidelines, like maintaining a professional tone, ensuring medical accuracy, or filtering out toxic language. The researchers tested whether OPCD could permanently bake these dense behavioral rules into the models so they would not have to be sent with every single user query. Their experiments show that OPCD successfully internalized these complex rules and massively boosted performance. When testing a 3-billion-parameter Llama model on safety and toxicity classification, the base model scored 30.7%. After using OPCD to internalize the safety prompt, its accuracy spiked to 83.1%. On medical question answering, the same model improved from 59.4% to 76.3%.

One of the key challenges of fine-tuning models is catastrophic forgetting, where the model becomes so focused on the fine-tuning task that it gets worse at general tasks. The researchers tracked out-of-distribution performance to test for this tunnel vision. When they distilled strict safety rules into a model, they immediately tested its ability to answer unrelated medical questions. OPCD successfully maintained the model’s general medical knowledge, outperforming the old off-policy methods by approximately 4 percentage points. It specialized without losing its broader intelligence.

Where OPCD fits — and where it doesn’t

While OPCD is a powerful tool for internalizing static knowledge and complex rules, it does not replace all external context methods. “RAG is better when the required information is highly dynamic or involves a massive, frequently updated external database that cannot be compressed into model weights,” Ye said.

For enterprise teams evaluating their pipelines, adopting OPCD does not require overhauling existing systems or investing in specialized hardware. “OPCD can be integrated into existing workflows with very little friction,” Ye said. “Any team already running standard RLVR [Reinforcement Learning from Verifiable Rewards] pipelines can adopt OPCD without major architectural changes.”

In practice, the student model acts as the policy model performing rollouts, while the frozen teacher model serves as a reference providing logits. The hardware requirements are highly accessible. According to Ye, enterprise teams can reproduce the researchers’ experiments using about eight A100 GPUs.

The data requirements are similarly lightweight. For experiential knowledge distillation, developers only need around 30 seed examples to generate solution traces. Because the technique is applied to previously unoptimized environments, even a small amount of data yields the majority of the performance improvement. For system prompt distillation, existing optimized prompts and standard task datasets are sufficient.

The researchers built their own implementation on verl, an open-source RLVR codebase, proving that the technique fits cleanly within conventional reinforcement learning frameworks. They plan to release their implementation as open source following internal reviews.

The self-improving model: What comes next

Looking ahead, OPCD paves the way for genuinely self-improving models that continuously adapt to bespoke enterprise environments. Once deployed, a model can extract lessons from real-world interactions and use OPCD to progressively internalize those characteristics without requiring manual supervision or data annotation from model trainers.

“This represents a fundamental paradigm shift in model improvement: the core improvements to the model would move from training time to test time,” Ye said. “Using the model—and allowing it to gather experience—would become the primary driver of its advancement.”


Five signs data drift is already undermining your security models

Data drift happens when the statistical properties of a machine learning (ML) model’s input data change over time, eventually rendering its predictions less accurate. Cybersecurity professionals who rely on ML for tasks like malware detection and network threat analysis find that undetected data drift can create vulnerabilities. A model trained on old attack patterns may fail to see today’s sophisticated threats. Recognizing the early signs of data drift is the first step in maintaining reliable and efficient security systems.

Why data drift compromises security models

ML models are trained on a snapshot of historical data. When live data no longer resembles this snapshot, the model’s performance dwindles, creating a critical cybersecurity risk. A threat detection model may generate more false negatives by missing real breaches or create more false positives, leading to alert fatigue for security teams.

Adversaries actively exploit this weakness. In 2024, attackers used echo-spoofing techniques to bypass email protection services. By exploiting misconfigurations in the system, they sent millions of spoofed emails that evaded the vendor’s ML classifiers. This incident demonstrates how threat actors can manipulate input data to exploit blind spots. When a security model fails to adapt to shifting tactics, it becomes a liability.

5 indicators of data drift

Security professionals can recognize the presence of drift (or its potential) in several ways.

1. A sudden drop in model performance

Accuracy, precision, and recall are often the first casualties. A consistent decline in these key metrics is a red flag that the model is no longer in sync with the current threat landscape.

Consider Klarna’s success: Its AI assistant handled 2.3 million customer service conversations in its first month and performed work equivalent to 700 agents. This efficiency drove a 25% decline in repeat inquiries and reduced resolution times to under two minutes.

Now imagine if those parameters suddenly reversed because of drift. In a security context, a similar drop in performance does not just mean unhappy clients — it also means successful intrusions and potential data exfiltration.

2. Shifts in statistical distributions

Security teams should monitor the core statistical properties of input features, such as the mean, median, and standard deviation. A significant change in these metrics from training data could indicate the underlying data has changed.

Monitoring for such shifts enables teams to catch drift before it causes a breach. For example, a phishing detection model might be trained on emails with an average attachment size of 2MB. If the average attachment size suddenly jumps to 10MB due to a new malware-delivery method, the model may fail to classify these emails correctly.
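A minimal version of this monitor, using the attachment-size example (the values and the three-sigma threshold are illustrative):

```python
import statistics

def mean_shift_alert(train_vals, live_vals, z_threshold=3.0):
    # Flag drift when the live mean sits more than z_threshold training
    # standard deviations away from the training mean.
    mu = statistics.mean(train_vals)
    sigma = statistics.stdev(train_vals)
    shift = abs(statistics.mean(live_vals) - mu) / sigma
    return shift > z_threshold

# Hypothetical email attachment sizes in MB.
train_sizes = [1.8, 2.1, 2.0, 1.9, 2.2, 2.0, 1.7, 2.3]   # ~2MB average
live_sizes = [9.5, 10.2, 11.0, 9.8, 10.4]                # new ~10MB pattern
```

A production version would track the median and standard deviation too, but the principle is the same: compare live summary statistics against the training snapshot.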

3. Changes in prediction behavior

Even if overall accuracy seems stable, distributions of predictions might change, a phenomenon often referred to as prediction drift.

For instance, if a fraud detection model historically flagged 1% of transactions as suspicious but suddenly starts flagging 5% or 0.1%, either something has shifted or the nature of the input data has changed. It might indicate a new type of attack that confuses the model or a change in legitimate user behavior that the model was not trained to identify.
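One lightweight way to catch this is to alarm whenever the live flag rate leaves a tolerance band around the historical rate (the 3x band here is an assumption to tune per model):

```python
def prediction_drift(baseline_rate, flagged, total, tolerance=3.0):
    # Alert when the live flag rate deviates from the historical rate
    # by more than `tolerance`x in either direction.
    live_rate = flagged / total
    return live_rate > baseline_rate * tolerance or live_rate < baseline_rate / tolerance
```

For the fraud example above: a model that historically flagged 1% of transactions would trip this check when flagging 5% (500 of 10,000) but not at 1.2%.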

4. An increase in model uncertainty

For models that provide a confidence score or probability with their predictions, a general decrease in confidence can be a subtle sign of drift.

Recent studies highlight the value of uncertainty quantification in detecting adversarial attacks. If the model becomes less sure about its predictions across the board, it is likely facing data it was not trained on. In a cybersecurity setting, this uncertainty is an early sign of potential model failure, suggesting the model is operating in unfamiliar territory and that its decisions might no longer be reliable.
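A minimal sketch of such a monitor (the 0.10 drop threshold and the scores are illustrative):

```python
import statistics

def confidence_dropped(baseline_conf, live_conf, max_drop=0.10):
    # Flag drift when mean prediction confidence falls more than
    # max_drop below the level measured on validation data.
    return statistics.mean(baseline_conf) - statistics.mean(live_conf) > max_drop

# Hypothetical confidence scores from a classifier.
baseline = [0.93, 0.91, 0.95, 0.92, 0.94]   # validation-time confidence
live = [0.71, 0.66, 0.74, 0.69, 0.70]       # noticeably less certain
```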

5. Changes in feature relationships

The correlation between different input features can also change over time. In a network intrusion model, traffic volume and packet size might be highly linked during normal operations. If that correlation disappears, it can signal a change in network behavior that the model may not understand. A sudden feature decoupling could indicate a new tunneling tactic or a stealthy exfiltration attempt.

Approaches to detecting and mitigating data drift

Common detection methods include the Kolmogorov-Smirnov (KS) test and the population stability index (PSI). These compare the distributions of live and training data to identify deviations. The KS test determines if two datasets differ significantly, while the PSI measures how much a variable’s distribution has shifted over time.
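The KS test is available off the shelf (for example, `scipy.stats.ks_2samp`), while the PSI is simple enough to implement directly. A sketch using equal-width bins (the bin count and the usual "<0.1 stable / >0.25 major shift" thresholds are conventions, not requirements):

```python
import math

def psi(expected, actual, bins=10, eps=1e-4):
    # Population Stability Index between a training sample ("expected")
    # and a live sample ("actual"), binned over their combined range.
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def bin_fracs(sample):
        counts = [0] * bins
        for v in sample:
            counts[min(int((v - lo) / width), bins - 1)] += 1
        # Floor at eps so the log is defined for empty bins.
        return [max(c / len(sample), eps) for c in counts]

    p, q = bin_fracs(expected), bin_fracs(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Identical distributions yield a PSI near zero; a shifted one scores high.
stable = psi(list(range(100)), list(range(100)))
shifted = psi(list(range(100)), list(range(50, 150)))
```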

The mitigation method of choice often depends on how the drift manifests, as distribution changes may occur suddenly. For example, customers’ buying behavior may change overnight with the launch of a new product or a promotion. In other cases, drift may occur gradually over a more extended period. That said, security teams must learn to adjust their monitoring cadence to capture both rapid spikes and slow burns. Mitigation will involve retraining the model on more recent data to reclaim its effectiveness.

Proactively manage drift for stronger security

Data drift is an inevitable reality, and cybersecurity teams can maintain a strong security posture by treating detection as a continuous and automated process. Proactive monitoring and model retraining are fundamental practices to ensure ML systems remain reliable allies against developing threats.

Zac Amos is the Features Editor at ReHack.

Welcome to the VentureBeat community!

Our guest posting program is where technical experts share insights and provide neutral, non-vested deep dives on AI, data infrastructure, cybersecurity and other cutting-edge technologies shaping the future of enterprise.



ESPN on Disney Plus Is Expanding to More Countries

More people will be able to watch ESPN programming through Disney Plus with Tuesday’s launch of ESPN on Disney Plus in Europe and select Asia-Pacific markets. 

With expansion into more than 50 countries and territories in those regions, people in 100 markets worldwide can now stream ESPN content through Disney Plus, according to a Disney Plus news release. The offering brings live sporting events and studio shows together with general entertainment and family programming in a single app.

In markets including Japan, Korea, Singapore, Taiwan and Hong Kong, a curated selection of English‑language ESPN sports programming is now available on Disney Plus, according to the release. Disney Plus also said, “the initial [ESPN on Disney Plus] offering will vary by market but will grow to thousands of live events over the next year.” 

Programming includes US coverage of the NBA and NHL starting with the 2026-27 season, college sports and more live events. Disney Plus subscribers can watch ESPN’s 30 for 30 documentary collection and select studio shows.

Pre-existing sports content on Disney Plus in Europe includes the UEFA Women’s Champions League, La Liga in the UK and Ireland and the Copa del Rey, UEFA Europa League, UEFA Conference League and DFB Pokal in the Nordic countries, according to Disney Plus.

People in Europe and select Asia-Pacific markets just need a Disney Plus subscription to watch ESPN content on Disney Plus. In the US, Disney Plus standalone subscribers can access a curated selection of live sports events, studio shows, and ESPN films, but must subscribe to Disney Plus and ESPN Unlimited to watch all available ESPN programming on the platform.

The ESPN on Disney Plus offering is also available to people in Latin America, the Caribbean, Australia and New Zealand.


Amazon’s Fire TVs risk being left in the doldrums by Hisense and TCL’s Mini LEDs

I’ve reviewed a few Amazon Fire TV Series models over the last few years, and generally, I’ve found them to be solid enough TVs.

I’ve always had the suspicion that they could be better for picture quality, and certainly a little less expensive, but then when Amazon’s sales event comes around, the TVs fall to prices that are verging on impulse buy if you want a cheap TV.

I don’t think you could say the same about Amazon’s TVs now.

Having reviewed the newest Fire TV 4-Series, I found it underwhelming. The problems were multiple. For one, it didn’t seem to be a big enough upgrade on the previous generation, at least from a performance perspective.


Secondly, the competition has heated up, or to be more exact, they’ve got cheaper. Hisense and TCL’s Mini LEDs can now be had for around the same price, if not less than, Amazon’s Direct LED TVs.


The less expensive Fire TVs are no longer the value-led proposition they were a few years ago. And by undercutting Amazon’s own QLED and Mini LED models, the more expensive Fire TVs could be in trouble too.

An aggressive expansion…

Hisense 65U7Q Pro TV (Image credit: Trusted Reviews)

Hisense’s approach to the UK TV market has been a gradual one, offering value-focused TVs similar to Amazon’s Fire TVs while adding premium-priced TVs over time. It’s not interested in OLED (though it does offer an OLED model) as it sees no point in competing with LG and Samsung when the playing field is heavily weighted in their favour. Instead, it wants to make its mark with Mini LEDs.

TCL entered the UK market later than Hisense and has been playing catch-up. Its aggressive pricing to gain market share has rather unbalanced the market – and it’s working. From bits of data I’ve seen here and there, its share of the market is on an upward trend whereas other, more established players have stagnated or declined in the last few years.

Both have made the play for Mini LED, bringing sizeable brightness, wide-ranging colours and more precise backlighting for black levels and contrast down to a price that some other TV manufacturers might baulk at.

Right now you can get a Hisense 55-inch U7Q for £599, and a TCL 55-inch C6KS for £426. The 55-inch Fire TV 4-Series is down to £339, but you can see that there’s less room for manoeuvre with Mini LED prices coming down.

Amazon needs to refocus on performance

Amazon Fire TV 4-Series (Image credit: Trusted Reviews)

I think overall that Amazon’s Fire TVs can be considered a solid proposition, but they do need to offer better performance.

The focus has been on value, but with a TCL Mini LED hitting nearly 1000 nits of brightness against a budget Fire TV 4-Series that can only do 350 nits, there’s a chasm, and it’s only going to grow bigger over subsequent years. Amazon needs to pull its finger out.

Amazon was the brand that was undercutting the likes of Sony, Panasonic and LG, but that’s now changed with the rise of the Chinese brands. Moreover, the best Fire TVs are no longer made by Amazon but by its partners.

Fire TVs made by JVC were the epitome of bang average, while the likes of Toshiba offered an even cheaper alternative, but Panasonic made better-performing Fire TVs. Beyond the pricing threat from TCL and Hisense, there’s a risk that Amazon’s own TVs get left behind by other brands. Imagine a world where Amazon’s TVs were neither the best value nor the best performing. Would you buy one if they fulfilled neither promise?

I don’t doubt that they’re selling well enough at the moment, so this acts as more of a warning, but Amazon’s Fire TVs need a revamp, especially from a performance perspective, because right now it feels as if its TVs are retreading old ground rather than moving forward.

The playing field has altered quite significantly in the last few years and as I wrote in my review for the Fire TV 4-Series, if you’re standing still and others are moving past you, then you might as well be going backwards.


OpenAI says Elon Musk is orchestrating a last-minute ‘legal ambush’ before trial

The feud between Elon Musk and OpenAI is getting even more contentious as the two sides get ready for trial later this month. In the latest development in the legal back-and-forth, OpenAI described Musk’s latest proposals as a “legal ambush,” as first reported by Bloomberg. OpenAI filed its response on Friday, arguing that Musk was “sandbagging the defendants and injecting chaos into the proceedings, while trying to recast his public narrative about his lawsuit.”

The lawsuit dates back to 2024, when Elon Musk sued both OpenAI and Microsoft, accusing the AI giant of ditching its original mission of being a non-profit and converting into a for-profit business after receiving financial backing and forming a partnership with Microsoft. Prior to OpenAI’s latest filing, Musk amended his original complaint to award any damages received to OpenAI’s nonprofit arm instead. Musk’s amendment, which was filed earlier this month, also sought to oust Sam Altman from his role as OpenAI’s CEO and board member. In OpenAI’s Friday filing, the AI company claimed that Musk’s last-minute changes were “legally improper and factually unsupported.”

There’s a lot at stake with this lawsuit since Musk is reportedly seeking anywhere between $79 billion and $134 billion in “wrongful gains.” With both OpenAI and Microsoft denying any wrongdoing, according to Bloomberg, the trial is still set to kick off on April 27.


‘Euphoria’ Season 3: How to Watch the Premiere Episode

It may be hard to believe that Euphoria’s last season wrapped up in 2022 (at least for me and my TikTok “For You” page, where I still see 4-year-old clips on the regular). The HBO drama will soon premiere its third and possibly final season.

Season 3 takes place five years after season 2 (see our finale recap here), well after high school. The new season once again stars Zendaya, Hunter Schafer, Jacob Elordi, Sydney Sweeney, Alexa Demie, Maude Apatow, Colman Domingo and Eric Dane. It adds new guest stars such as Sharon Stone, Rosalía, Danielle Deadwyler, Natasha Lyonne and Trisha Paytas. According to an official synopsis, season 3 sees “a group of childhood friends wrestle with the virtue of faith, the possibility of redemption and the problem of evil.”

While the service has swapped from HBO Max to Max and back to HBO Max again in the time it’s taken for Euphoria to return to TV, you’ll be able to tune into the HBO streaming service for new episodes each week. Here’s a release schedule for Euphoria season 3.

When to watch Euphoria season 3 on HBO Max

In the US? You can stream the Euphoria season 3 premiere on HBO Max on Sunday, April 12, at 9 p.m. ET (6 p.m. PT). It’ll also air on HBO at 9 p.m. ET and PT. Subsequent installments will debut on Sundays through May 31.

  • Episode 1, Ándale: April 12
  • Episode 2, America My Dream: April 19
  • Episode 3, The Ballad of Paladin: April 26
  • Episode 4, Kitty Likes to Dance: May 3
  • Episode 5, This Little Piggy: May 10
  • Episode 6, Stand Still and See: May 17
  • Episode 7, Rain or Shine: May 24
  • Episode 8, In God We Trust: May 31

HBO Max last increased its plan prices in October, raising the ad-supported tier to $11 per month, the ad-free Standard tier to $18.50 per month and the ad-free Premium tier to $23 per month.

You might be able to save money by paying upfront for 12 months of HBO Max, which costs less than paying month-by-month for a year. In addition to HBO Max’s standalone plans, you can bundle it with Disney Plus and Hulu, either with ads for all three services or without.


The biopharma senior associate whose career was fuelled by FUEL

Amgen’s Luke Sheppard discusses Ireland’s biopharma space and how his career trajectory was powered by graduate opportunities.

“I was always interested in science at school, especially biology and physics. The turning point came when I spent two summers working with a mechanical engineer on the construction of a biopharmaceutical facility,” said Luke Sheppard, a senior associate for syringe manufacturing at Amgen.

“Seeing the facility take shape helped me to connect what I was learning in the classroom with the industry in real life. That experience ignited my passion and led me to study biotechnology at DCU.” 

He completed an internship with Amgen during his undergraduate studies and moved on to Amgen’s FUEL graduate programme. He said, “Alongside this, I completed a master’s in pharma and biopharma engineering at UCC, which ties in closely with the work I do now.”

Can you describe Ireland’s biopharmaceutical space?

Ireland’s biopharmaceutical sector is dynamic and well-established. It is recognised as a centre of excellence for manufacturing. The sector is also highly connected, with a healthy sense of competition and a strong shared awareness of best practice. For anyone with a STEM background, it is an attractive industry because it offers real depth in the work as well as a wide range of potential career paths.

What is your day-to-day like if there is such a thing?

My role is quite diverse. My time is split between supporting and driving operations, contributing to projects and seeking solutions. Part of the day can involve reviewing data or meeting leadership to discuss strategy. Equally, I could be troubleshooting an issue on the production floor. The variety keeps things interesting. Collaboration is a big part of the job. You are constantly working with specialists and moving things forward together to achieve the same goal. 

What skills do you utilise in your role and are any unexpected?

Technical knowledge is extremely important, but the skill that matters most is the ability to work as part of a team and to support colleagues. Clear, concise communication, relationship‑building and dedication take centre stage. There will always be new systems to learn, processes to improve and tools to adopt, but real progress ultimately depends on how well you work with others and how quickly you can build trust. The stronger your working relationships, the easier it is to ask questions, gain input and work efficiently when challenges arise. In a manufacturing environment, strong relationships truly make the difference.

You moved through the ranks via the FUEL programme, how was the experience?

The Amgen FUEL programme was an incredible experience as it gave me exposure to the highest levels of the business early on in my career. I completed three rotations across process development, quality assurance and utilities engineering. Each rotation lasted eight to nine months. In a relatively short time, I had to integrate into new teams, build relationships fast and learn new processes to contribute to meaningful work. Rotations teach resilience and determination, as well as creating visibility for participants. I had the opportunity to present my work to senior site and European leaders, which accelerated my learning and professional development. The programme has allowed me to gain a strong understanding of operations and an insight into decisive leadership on the issues that matter most to our industry.

How can mentorship and internship opportunities positively impact a young person’s career in the long-term?

Mentorships and internships can have a long-lasting, positive impact. An internship allows graduates to experience the pace, teamwork and problem-solving involved in a working environment, which is difficult to replicate in a classroom. It can also help you understand what type of work suits you best. Mentorship adds another dimension, providing early-stage professionals with a broader perspective of industry and career development. Mentors can offer guidance, challenge thinking, and help you to spot career development opportunities that you may otherwise overlook. Over time, this support can make a meaningful difference in shaping long‑term career direction.

What do you enjoy most about your role?

I thrive on the commitment, resilience and integrity my team brings to the issues that matter most. I enjoy the variety of problem-solving, teamwork and planning needed to keep multiple priorities on track. I have grown personally and professionally by advancing my technical and analytical capabilities, and I have significantly broadened my range of soft skills.

Have you any predictions for how the biopharma space might evolve in 2026?

I expect regulation, automation and AI to shape the industry’s trajectory over the coming years. There is greater regulatory focus on reducing human interaction in manufacturing processes and tightening controls around unit operations. AI will play an increasingly central role, supporting research and process optimisation. By analysing real-time data effectively, AI will identify anomalies and patterns, helping production-line teams to work more efficiently.


Asus ROG Kithara review: Asus goes hi-fi with its audiophile headset


Asus ROG Kithara: one-minute review

There are a number of gaming headsets available that support high-res audio, such as the SteelSeries Arctis Nova Elite, but the new Asus ROG Kithara is one of the first we’ve seen that really takes the plunge into the challenging waters of the specialist hi-fi market.


Critical Marimo pre-auth RCE flaw now under active exploitation



Hackers started exploiting a critical vulnerability in the Marimo open-source reactive Python notebook platform just 10 hours after its public disclosure.

The flaw allows remote code execution without authentication in Marimo versions 0.20.4 and earlier. It is tracked as CVE-2026-39987, and GitHub assessed it as critical with a score of 9.3 out of 10.

According to researchers at cloud-security company Sysdig, attackers created an exploit from the information in the developer’s advisory and immediately started using it in attacks that exfiltrated sensitive information.


Marimo is an open-source Python notebook environment, typically used by data scientists, ML/AI practitioners, researchers, and developers building data apps or dashboards. It is a fairly popular project, with 20,000 GitHub stars and 1,000 forks.

CVE-2026-39987 is caused by the WebSocket endpoint ‘/terminal/ws’ exposing an interactive terminal without proper authentication checks, allowing connections from any unauthenticated client.


This gives direct access to a full interactive shell, running with the same privileges as the Marimo process.

Marimo disclosed the flaw on April 8 and yesterday released version 0.23.0 to address it. The developers noted that the flaw affects users who deployed Marimo as an editable notebook, and those who expose Marimo to a shared network using --host 0.0.0.0 while in edit mode.

Exploitation in the wild

Within the first 12 hours after the vulnerability details were disclosed, 125 IP addresses began reconnaissance activity, according to Sysdig.

Less than 10 hours after the disclosure, the researchers observed the first exploitation attempt in a credential theft operation.


The attacker first validated the vulnerability by connecting to the /terminal/ws endpoint and executing a short scripted sequence to confirm remote command execution, disconnecting within seconds.

Shortly after, they reconnected and began manual reconnaissance, issuing basic commands such as pwd, whoami, and ls to understand the environment, followed by directory navigation attempts and checks for SSH-related locations.

Next, the attacker focused on credential harvesting, immediately targeting the .env file and extracting environment variables, including cloud credentials and application secrets. They then attempted to read additional files in the working directory and continued probing for SSH keys.
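When responding to this kind of theft, a defender's first step is an inventory of which secrets were exposed so each one can be rotated. The following is a minimal illustrative sketch (not from the Sysdig report) that lists the variable names found in a .env-style file without ever printing their values; the sample content and parsing assumptions (KEY=VALUE lines, '#' comments, optional 'export' prefixes) are hypothetical.

```python
# Illustrative sketch: enumerate the variable names in .env-style content
# so each exposed secret can be rotated. Values are never printed.
# Assumes simple KEY=VALUE lines with optional '#' comments and
# optional 'export ' prefixes -- adapt to your own file format.

def env_keys(text: str) -> list[str]:
    """Return variable names found in .env-style content, in order."""
    keys = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        name = line.split("=", 1)[0].strip()
        if name.startswith("export "):
            name = name[len("export "):].strip()
        if name:
            keys.append(name)
    return keys

sample = "# app secrets\nAWS_SECRET_ACCESS_KEY=abc\nexport DB_PASSWORD=hunter2\n"
print(env_keys(sample))  # ['AWS_SECRET_ACCESS_KEY', 'DB_PASSWORD']
```

Every name the sketch returns should be treated as compromised and rotated, regardless of whether exfiltration of that particular value was confirmed.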

Screenshot of the credential-stealing session (Source: Sysdig)

The entire credential access phase was completed in less than three minutes, notes a Sysdig report this week.

Roughly an hour later, the attacker returned for a second exploitation session using the same exploit sequence.


The researchers say that behind the attack appears to be a “methodical operator” with a hands-on approach, rather than automated scripts, focusing on high-value objectives such as stealing .env credentials and SSH keys.

The attackers did not attempt to establish persistence, deploy cryptominers, or plant backdoors, suggesting a quick, stealthy operation.

Marimo users are advised to upgrade to version 0.23.0 immediately, monitor WebSocket connections to ‘/terminal/ws,’ restrict external access via a firewall, and rotate all exposed secrets.
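One way to act on the monitoring advice is to sweep existing web-server access logs for requests that touched the vulnerable endpoint. Below is a hedged sketch: the log lines, the regex, and the helper name are assumptions based on the common Apache/nginx combined log format, not anything published in the advisory, so adapt the parsing to whatever proxy or server fronts your deployment.

```python
# Illustrative sketch: flag access-log lines that requested Marimo's
# '/terminal/ws' endpoint. Assumes combined-format logs where the client
# IP comes first and the request line sits in double quotes.
import re

# Capture the client IP and the request path from a combined-format line.
LOG_RE = re.compile(r'^(\S+) .*"(?:GET|POST) (/\S*)')

def terminal_ws_clients(lines):
    """Return the set of client IPs that requested /terminal/ws."""
    hits = set()
    for line in lines:
        m = LOG_RE.match(line)
        if m and m.group(2).startswith("/terminal/ws"):
            hits.add(m.group(1))
    return hits

logs = [
    '203.0.113.5 - - [08/Apr/2026] "GET /terminal/ws HTTP/1.1" 101 0',
    '198.51.100.7 - - [08/Apr/2026] "GET /index.html HTTP/1.1" 200 512',
]
print(sorted(terminal_ws_clients(logs)))  # ['203.0.113.5']
```

Any IP the helper surfaces warrants the full incident response described above, since a single successful WebSocket upgrade on that endpoint was enough for shell access.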

If upgrading is not possible, an effective mitigation is to block or disable access to the ‘/terminal/ws’ endpoint entirely.



Week in Review: Most popular stories on GeekWire for the week of April 5, 2026


Get caught up on the latest technology and startup news from the past week. Here are the most popular stories on GeekWire for the week of April 5, 2026.

Sign up to receive these updates every Sunday in your inbox by subscribing to our GeekWire Weekly email newsletter.

Most popular stories on GeekWire


You Asked: Sony’s big move has fans worried, plus anti-glare in a dark room


On today’s episode of You Asked: Sony’s new Bravia partnership with TCL raises big questions about pricing, quality, and data privacy. We break down what it means, whether a new QD-OLED is coming this year, and how anti-glare screens really perform in a dark room.

Sony and the new Bravia Inc

@charltonium4083 asks: Here’s one concern that isn’t discussed in the video or any of the comments: Which country will have primary jurisdiction over the new Bravia Inc? Will it be China (TCL), or Japan (Sony)? Back in 2020, Homeland Security discovered that TCL may be directly sponsored by the CCP and that the TVs have backdoors that allow data to be breached by the government (thus allowing it to spy on customers). This has also been a problem with other companies like TikTok and DJI, although a bit more publicized with them, to the point where the USA has repeatedly threatened to ban all DJI products. If TCL owns 51% of the new Bravia Inc, particularly on the manufacturing and business side, does that mean that it also has all of the customers’ data, and that the CCP could have more ability to spy on customers through the new Bravia TVs going forward? I’d be far less concerned if the customer data was actually handled by Sony (under Japan’s jurisdiction).

OK, quite a loaded question there with some implicit bias, to say the least. But we’re going to get into all of it.

First, Bravia Inc will be based in Tokyo, Japan, within Sony’s headquarters. So that’s where the business will be. Manufacturing is likely to take place where TCL has its largest facilities, such as China, Mexico, and Vietnam. One of TCL’s biggest advantages is large-scale production facilities that keep efficiency high and prices low.

As for your spying concerns, you might be surprised to know that just last month, March 2026, a Texas judge dismissed a lawsuit from the Texas Attorney General accusing TCL of tracking user habits without consent and selling that data to advertisers. So while our internet privacy remains an ongoing concern, TCL and Sony probably shouldn’t be a major concern. Personally, I’m more concerned about Meta, Google, Amazon, and hundreds of phone apps that have more access than a smart TV.


Either way, be sure to practice safe internet use. Read the user agreements when you register. Understand where your data is going, who it can be sold to, and how to limit what is tracking you with VPNs, ad blockers, and other tools.

Manufacturing and pricing strategy

@theGovnr1 asks: To me, it seems the new products will have the Sony technology and design but be manufactured by TCL.

And that’s my take as well. I think the goal is for manufacturing to become less expensive. There are several outstanding Bravia-branded TVs on the market, and most would tell you their picture quality is best in class. But if I’m not mistaken, they fall behind Samsung, LG, TCL, and Hisense in overall sales, likely due to price. So if having TCL handle manufacturing lowers the price while maintaining the image processing technology that makes Sony what it is, that’s a win.

Time will tell, and until the day comes when we have a TCL-manufactured Bravia TV to test, there’s really not much anyone can do to change minds. Based on comments, many of you have clearly decided that this is not for the better and the Bravia brand is doomed. Hopefully, you’re wrong, because then we can all get Sony-level TVs for less.

Sony OLED lineup outlook

@1.doubleyou asks: Will there be a new QD-OLED TV from Sony this year?

I’m leaning toward no, for a couple of reasons. One, they’re pouring a ton of resources and marketing into the release of their True RGB Mini LED TV. And two, they’ve been staggering their big TV updates every other year.


In 2023, we got the A95L QD-OLED. In 2024, we got the Bravia 9, their flagship Mini LED TV. Then in 2025, the Bravia 8 Mark II became the successor to the A95L in the QD-OLED department. And this year, probably sooner than later, we’ll have more details on this True RGB TV that will take over the flagship Mini LED role from the Bravia 9.

Not to mention, with the TCL merger, there may need to be some adjustments in how Sony’s OLEDs are manufactured before we get a new one.

Do anti-glare TVs fail in dark rooms?

@CoolVibe-w5f has a Samsung question in reference to their anti-glare screens, asking: How do the blacks look in a dark room compared to a glossy screen? From what I’ve read, the blacks are not quite 100 percent, especially next to a glossy screen.

A wise person once said: you can’t believe everything you read on the internet. What I’ve seen, take it or leave it, is very little to no difference in a dark room. If the only light in the room is coming from the TV, you will see pure black. I’m confident in that, and clearly Samsung is as well, as it continues to expand that anti-glare panel into more TVs.

This year, it’s in the S95H as well as the S90H. Previous S90 models still had the glossy screen. The anti-glare panel is featured in several Mini LED TVs as well.


I don’t think they’d keep going all in on the technology if they weren’t sure it was delivering a viewing experience on par with the best from Sony and LG. We did a video a while ago putting the Samsung S95D next to LG’s flagship OLED in a dark room to show the difference. And I’ve seen others put their 2025 models, the S95F and S90F, side by side, and it’s very difficult to see a difference, if you can see one at all.

