
Tech

OpenAI says Elon Musk is orchestrating a last-minute ‘legal ambush’ before trial

Published

on

The feud between Elon Musk and OpenAI is getting even more contentious as the two sides get ready for trial later this month. In the latest development in the legal back-and-forth, OpenAI accused Elon Musk of staging a “legal ambush” with his latest proposals, as first reported by Bloomberg. OpenAI filed its response on Friday, arguing that Musk was “sandbagging the defendants and injecting chaos into the proceedings, while trying to recast his public narrative about his lawsuit.”

The lawsuit dates back to 2024, when Elon Musk sued both OpenAI and Microsoft, accusing the AI giant of ditching its original mission as a non-profit and converting into a for-profit business after receiving financial backing from, and forming a partnership with, Microsoft. Prior to OpenAI’s latest filing, Musk amended his original complaint to instead award any damages received to OpenAI’s nonprofit arm. Musk’s amendment, which was filed earlier this month, also sought to oust Sam Altman from his role as OpenAI’s CEO and board member. In OpenAI’s Friday filing, the AI company claimed that Musk’s last-minute changes were “legally improper and factually unsupported.”

There’s a lot at stake with this lawsuit since Musk is reportedly seeking anywhere between $79 billion and $134 billion in “wrongful gains.” With both OpenAI and Microsoft denying any wrongdoing, according to Bloomberg, the trial is still set to kick off on April 27.


Tech

New FCC router rules could trap millions using outdated ISP hardware as supply chain limits stall upgrades and complicate security fixes



  • FCC rules block new foreign routers while old, vulnerable ones stay in homes longer
  • ISP customers cannot upgrade routers even when security risks become widely known
  • Router approvals now depend on waivers that may slow down nationwide replacements

The Federal Communications Commission (FCC) has issued new rules intended to address security risks posed by routers produced outside the United States.

A number of recent incidents have shown foreign routers are vulnerable to cyberattacks, with campaigns like Flax, Volt, and Salt Typhoon making headlines across the world.


Tech

Your developers are already running AI locally: Why on-device inference is the CISO’s new blind spot


For the last 18 months, the CISO playbook for generative AI has been relatively simple: Control the browser.

Security teams tightened cloud access security broker (CASB) policies, blocked or monitored traffic to well-known AI endpoints, and routed usage through sanctioned gateways. The operating model was clear: If sensitive data leaves the network for an external API call, we can observe it, log it, and stop it. But that model is starting to break.

A quiet hardware shift is pushing large language model (LLM) usage off the network and onto the endpoint. Call it Shadow AI 2.0, or the “bring your own model” (BYOM) era: Employees running capable models locally on laptops, offline, with no API calls and no obvious network signature. The governance conversation is still framed as “data exfiltration to the cloud,” but the more immediate enterprise risk is increasingly “unvetted inference inside the device.”

When inference happens locally, traditional data loss prevention (DLP) doesn’t see the interaction. And when security can’t see it, it can’t manage it.

Why local inference is suddenly practical

Two years ago, running a useful LLM on a work laptop was a niche stunt. Today, it’s routine for technical teams.

Three things converged:

  • Consumer-grade accelerators got serious: A MacBook Pro with 64GB unified memory can often run quantized 70B-class models at usable speeds (with practical limits on context length). What once required multi-GPU servers is now feasible on a high-end laptop for many real workflows.

  • Quantization went mainstream: It’s now easy to compress models into smaller, faster formats that fit within laptop memory, often with acceptable quality tradeoffs for many tasks.

  • Distribution is frictionless: Open-weight models are a single command away, and the tooling ecosystem makes “download → run → chat” trivial.

The result: An engineer can pull down a multi‑GB model artifact, turn off Wi‑Fi, and run sensitive workflows locally: source code review, document summarization, drafting customer communications, even exploratory analysis over regulated datasets. No outbound packets, no proxy logs, no cloud audit trail.

From a network-security perspective, that activity can look indistinguishable from “nothing happened”.

The risk isn’t only data leaving the company anymore

If the data isn’t leaving the laptop, why should a CISO care?

Because the dominant risks shift from exfiltration to integrity, provenance, and compliance. In practice, local inference creates three classes of blind spots that most enterprises have not operationalized.

1. Code and decision contamination (integrity risk)

Local models are often adopted because they’re fast, private, and “no approval required.” The downside is that they’re frequently unvetted for the enterprise environment.

A common scenario: A senior developer downloads a community-tuned coding model because it benchmarks well. They paste in internal auth logic, payment flows, or infrastructure scripts to “clean it up.” The model returns output that looks competent, compiles, and passes unit tests, but subtly degrades security posture (weak input validation, unsafe defaults, brittle concurrency changes, dependency choices that aren’t allowed internally). The engineer commits the change.

If that interaction happened offline, you may have no record that AI influenced the code path at all. And when you later do incident response, you’ll be investigating the symptom (a vulnerability) without visibility into a key cause (uncontrolled model usage).

2. Licensing and IP exposure (compliance risk)

Many high-performing models ship with licenses that include restrictions on commercial use, attribution requirements, field-of-use limits, or obligations that can be incompatible with proprietary product development. When employees run models locally, that usage can bypass the organization’s normal procurement and legal review process.

If a team uses a non-commercial model to generate production code, documentation, or product behavior, the company can inherit risk that shows up later during M&A diligence, customer security reviews, or litigation. The hard part is not just the license terms; it’s the lack of inventory and traceability. Without a governed model hub or usage record, you may not be able to prove what was used where.

3. Model supply chain exposure (provenance risk)

Local inference also changes the software supply chain problem. Endpoints begin accumulating large model artifacts and the toolchains around them: downloaders, converters, runtimes, plugins, UI shells, and Python packages.

There is a critical technical nuance here: The file format matters. While newer formats like Safetensors are designed to prevent arbitrary code execution, older Pickle-based PyTorch files can execute malicious payloads simply when loaded. If your developers are grabbing unvetted checkpoints from Hugging Face or other repositories, they aren’t just downloading data — they could be downloading an exploit.
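To make that nuance concrete, here is a minimal, best-effort sketch (an illustration, not a production artifact scanner) that inspects a checkpoint's leading bytes to guess its serialization format before anything attempts to load it:

```python
import json
import struct
import zipfile

def classify_checkpoint(path):
    """Best-effort guess at a model checkpoint's serialization format.

    Safetensors files start with an 8-byte little-endian header length
    followed by a JSON header; modern PyTorch .pt/.pth files are zip
    archives containing pickle data; bare pickles start with the 0x80
    protocol opcode. Pickle-based formats can run code on load.
    """
    with open(path, "rb") as f:
        head = f.read(8)
        if len(head) == 8:
            (header_len,) = struct.unpack("<Q", head)
            # Sanity-check: safetensors headers are modest JSON blobs.
            if 0 < header_len < 100_000_000:
                try:
                    json.loads(f.read(header_len))
                    return "safetensors"  # no code execution on load
                except (ValueError, UnicodeDecodeError):
                    pass
    if zipfile.is_zipfile(path):
        return "pytorch-zip (contains pickle; treat as executable)"
    with open(path, "rb") as f:
        if f.read(1) == b"\x80":
            return "bare pickle (treat as executable)"
    return "unknown"
```

A real pipeline would combine this with hash allowlists and sandboxed loading; format sniffing alone only tells you which files deserve the "treat as executable" handling.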

Security teams have spent decades learning to treat unknown executables as hostile. BYOM requires extending that mindset to model artifacts and the surrounding runtime stack. The biggest organizational gap today is that most companies have no equivalent of a software bill of materials for models: Provenance, hashes, allowed sources, scanning, and lifecycle management.

Mitigating BYOM: treat model weights like software artifacts

You can’t solve local inference by blocking URLs. You need endpoint-aware controls and a developer experience that makes the safe path the easy path.

Here are three practical ways:

1. Move governance down to the endpoint

Network DLP and CASB still matter for cloud usage, but they’re not sufficient for BYOM. Start treating local model usage as an endpoint governance problem by looking for specific signals:

  • Inventory and detection: Scan for high-fidelity indicators like .gguf files larger than 2GB, processes like llama.cpp or Ollama, and local listeners on common default ports such as 11434 (Ollama).

  • Process and runtime awareness: Monitor for repeated high GPU/NPU (neural processing unit) utilization from unapproved runtimes or unknown local inference servers.

  • Device policy: Use mobile device management (MDM) and endpoint detection and response (EDR) policies to control installation of unapproved runtimes and enforce baseline hardening on engineering devices.

The point isn’t to punish experimentation. It’s to regain visibility.
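As an illustration, the inventory and listener signals above can be approximated in a few lines of Python. The 2GB threshold and port 11434 come from the list above; the rest is a simplified sketch, not a replacement for EDR telemetry:

```python
import socket
from pathlib import Path

GGUF_MIN_BYTES = 2 * 1024**3  # flag .gguf artifacts over 2 GB
OLLAMA_PORT = 11434           # Ollama's default local listener

def find_large_gguf(root):
    """Walk a directory tree and collect large .gguf model artifacts."""
    hits = []
    for p in Path(root).rglob("*.gguf"):
        try:
            if p.stat().st_size >= GGUF_MIN_BYTES:
                hits.append(str(p))
        except OSError:
            continue  # unreadable file; skip rather than abort the scan
    return hits

def local_inference_listener(port=OLLAMA_PORT, timeout=0.25):
    """Return True if something accepts connections on the local port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex(("127.0.0.1", port)) == 0
```

In practice these checks would run from an endpoint agent on a schedule and feed an inventory, not be invoked ad hoc.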

2. Provide a paved road: An internal, curated model hub

Shadow AI is often an outcome of friction. Approved tools are too restrictive, too generic, or too slow to approve. A better approach is to offer a curated internal catalog that includes:

  • Approved models for common tasks (coding, summarization, classification)

  • Verified licenses and usage guidance

  • Pinned versions with hashes (prioritizing safer formats like Safetensors)

  • Clear documentation for safe local usage, including where sensitive data is and isn’t allowed.

If you want developers to stop scavenging, give them something better.
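Pinned versions with hashes are straightforward to enforce. In the sketch below, the catalog and model name are hypothetical, and the pinned digest is simply the SHA-256 of an empty file, used as a placeholder:

```python
import hashlib

# Hypothetical allowlist a curated internal hub might publish:
# artifact name -> pinned SHA-256 of the approved file.
APPROVED_MODELS = {
    "summarizer-v1.safetensors":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path, chunk=1 << 20):
    """Stream the file so multi-GB artifacts don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def is_approved(name, path):
    """Pass only if the name is catalogued AND the hash matches the pin."""
    pinned = APPROVED_MODELS.get(name)
    return pinned is not None and sha256_of(path) == pinned
```

The same check doubles as provenance evidence: the inventory record of which pinned artifact produced which output is exactly what the licensing section above says most companies lack.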

3. Update policy language: “Cloud services” isn’t enough anymore

Most acceptable use policies talk about SaaS and cloud tools. BYOM requires policy that explicitly covers:

  • Downloading and running model artifacts on corporate endpoints

  • Acceptable sources

  • License compliance requirements

  • Rules for using models with sensitive data

  • Retention and logging expectations for local inference tools

This doesn’t need to be heavy-handed. It needs to be unambiguous.

The perimeter is shifting back to the device

For a decade we moved security controls “up” into the cloud. Local inference is pulling a meaningful slice of AI activity back “down” to the endpoint.

5 signals shadow AI has moved to endpoints:

  • Large model artifacts: Unexplained storage consumption by .gguf or .pt files.

  • Local inference servers: Processes listening on ports like 11434 (Ollama).

  • GPU utilization patterns: Spikes in GPU usage while offline or disconnected from VPN.

  • Lack of model inventory: Inability to map code outputs to specific model versions.

  • License ambiguity: Presence of “non-commercial” model weights in production builds.

Shadow AI 2.0 isn’t a hypothetical future; it’s a predictable consequence of fast hardware, easy distribution, and developer demand. CISOs who focus only on network controls will miss what’s happening on the silicon sitting right on employees’ desks.

The next phase of AI governance is less about blocking websites and more about controlling artifacts, provenance, and policy at the endpoint, without killing productivity.

Jayachander Reddy Kandakatla is a senior MLOps engineer.

Welcome to the VentureBeat community!

Our guest posting program is where technical experts share insights and provide neutral, non-vested deep dives on AI, data infrastructure, cybersecurity and other cutting-edge technologies shaping the future of enterprise.

Read more from our guest post program — and check out our guidelines if you’re interested in contributing an article of your own!


Tech

Five signs data drift is already undermining your security models


Data drift happens when the statistical properties of a machine learning (ML) model’s input data change over time, eventually rendering its predictions less accurate. Cybersecurity professionals who rely on ML for tasks like malware detection and network threat analysis find that undetected data drift can create vulnerabilities. A model trained on old attack patterns may fail to see today’s sophisticated threats. Recognizing the early signs of data drift is the first step in maintaining reliable and efficient security systems.

Why data drift compromises security models

ML models are trained on a snapshot of historical data. When live data no longer resembles this snapshot, the model’s performance dwindles, creating a critical cybersecurity risk. A threat detection model may generate more false negatives by missing real breaches or create more false positives, leading to alert fatigue for security teams.

Adversaries actively exploit this weakness. In 2024, attackers used echo-spoofing techniques to bypass email protection services. By exploiting misconfigurations in the system, they sent millions of spoofed emails that evaded the vendor’s ML classifiers. This incident demonstrates how threat actors can manipulate input data to exploit blind spots. When a security model fails to adapt to shifting tactics, it becomes a liability.

5 indicators of data drift

Security professionals can recognize the presence of drift (or its potential) in several ways.

1. A sudden drop in model performance

Accuracy, precision, and recall are often the first casualties. A consistent decline in these key metrics is a red flag that the model is no longer in sync with the current threat landscape.

Consider Klarna’s success: Its AI assistant handled 2.3 million customer service conversations in its first month and performed work equivalent to 700 agents. This efficiency drove a 25% decline in repeat inquiries and reduced resolution times to under two minutes.

Now imagine if those parameters suddenly reversed because of drift. In a security context, a similar drop in performance does not just mean unhappy clients — it also means successful intrusions and potential data exfiltration.

2. Shifts in statistical distributions

Security teams should monitor the core statistical properties of input features, such as the mean, median, and standard deviation. A significant change in these metrics from training data could indicate the underlying data has changed.

Monitoring for such shifts enables teams to catch drift before it causes a breach. For example, a phishing detection model might be trained on emails with an average attachment size of 2MB. If the average attachment size suddenly jumps to 10MB due to a new malware-delivery method, the model may fail to classify these emails correctly.
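As a minimal sketch of this kind of monitoring (thresholds illustrative), the check below flags a live window whose mean sits several training standard deviations from the training mean:

```python
import statistics

def stat_shift(train_values, live_values, threshold=3.0):
    """Flag drift when the live mean is more than `threshold` training
    standard deviations away from the training mean. A crude but cheap
    first-pass check before heavier tests like KS or PSI."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    z = abs(statistics.mean(live_values) - mu) / sigma
    return z > threshold
```

With the attachment-size example above, a training window averaging 2MB and a live window averaging 10MB trips the check immediately, while normal fluctuation around 2MB does not.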

3. Changes in prediction behavior

Even if overall accuracy seems stable, distributions of predictions might change, a phenomenon often referred to as prediction drift.

For instance, if a fraud detection model historically flagged 1% of transactions as suspicious but suddenly starts flagging 5% or 0.1%, either something has shifted or the nature of the input data has changed. It might indicate a new type of attack that confuses the model or a change in legitimate user behavior that the model was not trained to identify.
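A lightweight version of this check compares the observed flag rate against the historical baseline using a normal approximation to the binomial; the three-standard-error tolerance here is an illustrative choice, not a recommendation:

```python
def flag_rate_drifted(baseline_rate, flagged, total, tolerance=3.0):
    """Return True when the observed flag rate sits more than `tolerance`
    standard errors from the historical baseline rate."""
    observed = flagged / total
    # Standard error of a proportion under the baseline rate.
    se = (baseline_rate * (1 - baseline_rate) / total) ** 0.5
    return abs(observed - baseline_rate) > tolerance * se
```

For the example in the text: against a 1% baseline over 10,000 transactions, a jump to 5% flagged is dozens of standard errors out, while 1.02% is well within noise.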

4. An increase in model uncertainty

For models that provide a confidence score or probability with their predictions, a general decrease in confidence can be a subtle sign of drift.

Recent studies highlight the value of uncertainty quantification in detecting adversarial attacks. If the model becomes less sure about its forecasts across the board, it is likely facing data it was not trained on. In a cybersecurity setting, this uncertainty is an early sign of potential model failure, suggesting the model is operating on unfamiliar ground and that its decisions might no longer be reliable.

5. Changes in feature relationships

The correlation between different input features can also change over time. In a network intrusion model, traffic volume and packet size might be highly linked during normal operations. If that correlation disappears, it can signal a change in network behavior that the model may not understand. A sudden feature decoupling could indicate a new tunneling tactic or a stealthy exfiltration attempt.
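A simple sketch of this check computes the Pearson correlation between two features over a training window and a live window and flags a large move; the 0.5 delta is an illustrative threshold:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient for two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def correlation_decoupled(train_xs, train_ys, live_xs, live_ys,
                          max_delta=0.5):
    """Flag when the correlation between two features (e.g. traffic
    volume vs. packet size) moves by more than `max_delta`."""
    return abs(pearson(train_xs, train_ys)
               - pearson(live_xs, live_ys)) > max_delta
```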

Approaches to detecting and mitigating data drift

Common detection methods include the Kolmogorov-Smirnov (KS) test and the population stability index (PSI). These compare the distributions of live and training data to identify deviations. The KS test determines if two datasets differ significantly, while the PSI measures how much a variable’s distribution has shifted over time.
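For illustration, both statistics can be computed in a few lines of dependency-free Python; a production system would more likely use scipy.stats.ks_2samp and a tuned bucketing scheme:

```python
import bisect
import math

def ks_statistic(a, b):
    """Two-sample KS statistic: largest gap between the empirical CDFs."""
    a, b = sorted(a), sorted(b)
    def cdf(sample, x):
        return bisect.bisect_right(sample, x) / len(sample)
    return max(abs(cdf(a, x) - cdf(b, x)) for x in sorted(set(a) | set(b)))

def psi(expected, actual, buckets=10):
    """Population stability index over equal-width buckets of `expected`.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate, > 0.25 major."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / buckets or 1.0
    def fractions(sample):
        counts = [0] * buckets
        for v in sample:
            # Clamp live values that fall outside the training range.
            i = min(max(int((v - lo) / width), 0), buckets - 1)
            counts[i] += 1
        # Tiny floor avoids log(0) when a bucket is empty on one side.
        return [max(c / len(sample), 1e-6) for c in counts]
    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```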

The mitigation method of choice often depends on how the drift manifests, as distribution changes may occur suddenly. For example, customers’ buying behavior may change overnight with the launch of a new product or a promotion. In other cases, drift may occur gradually over a more extended period. That said, security teams must learn to adjust their monitoring cadence to capture both rapid spikes and slow burns. Mitigation will involve retraining the model on more recent data to reclaim its effectiveness.

Proactively manage drift for stronger security

Data drift is an inevitable reality, and cybersecurity teams can maintain a strong security posture by treating detection as a continuous and automated process. Proactive monitoring and model retraining are fundamental practices to ensure ML systems remain reliable allies against developing threats.

Zac Amos is the Features Editor at ReHack.


Tech

ESPN on Disney Plus Is Expanding to More Countries


More people will be able to watch ESPN programming through Disney Plus with Tuesday’s launch of ESPN on Disney Plus in Europe and select Asia-Pacific markets. 

With expansion into more than 50 countries and territories in those regions, people in 100 markets worldwide can now stream ESPN content through Disney Plus, according to a Disney Plus news release. The offering brings live sporting events and studio shows together with general entertainment and family programming in a single app.

In markets including Japan, Korea, Singapore, Taiwan and Hong Kong, a curated selection of English‑language ESPN sports programming is now available on Disney Plus, according to the release. Disney Plus also said, “the initial [ESPN on Disney Plus] offering will vary by market but will grow to thousands of live events over the next year.” 

Programming includes US coverage of the NBA and NHL starting with the 2026-27 season, college sports and more live events. Disney Plus subscribers can watch ESPN’s 30 for 30 documentary collection and select studio shows.

Pre-existing sports content on Disney Plus in Europe includes the UEFA Women’s Champions League, La Liga in the UK and Ireland and the Copa del Rey, UEFA Europa League, UEFA Conference League and DFB Pokal in the Nordic countries, according to Disney Plus.


People in Europe and select Asia-Pacific markets just need a Disney Plus subscription to watch ESPN content on Disney Plus. In the US, Disney Plus standalone subscribers can access a curated selection of live sports events, studio shows, and ESPN films, but must subscribe to Disney Plus and ESPN Unlimited to watch all available ESPN programming on the platform.

The ESPN on Disney Plus offering is also available to people in Latin America, the Caribbean, Australia and New Zealand.


Tech

Amazon’s Fire TVs risk being left in the doldrums by Hisense and TCL’s Mini LEDs


I’ve reviewed a few Amazon Fire TV Series models over the last few years, and generally, I’ve found them to be solid enough TVs.

I’ve always had the suspicion that they could be better for picture quality, and certainly a little less expensive, but then when Amazon’s sales event comes around, the TVs fall to prices that are verging on impulse buy if you want a cheap TV.

I don’t think you could say the same about Amazon’s TVs now.

Having reviewed the newest Fire TV 4-Series, I found it underwhelming. The problems were multiple. For one, it didn’t seem to be a big enough upgrade on the previous generation, at least from a performance perspective.

Secondly, the competition has heated up, or to be more exact, they’ve got cheaper. Hisense and TCL’s Mini LEDs can now be had for around the same price, if not less than, Amazon’s Direct LED TVs.

The less expensive Fire TVs are no longer the value-led proposition they were a few years ago. And by undercutting Amazon’s own QLED and Mini LED models, the more expensive Fire TVs could be in trouble too.

An aggressive expansion…

Hisense 65U7Q Pro TV lifestyle (Image credit: Trusted Reviews)

Hisense’s approach to the UK TV market has been a gradual one, offering value-focused TVs similar to Amazon’s Fire TVs while adding premium-priced TVs over time. It’s not interested in OLED (though it does offer an OLED model) as it sees no point in competing with LG and Samsung when the playing field is heavily weighted in their favour. Instead, it wants to make its mark with Mini LEDs.

TCL entered the UK market later than Hisense and has been playing catch-up. Its aggressive pricing to gain market share has rather unbalanced the market – and it’s working. From the bits of data I’ve seen here and there, its share of the market is on an upward trend, whereas other, more established players have stagnated or shrunk in the last few years.

Both have made the play for Mini LED, bringing sizeable brightness, wide-ranging colours and more precise backlighting for black levels and contrast down to a price that some other TV manufacturers might baulk at.

Right now you can get a Hisense 55-inch U7Q for £599, and a TCL 55-inch C6KS for £426. The 55-inch Fire TV 4-Series is down to £339, but you can see that there’s less room for manoeuvre with Mini LED prices coming down.

Amazon needs to refocus on performance

Amazon Fire TV 4-Series 2026 (Image credit: Trusted Reviews)

I think overall that Amazon’s Fire TVs can be considered a solid proposition, but they do need to offer better performance.

The focus has been on value, but with a TCL Mini LED hitting nearly 1000 nits of brightness against a budget Fire TV 4-Series that can only do 350 nits, there’s a chasm, and it’s only going to grow bigger over subsequent years. Amazon needs to pull its finger out.

Amazon was the brand that was undercutting the likes of Sony, Panasonic and LG, but that’s now changed with the rise of the Chinese brands. Moreover, the best Fire TVs are no longer made by Amazon but by its partners.

Fire TVs made by JVC were the epitome of bang average, while the likes of Toshiba offered an even cheaper alternative, but Panasonic made better-performing Fire TVs. As well as the risk from TCL and Hisense on the pricing side, there’s a risk that Amazon’s TVs get left behind by other brands. Imagine a world where Amazon’s TVs were neither the best value nor the best performing. Would you buy one if they fulfilled neither promise?

I don’t doubt that they’re still selling well at the moment, so this acts as more of a warning, but Amazon’s Fire TVs need a revamp, especially from a performance perspective, because right now it feels as if its TVs are retreading old ground rather than moving forward.

The playing field has altered quite significantly in the last few years and as I wrote in my review for the Fire TV 4-Series, if you’re standing still and others are moving past you, then you might as well be going backwards.


Tech

‘Euphoria’ Season 3: How to Watch the Premiere Episode


It may be hard to believe that Euphoria’s last season wrapped up in 2022 (at least for me and my TikTok “For You” page, where I still see 4-year-old clips on the regular). The HBO drama will soon premiere its third and possibly final season.

Season 3 takes place five years after season 2 (see our finale recap here), well after high school. The new season once again stars Zendaya, Hunter Schafer, Jacob Elordi, Sydney Sweeney, Alexa Demie, Maude Apatow, Colman Domingo and Eric Dane. It adds new guest stars such as Sharon Stone, Rosalía, Danielle Deadwyler, Natasha Lyonne and Trisha Paytas. According to an official synopsis, season 3 sees “a group of childhood friends wrestle with the virtue of faith, the possibility of redemption and the problem of evil.”

While the service has swapped from HBO Max to Max and back to HBO Max again in the time it’s taken for Euphoria to return to TV, you’ll be able to tune into the HBO streaming service for new episodes each week. Here’s a release schedule for Euphoria season 3.

When to watch Euphoria season 3 on HBO Max

In the US? You can stream the Euphoria season 3 premiere on HBO Max on Sunday, April 12, at 9 p.m. ET (6 p.m. PT). It’ll also air on HBO at 9 p.m. ET and PT. Subsequent installments will debut on Sundays through May 31.

  • Episode 1, Ándale: April 12
  • Episode 2, America My Dream: April 19
  • Episode 3, The Ballad of Paladin: April 26
  • Episode 4, Kitty Likes to Dance: May 3
  • Episode 5, This Little Piggy: May 10
  • Episode 6, Stand Still and See: May 17
  • Episode 7, Rain or Shine: May 24
  • Episode 8, In God We Trust: May 31

HBO Max last increased its plan prices in October, raising the ad-supported tier to $11 per month, the ad-free Standard tier to $18.50 per month and the ad-free Premium tier to $23 per month.

You might be able to save money by paying upfront for 12 months of HBO Max, which costs less than paying month-by-month for a year. In addition to HBO Max’s standalone plans, you can bundle it with Disney Plus and Hulu, either with ads for all three services or without.


Tech

The biopharma senior associate whose career was fuelled by FUEL


Amgen’s Luke Sheppard discusses Ireland’s biopharma space and how his career trajectory was powered by graduate opportunities.

“I was always interested in science at school, especially biology and physics. The turning point came when I spent two summers working with a mechanical engineer on the construction of a biopharmaceutical facility,” said Luke Sheppard, a senior associate for syringe manufacturing at Amgen.

“Seeing the facility take shape helped me to connect what I was learning in the classroom with the industry in real life. That experience ignited my passion and led me to study biotechnology at DCU.” 

He completed an internship with Amgen during his undergraduate studies and moved on to Amgen’s FUEL graduate programme. He said, “Alongside this, I completed a master’s in pharma and biopharma engineering at UCC, which ties in closely with the work I do now.”

Can you describe Ireland’s biopharmaceutical space?

Ireland’s biopharmaceutical sector is dynamic and well-established. It is recognised as a centre of excellence for manufacturing. The sector is also highly connected, with a healthy sense of competition and a strong shared awareness of best practice. For anyone with a STEM background, it is an attractive industry because it offers real depth in the work as well as a wide range of potential career paths.

What is your day-to-day like if there is such a thing?

My role is quite diverse. My time is split between supporting and driving operations, contributing to projects and seeking solutions. Part of the day can involve reviewing data or meeting leadership to discuss strategy. Equally, I could be troubleshooting an issue on the production floor. The variety keeps things interesting. Collaboration is a big part of the job. You are constantly working with specialists and moving things forward together to achieve the same goal. 

What skills do you utilise in your role and are any unexpected?

Technical knowledge is extremely important, but the skill that matters most is the ability to work as part of a team and to support colleagues. Clear, concise communication, relationship‑building and dedication take centre stage. There will always be new systems to learn, processes to improve and tools to adopt, but real progress ultimately depends on how well you work with others and how quickly you can build trust. The stronger your working relationships, the easier it is to ask questions, gain input and work efficiently when challenges arise. In a manufacturing environment, strong relationships truly make the difference.

You moved through the ranks via the FUEL programme, how was the experience?

The Amgen FUEL programme was an incredible experience as it gave me exposure to the highest levels of the business early on in my career. I completed three rotations across process development, quality assurance and utilities engineering. Each rotation lasted eight to nine months. In a relatively short time, I had to integrate into new teams, build relationships fast and learn new processes to contribute to meaningful work. Rotations teach resilience and determination, as well as creating visibility for participants. I had the opportunity to present my work to senior site and European leaders, which accelerated my learning and professional development. The programme has allowed me to gain a strong understanding of operations and an insight into decisive leadership on the issues that matter most to our industry.

How can mentorship and internship opportunities positively impact a young person’s career in the long-term?

Mentorships and internships can have a long-lasting, positive impact. An internship allows graduates to experience the pace, teamwork and problem-solving involved in a working environment, which is difficult to replicate in a classroom. It can also help you understand what type of work suits you best. Mentorship adds another dimension, providing early-stage professionals with a broader perspective of industry and career development. Mentors can offer guidance, challenge thinking, and help you to spot career development opportunities that you may otherwise overlook. Over time, this support can make a meaningful difference in shaping long‑term career direction.

What do you enjoy most about your role?

I thrive on continued commitment, resilience and integrity on the issues that matter most to my team. I enjoy the variety of problem-solving, teamwork and planning to ensure multiple priorities are being achieved. I have grown personally and professionally by advancing my technical and analytical capabilities. I have also significantly broadened my range of soft skills. 

Have you any predictions for how the biopharma space might evolve in 2026?

I expect regulation, automation and AI to shape the industry’s trajectory over the coming years. There is greater regulatory focus on reducing human interaction in manufacturing processes and tightening controls around unit operations. AI will play an increasingly central role, supporting research and process optimisation. By analysing real time data effectively, AI capabilities will identify anomalies and patterns, helping production line teams to work more efficiently.

Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.




Tech

Asus ROG Kithara review: Asus goes hi-fi with its audiophile headset


Why you can trust TechRadar


We spend hours testing every product or service we review, so you can be sure you’re buying the best. Find out more about how we test.

Asus ROG Kithara: one-minute review

There are a number of gaming headsets available that support high-res audio, such as the SteelSeries Arctis Nova Elite, but the new Asus ROG Kithara is one of the first we’ve seen that really takes the plunge into the challenging waters of the specialist hi-fi market.



Tech

Critical Marimo pre-auth RCE flaw now under active exploitation



Hackers started exploiting a critical vulnerability in the Marimo open-source reactive Python notebook platform just 10 hours after its public disclosure.

The flaw, tracked as CVE-2026-39987, allows remote code execution without authentication in Marimo versions 0.20.4 and earlier. GitHub assigned it a critical severity score of 9.3 out of 10.

According to researchers at cloud-security company Sysdig, attackers created an exploit from the information in the developer’s advisory and immediately started using it in attacks that exfiltrated sensitive information.


Marimo is an open-source Python notebook environment, typically used by data scientists, ML/AI practitioners, researchers, and developers building data apps or dashboards. It is a fairly popular project, with 20,000 GitHub stars and 1,000 forks.

CVE-2026-39987 is caused by the WebSocket endpoint ‘/terminal/ws’ exposing an interactive terminal without proper authentication checks, allowing connections from any unauthenticated client.


This gives direct access to a full interactive shell, running with the same privileges as the Marimo process.

Marimo disclosed the flaw on April 8 and yesterday released version 0.23.0 to address it. The developers noted that the flaw affects users who deployed Marimo as an editable notebook, and those who expose Marimo to a shared network using --host 0.0.0.0 while in edit mode.
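For operators unsure whether their deployment predates the fix, the affected range reduces to a simple version comparison. A minimal sketch in Python, using the version numbers from the advisory as reported above; `parse_version` and `needs_upgrade` are illustrative helpers, not Marimo APIs, and handle only plain dotted release strings (obtain the installed version however your environment allows, e.g. `pip show marimo`):

```python
def parse_version(v: str) -> tuple[int, ...]:
    """Turn a dotted version string like '0.20.4' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

# 0.23.0 is the first release with the CVE-2026-39987 fix.
FIXED = parse_version("0.23.0")

def needs_upgrade(installed: str) -> bool:
    """True if the installed version predates the patched 0.23.0 release."""
    return parse_version(installed) < FIXED
```

Tuple comparison keeps this correct where naive string comparison would fail (e.g. "0.9.0" vs "0.20.4").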

Exploitation in the wild

Within the first 12 hours after the vulnerability details were disclosed, 125 IP addresses began reconnaissance activity, according to Sysdig.

Less than 10 hours after the disclosure, the researchers observed the first exploitation attempt in a credential theft operation.


The attacker first validated the vulnerability by connecting to the /terminal/ws endpoint and executing a short scripted sequence to confirm remote command execution, disconnecting within seconds.

Shortly after, they reconnected and began manual reconnaissance, issuing basic commands such as pwd, whoami, and ls to understand the environment, followed by directory navigation attempts and checks for SSH-related locations.

Next, the attacker focused on credential harvesting, immediately targeting the .env file and extracting environment variables, including cloud credentials and application secrets. They then attempted to read additional files in the working directory and continued probing for SSH keys.

Stealing credentials (Source: Sysdig)

The entire credential access phase was completed in less than three minutes, according to a Sysdig report published this week.

Roughly an hour later, the attacker returned for a second exploitation session using the same exploit sequence.


The researchers say that behind the attack appears to be a “methodical operator” with a hands-on approach, rather than automated scripts, focusing on high-value objectives such as stealing .env credentials and SSH keys.

The attackers did not attempt to install persistence mechanisms, deploy cryptominers, or plant backdoors, suggesting a quick, stealthy operation.

Marimo users are recommended to upgrade to version 0.23.0 immediately, monitor WebSocket connections to ‘/terminal/ws,’ restrict external access via a firewall, and rotate all exposed secrets.
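The monitoring recommendation can be approximated with a simple access-log scan. A hedged sketch in Python, assuming a common-log-format log in front of the Marimo process; the sample lines, timestamps, and IP addresses are placeholders (drawn from RFC 5737 documentation ranges), not real attacker data:

```python
import re

# The unauthenticated terminal endpoint named in the advisory.
ENDPOINT = "/terminal/ws"

def suspicious_lines(log_lines):
    """Yield (client_ip, raw_line) for any request that touches the terminal WebSocket."""
    pattern = re.compile(r"^(\S+).*" + re.escape(ENDPOINT))
    for line in log_lines:
        match = pattern.search(line)
        if match:
            yield match.group(1), line.rstrip()

# Placeholder log lines in common log format.
sample = [
    '203.0.113.9 - - [08/Apr/2026:10:15:00 +0000] "GET /terminal/ws HTTP/1.1" 101 -',
    '198.51.100.4 - - [08/Apr/2026:10:15:02 +0000] "GET /index.html HTTP/1.1" 200 512',
]
hits = list(suspicious_lines(sample))
```

In a real deployment, any hit from an address you don't recognise is a signal to rotate secrets, since the attack described above completed credential theft in minutes.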

If upgrading is not possible, an effective mitigation is to block or disable access to the ‘/terminal/ws’ endpoint entirely.
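If Marimo runs behind a reverse proxy, that blocking can be applied at the proxy layer. A sketch for nginx, assuming nginx already fronts the Marimo process; the upstream address is a placeholder for wherever Marimo listens in your deployment:

```nginx
# Deny the unauthenticated terminal WebSocket outright.
location /terminal/ws {
    deny all;
}

# Everything else continues to proxy to the Marimo process.
location / {
    proxy_pass http://127.0.0.1:2718;  # adjust host/port to your deployment
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```

This is a stopgap, not a substitute for upgrading: it only protects traffic that actually passes through the proxy, so the Marimo port itself should not be reachable directly.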




Tech

Week in Review: Most popular stories on GeekWire for the week of April 5, 2026


Get caught up on the latest technology and startup news from the past week. Here are the most popular stories on GeekWire for the week of April 5, 2026.

Sign up to receive these updates every Sunday in your inbox by subscribing to our GeekWire Weekly email newsletter.

Most popular stories on GeekWire

