Technology

How open-source LLMs are disrupting cybersecurity at scale



Open-source large language models (LLMs) continue to revolutionize the cybersecurity landscape, serving as a strong catalyst for increasing innovation and enabling startups and established vendors alike to accelerate time-to-market.

From new generative AI applications to advanced security tools, these models are proving to be the foundation of the future of gen AI-based cybersecurity. Open-source models gaining traction in cybersecurity include Meta’s LLaMA 2 and LLaMA 3.2, Technology Innovation Institute’s Falcon, Stability AI’s StableLM, and those hosted by Hugging Face, including BigScience’s BLOOM. All of these models are seeing growing adoption and use, thanks in large part to their greater cost-effectiveness, flexibility and transparency.

Cybersecurity software providers face a growing set of governance and licensing challenges while scaling their platforms to keep pace with the fast-moving nature of open-source LLM development. Designing an architecture that can quickly adapt to, and capitalize on, the features the newest open-source LLMs provide is challenging.


Itamar Sher, CEO and co-founder of Seal Security, recently sat down with VentureBeat (virtually) to discuss the foundational yet evolving role of open-source LLMs in their operations. “Open-source LLMs enable us to scale security patching for open-source components in ways that closed models cannot,” he said.

The ability to scale models quickly is critical for companies like Seal, which use open-source components to ensure the rapid deployment of patches across different environments. He added that “open-source LLMs give us access to a community that continuously improves models, offering a layer of intelligence and speed that wouldn’t be possible with proprietary systems.”

Open-source LLMs’ growing importance in cybersecurity

Cybersecurity vendors have long relied on making their apps, tools and platforms proprietary to lock customers into a given solution, especially in the areas of threat detection and mitigation. VentureBeat is hearing there’s an intense backlash against this strategy, however, which is further accelerating open-source LLMs’ popularity.

Gartner’s Hype Cycle for Open-Source Software, 2024 reflects the rising prominence of open-source LLMs, placing them at the Peak of Inflated Expectations. That placement matches what VentureBeat is hearing about a surge in interest and adoption across the cybersecurity vendor landscape and within enterprises.


Credit: Gartner, Inc., Hype Cycle for Open-Source Software, 2024 (August 8, 2024, ID: G00811366).

The Hype Cycle shows that open-source LLMs are still an emerging technology, with market penetration between 5% and 20%. The plateau for this technology is predicted to be reached within the next two to five years, underscoring its rapid growth and expanding role in cybersecurity.

VentureBeat is seeing more cybersecurity startups capitalize on open-source LLMs’ customization flexibility and scale in their platform, app and tool strategies. A widespread use case is fine-tuning models to address domain-specific needs, from enhancing real-time threat detection to improving vulnerability management.

Sher said, “By integrating open-source LLMs, we can customize models for specific threats and use cases, which allows us to remain agile and responsive to evolving cybersecurity challenges.”
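To make this concrete, here is a minimal sketch of the kind of customization described above: prompting an open-source instruct model pulled from Hugging Face to triage a suspicious log line. The model name, prompt and labels are illustrative assumptions, not a description of Seal Security’s actual pipeline.

    # Minimal sketch: triaging a log line with an open-source instruct model.
    # Assumes the Hugging Face "transformers" library; the model below is an
    # illustrative choice and can be swapped for any open model.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="meta-llama/Llama-3.2-1B-Instruct",  # hypothetical choice
    )

    log_line = "Failed password for root from 203.0.113.7 port 22 ssh2 (x500 in 60s)"
    prompt = (
        "You are a security triage assistant. Classify the following log line as "
        "'benign', 'suspicious' or 'malicious' and give a one-sentence reason.\n"
        f"Log: {log_line}\nAnswer:"
    )

    result = generator(prompt, max_new_tokens=60, do_sample=False)
    print(result[0]["generated_text"])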

Comparing the advantages and challenges of open-source LLMs

Open-source LLMs bring several advantages to cybersecurity systems development and operations, including the following:


Customization, scale and flexibility: One of the main drivers for adopting open-source LLMs, and a big part of why cybersecurity companies are standardizing on them, is the ability to quickly modify the models for specific use cases. Seal Security’s integration of LLMs into its security platform, apps, tools and services illustrates how companies can use these models to streamline patch management across open-source components. John Morello, CTO and co-founder of Gutsy, told VentureBeat in a recent interview that the open-source nature of Google’s BERT language model allows Gutsy to customize and train its model for specific security use cases while maintaining privacy and efficiency (see the fine-tuning sketch after this list).

Community collaboration: Open-source LLMs benefit from fast-growing developer communities that push their boundaries daily to solve complex cybersecurity challenges. These communities set a fast pace of continuous innovation, enabling companies, developers and university researchers to benefit from shared insights and improvements. Seal Security, for example, has aligned itself with MITRE’s CVE Numbering Authority (CNA) program to enhance collaboration around open-source vulnerabilities.

Reducing vendor lock-in: Open-source models offer enterprises a way to avoid vendor lock-in, giving them more control over costs and reducing dependency on proprietary systems. VentureBeat is seeing this become a pivotal issue for the future of cybersecurity, with flexibility as the goal: responding to threats quickly and deploying patches consistently are vital to that future.
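As referenced in the customization item above, here is a compact, hedged sketch of what fine-tuning a BERT-style model on security text can look like. It assumes the transformers and datasets libraries; the two-example dataset and labels are invented for illustration and say nothing about Gutsy’s real training data.

    # Hedged fine-tuning sketch, assuming the "transformers" and "datasets"
    # libraries. The tiny inline dataset and labels are invented placeholders.
    from datasets import Dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    data = Dataset.from_dict({
        "text": ["Multiple failed logins from one IP", "Routine nightly backup completed"],
        "label": [1, 0],  # 1 = suspicious, 0 = benign (illustrative labels)
    })

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2
    )

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

    tokenized = data.map(tokenize, batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="security-bert", num_train_epochs=1,
                               per_device_train_batch_size=2, logging_steps=1),
        train_dataset=tokenized,
    )
    trainer.train()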

However, these benefits are not without challenges. Gartner notes in its research that open-source LLMs often require significant infrastructure investments, which can create long-term operational challenges for companies that lack well-funded, well-staffed in-house IT and security teams.


The licensing complexities associated with open-source models can present legal and compliance risks as well. Sher explained that “open-source models give us transparency, but managing their life cycles and ensuring compliance is still a major concern.”
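Licensing risk is partly a visibility problem: teams often cannot say which licenses ship with their dependency tree. As a small, standard-library-only illustration (Python packages only, and only a first pass, not a legal review), declared license metadata can at least be enumerated:

    # Hedged sketch: list the declared licenses of installed Python packages as a
    # starting point for an open-source license inventory. Declared metadata can
    # be missing or inaccurate, so treat this as a first pass, not a compliance check.
    from importlib import metadata

    for dist in metadata.distributions():
        name = dist.metadata.get("Name", "unknown")
        license_field = dist.metadata.get("License") or "not declared"
        print(f"{name}: {license_field}")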

Open-source LLMs’ cybersecurity contributions are growing

VentureBeat is seeing cybersecurity providers adopting open-source LLMs as core to their platforms, gaining a competitive advantage with their improvements in threat detection and response. Seal Security has been able to leverage open-source models for real-time detection and vulnerability management by integrating them into their security patching systems. According to Sher, “Our infrastructure is designed to quickly switch between different LLMs, depending on the threat landscape, ensuring that we stay ahead of emerging vulnerabilities.”
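Sher’s point about switching between LLMs depending on the threat landscape can be pictured as a thin routing layer that keeps model choice behind one interface. The task names and model identifiers below are invented placeholders, not Seal Security’s configuration.

    # Hedged sketch of a model-routing layer. The task-to-model mapping and the
    # model names are placeholders; swapping models becomes a registry change.
    from transformers import pipeline

    MODEL_REGISTRY = {
        "patch_generation": "codellama/CodeLlama-7b-Instruct-hf",
        "log_triage": "meta-llama/Llama-3.2-1B-Instruct",
        "default": "mistralai/Mistral-7B-Instruct-v0.2",
    }

    _cache = {}

    def get_model(task: str):
        """Return a cached text-generation pipeline for the given task."""
        name = MODEL_REGISTRY.get(task, MODEL_REGISTRY["default"])
        if name not in _cache:
            _cache[name] = pipeline("text-generation", model=name)
        return _cache[name]

    triage = get_model("log_triage")  # later calls reuse the cached pipeline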

Gartner predicts that small language models, or edge LLMs, will see greater adoption across domain-specific applications, led by cybersecurity. Edge LLMs, by definition, run closer to the data they need to analyze, which allows for faster processing and real-time threat detection.

Edge LLMs are designed to require less computational power, making them easier to manage and less costly to train, and ideal for cybersecurity use cases that require real-time speed and accuracy. Because they can run at the edge, these LLMs can rapidly detect threats in environments where latency is critical, such as IoT devices or remote systems.
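A hedged sketch of the edge pattern described here: running a small quantized model locally with llama-cpp-python so inference happens next to the data. The GGUF file path, prompt and labels are assumptions for illustration.

    # Hedged sketch of on-device inference with a small quantized model, assuming
    # the llama-cpp-python package and a locally downloaded GGUF file (the path is
    # a placeholder).
    from llama_cpp import Llama

    llm = Llama(model_path="./llama-3.2-1b-instruct-q4.gguf", n_ctx=2048)

    event = "Outbound connection from IoT camera to unknown host 198.51.100.9:4444"
    out = llm(
        "Classify this device event as benign, suspicious or malicious, with one reason.\n"
        f"Event: {event}\nAnswer:",
        max_tokens=48,
        temperature=0.0,
    )
    print(out["choices"][0]["text"].strip())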


Protecting against software supply chain attacks

Despite open-source LLMs’ growing contributions, they also come with risks. A significant concern is the rising number of software supply chain attacks. Gartner’s Hype Cycle for Open-Source Software, 2024 notes that open-source components have increasingly become targets for state-sponsored attacks. The mean age of vulnerabilities in open-source codebases is approximately 2.8 years, making it vital for companies to implement, and keep current, patch management and governance systems.
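Keeping patch management current starts with knowing which open-source components carry known vulnerabilities. As one minimal, hedged example (standard library only, and not tied to any vendor mentioned here), the public OSV.dev API can be queried for advisories affecting a specific package version; the package and version below are illustrative.

    # Minimal sketch: query the public OSV.dev vulnerability database for a
    # specific open-source package version. The package name and version are
    # illustrative placeholders.
    import json
    import urllib.request

    query = {
        "package": {"name": "jinja2", "ecosystem": "PyPI"},
        "version": "2.11.2",
    }
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=json.dumps(query).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        vulns = json.loads(resp.read()).get("vulns", [])

    for v in vulns:
        print(v["id"], "-", v.get("summary", "no summary"))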

Seal Security’s recent designation as a CVE Numbering Authority (CNA) positions the provider to play a more central role in reducing the risk of supply chain attacks. The company can now identify, document and assign vulnerabilities through the CVE Program, helping improve the security of open-source code across the industry. Its partnership with MITRE further enhances this capability, allowing Seal to share findings with the broader cybersecurity community.

As Sher emphasized, this collaboration helps enhance security for everyone using open-source software, reinforcing the company’s commitment to protecting the global software ecosystem.

Looking ahead

Open-source LLMs are redefining the cybersecurity landscape for the better by reducing legacy lock-in from proprietary technologies and platforms. VentureBeat is seeing how quickly these models are advancing in terms of accessibility, quality, and speed, making them a viable alternative to proprietary systems.


For companies like Seal Security, the future lies in continuously evolving their open-source LLM capabilities to stay ahead of the ever-changing threat landscape. “We’re constantly evaluating new models and infrastructures to ensure we can provide the best security solutions for our clients,” Sher concluded.



Technology

Intel says its Raptor Lake crashing chip nightmare is over


Vmin Shift Instability Root Cause 

Intel® has localized the Vmin Shift Instability issue to a clock tree circuit within the IA core that is particularly vulnerable to reliability aging under elevated voltage and temperature. Intel has observed that these conditions can lead to a duty cycle shift of the clocks and resulting system instability.

Intel® has identified four (4) operating scenarios that can lead to Vmin shift in affected processors: 

1) Motherboard power delivery settings exceeding Intel power guidance. 

a. Mitigation: Intel® Default Settings recommendations for Intel® Core™ 13th and 14th Gen desktop processors.  

2) eTVB microcode algorithm which was allowing Intel® Core™ 13th and 14th Gen i9 desktop processors to operate at higher performance states even at high temperatures. 

a. Mitigation: microcode 0x125 (June 2024) addresses the eTVB algorithm issue.  

3) Microcode SVID algorithm requesting high voltages at a frequency and duration which can cause Vmin shift. 

a. Mitigation: microcode 0x129 (August 2024) addresses high voltages requested by the processor.  

4) Microcode and BIOS code requesting elevated core voltages which can cause Vmin shift especially during periods of idle and/or light activity. 

a. Mitigation: Intel® is releasing microcode 0x12B, which encompasses the 0x125 and 0x129 microcode updates and addresses elevated voltage requests by the processor during idle and/or light activity periods.  

(A quick way to check the microcode revision loaded on a Linux system follows below.)
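As flagged at the end of the list above, the microcode revision currently loaded is visible to the operating system. The short sketch below is a Linux-only illustration that reads /proc/cpuinfo and compares against 0x12B; it assumes an affected Raptor Lake part and is not an Intel-provided tool.

    # Linux-only sketch: read the microcode revision reported in /proc/cpuinfo and
    # compare it against 0x12B, the update described above. Assumes an affected
    # 13th/14th Gen desktop part; not an official Intel utility.
    TARGET_REVISION = 0x12B

    def loaded_microcode_revision(path="/proc/cpuinfo"):
        with open(path) as f:
            for line in f:
                if line.startswith("microcode"):
                    return int(line.split(":")[1].strip(), 16)
        return None

    rev = loaded_microcode_revision()
    if rev is None:
        print("Could not read a microcode revision (non-x86 or non-Linux system?)")
    elif rev >= TARGET_REVISION:
        print(f"Microcode 0x{rev:X} is loaded; it should include the 0x12B mitigation.")
    else:
        print(f"Microcode 0x{rev:X} is loaded; check your motherboard vendor for a BIOS update.")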


Servers computers

42U Server Rack Cabinet – RK4242BK24 | StarTech.com




This 42U server rack cabinet provides secure storage for standard 19″ rack-mount equipment. The rack is compatible with EIA-310 compliant equipment and supports a weight capacity of 3315 lb. (1500 kg).

To learn more visit StarTech.com


Technology

NYT Strands today — hints, answers and spangram for Saturday, October 5 (game #216)


Strands is the NYT’s latest word game after the likes of Wordle, Spelling Bee and Connections – and it’s great fun. It can be difficult, though, so read on for my Strands hints.

Want more word-based fun? Then check out my Wordle today, NYT Connections today and Quordle today pages for hints and answers for those games.


Technology

Nest Learning Thermostat (4th Gen) vs. Nest Thermostat

Four Nest Learning Thermostats in a line. Credit: Google

The Nest Learning Thermostat (4th Gen) is the latest iteration of Google’s premium smart thermostat, offering a bold new design and a wealth of new functionality. It carries a hefty price tag — and depending on your needs, the more affordable Nest Thermostat might be the wiser option. But what exactly is the difference between the Nest Learning Thermostat (4th Gen) and Nest Thermostat?

From pricing and available colors to features and design, here’s everything you need to know before making a purchase. And be sure to read our full fourth-generation Nest Thermostat review.

Pricing and design

Adjusting the temperature on the Google Nest Thermostat. Credit: Google

The Nest Thermostat is relatively affordable at $130, while the Nest Learning Thermostat (4th Gen) is over twice as expensive at $280. There’s no denying the Nest Learning Thermostat looks better, thanks to a revamped design that features a large borderless display augmented with a stainless steel ring. The Nest Thermostat is no slouch, but its mirrored glass lens and plastic housing definitely feel cheaper.

The Nest Learning Thermostat is available in polished silver, polished obsidian, and polished gold. The Nest Thermostat is available in snow, sand, fog, and charcoal.

If money is a factor, the Nest Thermostat is the easier choice. It may not look as flashy, but it’s shockingly more affordable than the latest Nest Learning Thermostat.

Winner: Nest Thermostat


Installation

A person operating the Google Nest thermostat. Credit: Google

The installation process is largely the same for both devices. You’ll need to pop out your old thermostat, then reconnect the wires to your Nest Thermostat or Nest Learning Thermostat. Google says the Nest Thermostat “works with 85% of systems,” though some systems may require a C-wire. The Nest Learning Thermostat “works with most HVAC systems.” If you live in a home built within the last few decades, you shouldn’t have a problem getting either running.

Be sure to check out the useful Nest Thermostat Compatibility Tool at Google if you have questions about your exact system.

As for the actual installation, you’ll be guided through the process via a mobile app. This offers plenty of diagrams and visuals, so even DIY novices should be able to complete the task. If you run into trouble or if you need to install a C-wire, it may be worth contacting a professional.

Winner: Tie

Features and controls

The digital clock face on the Nest Thermostat. Credit: Phil Nickinson / Digital Trends

As its price would suggest, the Nest Learning Thermostat is packed with features. Its large display is customizable, allowing it to show information such as the temperature, time, weather forecast, outdoor air quality, and more. You can also turn and press on the dial to load additional details. The most compelling reason to pick up this smart thermostat is because it will learn your routine and create a schedule based on your preferences. It’ll also look for ways to save money on your energy bill and give you suggestions for your heating and cooling schedule.

The Nest Thermostat is basic by comparison. While you can still control it remotely via the mobile app, it won’t create automated schedules or recommend energy-saving ideas. Its screen is also much smaller and it doesn’t have access to information like air quality and the weather forecast.


Winner: Nest Learning Thermostat (4th Gen)

Smart home compatibility

The fourth-generation Nest Thermostat, seen in Apple’s Home app, supports Matter, which means it works natively in Apple’s HomeKit ecosystem. Credit: Phil Nickinson / Digital Trends

Just hearing “Nest Thermostat” immediately conjures up thoughts of smart home tech, right? After all, that’s the entire point — you’re able to control any Nest Thermostat from anywhere. That was true from the first Nest Thermostat years ago, and it’s true with this fourth generation.

But perhaps a little more important (if also a little bit on the nerdy side) is that we have to talk about a smart home standard called Matter. That’s a protocol that lets devices work with each other more easily. And it also lets devices from what previously were fairly closed ecosystems — think Google Home on one side, and Apple’s HomeKit on the other — play nicely.

Both the 2020 Nest Thermostat and the 2024 model support Matter. And that means either can work inside Apple’s HomeKit ecosystem — or with any other system that supports Matter.

Winner: This one’s a tie — and that’s a good thing.


Which is the better smart thermostat?

The fourth-generation Nest thermostat showing the current indoor temperature and the target temperature. Credit: Phil Nickinson / Digital Trends

The Nest Learning Thermostat (4th Gen) is without a doubt the better smart thermostat — but that doesn’t mean it’s automatically the best for your needs. While the extra features are great and can help you save money over the years, the hefty upfront investment won’t make sense for all families. But if you have the budget for it, this is the one to get.

For shoppers on a budget, the Nest Thermostat is a nice compromise. It’s missing out on a few powerful features, but it still brings smarts to your heating and cooling without breaking your budget. It’s also nice that you have four unique color options — so you can find a style that looks best in your home.







Servers computers

12U Slide-Out Server Rack – Rotating – 4-Post Rack – PORACK12U | StarTech.com




This 12U slide-out server rack helps you save space while keeping your rack-mount equipment organized and accessible. The compact open-frame rack is easy to pull out and rotate, which makes it easy to install and access your equipment or perform maintenance when needed.

The rack supports a total load capacity of 125 lb. (56.7 kg).


Technology

Android is getting a Trade-In mode for quicker phone inspections


Google is working on a new “Trade-In” mode for Android devices to simplify their inspections while keeping the device secure. It will rely on ADB or Android Debug Bridge, but the mode won’t allow unlimited and unrestricted access.

Why does the Trade-In process often take several days?

Google could soon deploy a new mode that would give technicians quicker and simpler access to a device’s software. Technicians would be able to access the device with the press of a few buttons and perform diagnostic tests.

The trade-in process is rather simple for the end user. Consumers merely have to perform a factory reset of their Android smartphones before handing them over.

Consumers, however, have to wait for a few days to find out if their Android smartphones qualify for the full trade-in value. This is because the backend process for completing a trade-in is complex.


Trade-ins take time because technicians have to perform several tests on the device. External damage is easier to observe. However, there are other inspections that usually involve booting the device and running a series of diagnostic tests. These tests determine if the internal hardware, including the screen, cameras, and other sensors are functioning properly.

Google testing ADB-based ‘Trade-In mode’ for Android smartphones

To perform tests on software and internal hardware, technicians have to go through the setup wizard as if it were a brand-new phone. Experts can quickly skip most screens. However, there could be hundreds of devices in the queue, and repeating this process is not only cumbersome but also time-consuming.

Google is reportedly testing a new mode that would allow technicians to quickly reach the desired screen on the device. The “Trade-In mode” would grant access to ADB on the first screen of the setup wizard. Thereafter, technicians could run their automated diagnostics.

ADB is quite powerful and allows deep access to system apps and services. Hence, Google is restricting the Trade-In mode to a single ADB command before the device asks to complete the setup process.


Technicians can run ‘adb shell tradeinmode enter’. This would allow them to bypass the setup process and put Android into an “evaluation mode.” If activated, the Android OS will schedule a factory reset on the next boot.

Alternatively, technicians can run ‘adb shell tradeinmode getstatus [-challenge CHALLENGE]’ to retrieve diagnostic information about the device and to see whether it passes an attestation challenge. If properly deployed and used, the Trade-In mode could significantly shorten the wait for trade-in approval of an old smartphone.
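Based only on the two commands reported above, a technician-side script might drive them through ADB roughly as follows. This is a speculative sketch: it assumes the adb binary is on PATH and a device sitting at the setup wizard, and since the mode is still in testing, the commands and their output may change.

    # Speculative sketch wrapping the reported commands with Python's subprocess.
    # Assumes adb is installed and a device is connected at the setup wizard;
    # 'tradeinmode' is still under test, so behavior and output may change.
    import subprocess

    def adb_shell(*args):
        """Run an adb shell command and return its stdout as text."""
        result = subprocess.run(
            ["adb", "shell", *args], capture_output=True, text=True, check=True
        )
        return result.stdout.strip()

    # Retrieve diagnostic/attestation status first (command as reported).
    print(adb_shell("tradeinmode", "getstatus"))

    # Enter evaluation mode; per the report, a factory reset is scheduled on next boot.
    print(adb_shell("tradeinmode", "enter"))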


Copyright © 2024 WordupNews.com