Endor Labs AI Model Discovery Provides Critical Security Screening for Users

The AI Model Discovery tool enables users to set safeguards that protect enterprises within existing developer workflows.
Published: January 29, 2025

PALO ALTO, Calif.— Endor Labs, an open-source software security company, has announced a new feature in its signature platform that enables organizations to discover the AI models already in use across their applications and to set and enforce security policies governing which AI models are permitted.

The Endor Labs AI Model Discovery feature directly addresses three critical use cases: it enables application security professionals to discover local open-source AI models used in their application code, evaluate the risks those models introduce, and enforce organization-wide policies for AI model curation and usage. It goes a step further with automated detection, warning developers about policy violations and blocking high-risk models from entering production. This latest effort to help organizations govern AI code further cements Endor Labs as a leader in addressing emerging AI risks for application security programs.

“There’s currently a significant gap in the ability to use AI models safely—the traditional Software Composition Analysis (SCA) tools deployed in many enterprises are designed mainly to track open-source packages, which means they usually can’t identify risks from local AI models integrated into an application,” says Varun Badhwar, co-founder and CEO of Endor Labs. “Meanwhile, product and engineering teams are increasingly turning to open-source AI models to deliver new capabilities for customers. That’s why we’re excited to launch Endor Labs AI Model Discovery, which brings unprecedented security in open-source AI deployment.”

Endor Labs’ Latest Feature Helps Maintain Tighter Control of Technology

The new set of security capabilities is said to complement Endor Scores for AI Models—a recent release that uses 50 out-of-the-box metrics to score every AI model available on Hugging Face (the popular platform for sharing open-source AI models and datasets) across four dimensions: security, popularity, quality, and activity.


Endor Labs points out that training new AI models is costly and time-consuming, so most developers use open-source AI models from Hugging Face and adapt them to their specific purpose. These AI models function as critical application dependencies, yet standard vulnerability scanners can’t accurately analyze them, which presents risk. More than 1 million open-source AI models and datasets are available today through Hugging Face. Endor Labs asserts that its solution spots these AI models, runs them through 50 risk checks, and allows security teams to set critical guardrails, all within existing developer workflows. This gives security teams the same level of visibility and control over AI models that they currently expect for other open-source dependencies, Endor Labs emphasizes.
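For illustration only, the following is a minimal sketch of the kind of guardrail described above: a CI-style script that scans application code for Hugging Face model references and fails the build when a model is not on an approved list. This is not Endor Labs’ implementation or API; the allowlist, the from_pretrained pattern, and the file layout are all assumptions made for the example.

import re
import sys
from pathlib import Path

# Hypothetical allowlist of approved Hugging Face model IDs; in practice this
# policy would be defined organization-wide, not hard-coded in a script.
APPROVED_MODELS = {
    "sentence-transformers/all-MiniLM-L6-v2",
    "distilbert-base-uncased",
}

# Rough pattern for from_pretrained("org/model") calls in application code.
MODEL_REF = re.compile(r"from_pretrained\(\s*['\"]([\w\-./]+)['\"]")

def find_model_refs(root: Path):
    """Yield (file, model_id) pairs for Hugging Face model references."""
    for path in root.rglob("*.py"):
        text = path.read_text(errors="ignore")
        for match in MODEL_REF.finditer(text):
            yield path, match.group(1)

def main(root: str) -> int:
    violations = [
        (path, model)
        for path, model in find_model_refs(Path(root))
        if model not in APPROVED_MODELS
    ]
    for path, model in violations:
        print(f"policy violation: {path} references unapproved model '{model}'")
    # A non-zero exit code blocks the pipeline, mirroring the guardrail of
    # keeping high-risk models out of production described in the article.
    return 1 if violations else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "."))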

The company adds that most users who enjoy the benefits of the latest AI advances in the applications they use every day are unaware of the dangers that may lurk in the software development lifecycle. With these new capabilities from Endor Labs, it says, developers can safely adopt the latest open-source AI models when building the next generation of applications.

“While vendors have rushed to incorporate AI into their security tooling, they’ve largely overlooked a critical need: Securing AI components used in applications,” comments Katie Norton, research manager, DevSecOps and Software Supply Chain Security at IDC.

“IDC research finds that 60% of organizations are choosing open source models over commercial ones for their most important GenAI initiatives, so finding and securing these components is critical for any dependency management program. Vendors like Endor Labs are addressing an urgent need by integrating AI component security directly into software composition analysis (SCA) workflows, while providing meaningful remediation capabilities that don’t overwhelm developers.”

