
BrightOnLABS featured by OWASP


Securing the Future of Agentic AI: BrightOnLABS Contributes to the New OWASP AIVSS Scoring System

At BrightOnLABS, we believe that as AI systems evolve from simple chatbots into autonomous agents capable of taking real-world actions, our approach to security must evolve just as quickly. That is why we are thrilled to announce our contribution to the newly released OWASP AIVSS Scoring System for Agentic AI Core Security Risks (v0.8).

Why This Matters

As AI agents become more deeply integrated into enterprise workflows, they move beyond "predicting text" to "performing tasks." This autonomy introduces a new frontier of vulnerabilities. The OWASP Agentic AI Vulnerability Scoring System (AIVSS) is a community-driven effort to make these risks structured, measurable, and most importantly, easier to communicate across technical and leadership teams.

Our Contribution

BrightOnLABS is honored to be cited as a contributor to this milestone publication. Our team had the opportunity to collaborate specifically on the scoring methodology, shaping sections 3.3, 3.4, and 3.5 of the paper available here.

Our work focused on refining how risk is quantified, ensuring that security professionals have a clear, objective framework to evaluate the impact and likelihood of threats unique to agentic systems. By contributing to these sections, we aim to help organizations move away from "best guesses" and toward data-backed security postures.

Building a Safer AI Ecosystem

Security is not a solo sport. Community-driven initiatives like those from OWASP are essential for creating the standards that will protect the next generation of digital infrastructure.

“It was a real pleasure to contribute to work focused on making agentic AI risk more structured and measurable,” says the BrightOnLABS team. “We believe this kind of collaboration is essential as AI systems become more autonomous and more deeply integrated into our daily environments.”

Need Support Securing Your AI?

The landscape of AI risk is shifting beneath our feet. If you need support securing your AI models and agentic workflows, BrightOnLABS can help. Whether you are looking for a security audit, a risk assessment, or a strategic roadmap for AI integration, our team is ready to ensure your innovation remains protected.

Contact us:
contact@brightonx.ai
contact@brightonlabs.ai

Cheers
BrightOnLABS

