
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.

A tutorial project for beginners on large model application development, integrating practical skills and theoretical knowledge.

Run and manage MCP servers easily and securely using ToolHive.

Breaker AI is an open-source CLI tool for security checks on LLM prompts.

AIHTTPAnalyzer enhances web application security testing by integrating AI capabilities into Burp Suite.

Breaker AI is a CLI tool that detects prompt injection risks and vulnerabilities in AI prompts.

Targeted Adversarial Examples on Speech-to-Text systems.

A CLI that provides a generic automation layer for assessing the security of ML models.

A PyTorch adversarial library for attack and defense methods on images and graphs.

Advbox is a toolbox for generating adversarial examples to test the robustness of neural networks across various frameworks.

A Python toolbox for adversarial robustness research, implemented in PyTorch.

TextAttack is a Python framework for adversarial attacks, data augmentation, and model training in NLP.