
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.

A repository of practical notes on building applications with LLMs.

A beginner-oriented tutorial project on large-model application development, combining practical skills with theoretical background.

Adversarially robust phishing email detection using DistilBERT, adversarial training, and a real-time Gradio interface.
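To illustrate the idea behind adversarial training, here is a minimal FGSM-style sketch on a one-feature logistic model. This is a toy example in pure Python, not the repository's actual DistilBERT pipeline; all names and values are illustrative.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_example(x: float, y: float, w: float, b: float, eps: float) -> float:
    """Perturb input x one FGSM step in the direction that increases the
    binary cross-entropy loss of a logistic model p = sigmoid(w*x + b)."""
    p = sigmoid(w * x + b)
    grad_x = (p - y) * w                       # dLoss/dx for BCE
    return x + eps * math.copysign(1.0, grad_x)

# With w=2, b=0, y=1, x=1: p ≈ 0.88, the gradient is negative,
# so the adversarial example moves x down by eps.
x_adv = fgsm_example(x=1.0, y=1.0, w=2.0, b=0.0, eps=0.1)
print(x_adv)  # 0.9
```

Adversarial training then mixes such perturbed inputs into each training batch so the model learns to classify them correctly as well.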

Targeted Adversarial Examples on Speech-to-Text systems.

A PyTorch adversarial library for attack and defense methods on images and graphs.

Advbox is a toolbox for generating adversarial examples to test the robustness of neural networks across various frameworks.

TextAttack is a Python framework for adversarial attacks, data augmentation, and model training in NLP.
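As a toy illustration of the word-substitution attacks such frameworks automate, the sketch below enumerates single-word synonym swaps. It is not TextAttack's actual API, and the tiny synonym table is a hypothetical stand-in for a real embedding-based candidate generator.

```python
# Hypothetical synonym table; real attacks query word embeddings
# or a language model for substitution candidates.
SYNONYMS = {
    "good": ["great", "fine"],
    "movie": ["film", "picture"],
}

def perturb(sentence: str) -> list[str]:
    """Generate candidate variants by swapping one word at a time."""
    words = sentence.split()
    candidates = []
    for i, word in enumerate(words):
        for sub in SYNONYMS.get(word.lower(), []):
            variant = words.copy()
            variant[i] = sub
            candidates.append(" ".join(variant))
    return candidates

print(perturb("a good movie"))
# ['a great movie', 'a fine movie', 'a good film', 'a good picture']
```

A real attack would score each candidate against the victim model and keep the ones that flip its prediction while preserving meaning.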

A controllable SONAR image generation framework utilizing text-to-image diffusion and GPT prompting for enhanced diversity and realism.

Open-source LLM Prompt-Injection and Jailbreaking Playground for evaluating LLM security vulnerabilities.

A collection of leaked system instructions and prompts from ChatGPT's custom GPTs.

Guard your LangChain applications against prompt injection with Lakera ChainGuard.