A collection of 500+ real-world ML & LLM system design case studies from 100+ companies.
Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.
An AI companion that enhances paper reading with interactive features and a quirky AI professor persona.
Awesome LLM pre-training resources, including data, frameworks, and methods.
A repository for practical notes on building applications using LLM.
Repository accompanying a paper on Red-Teaming for Large Language Models (LLMs).
Curated reading list for adversarial perspective and robustness in deep reinforcement learning.
A Python toolbox for adversarial robustness research, implemented in PyTorch.
TextAttack is a Python framework for adversarial attacks, data augmentation, and model training in NLP.
An adversarial example library for constructing attacks, building defenses, and benchmarking both.
This research proposes defense strategies against prompt injection in large language models to improve their robustness and security against unwanted outputs.