
Hack OpenAI LLMs' System Prompts by Reverse Prompt Engineering.

A curated repository of resources for Prompt Engineering focusing on GPT, ChatGPT, and PaLM.

A GitHub repository for prompt attack-defense, prompt injection, and reverse engineering notes and examples.

A unified evaluation framework for large language models.

A repository of the top 100 prompts on GPTStore for learning and improving prompt engineering.

A dataset of 15,140 ChatGPT prompts, including 1,405 jailbreak prompts, collected from various platforms for research purposes.