A curated collection of production-ready AI prompts, organised for humans and machines alike.
- Ready-to-Run: Each prompt is a self-contained YAML file with a clear `role`, `objective`, and `requirements`.
- Searchable & Hackable: Browse prompts on the website or access them programmatically with the provided tools.
- Quality-Gated: Every contribution is linted and validated by CI to maintain high standards.
- Includes AI System Prompts: Contains system prompts from a variety of AI models, including Codex, ChatGPT, Claude, Cluely, Lovable, and more.
- Community-Driven: Contributions are welcome via Discussions, Issues, or Pull Requests.
Follow these steps to set up the project and its tools on your local machine.
First, clone the repository and navigate into the project directory:
```bash
git clone https://github.com/juliusbrussee/the-prompt-library.git
cd the-prompt-library
```

It is recommended to use a virtual environment. Then, install the required Python packages using pip:
```bash
# Create and activate a virtual environment (optional but recommended)
python3 -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install -r src/prompt_toolkit/requirements.txt
```

The library uses a JSON index for fast prompt searching. Build it by running:
```bash
python3 -m src.prompt_toolkit.build_index
```

You only need to run this once initially, or whenever you add or modify prompt files.
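If you want to query the index directly from your own code, a minimal keyword search might look like the sketch below. This is an illustration only: it assumes the index is a JSON array of records with `name` and `tags` keys, which may differ from the actual format produced by `build_index`.

```python
import json

# Hypothetical index format: a JSON array of prompt records.
# The real structure is produced by src.prompt_toolkit.build_index and may differ.
index = json.loads("""
[
  {"name": "unit-test-generator", "tags": ["testing", "python"]},
  {"name": "code-reviewer", "tags": ["review", "quality"]}
]
""")

def find(query: str) -> list[str]:
    """Return prompt names whose name or tags contain the query keyword."""
    q = query.lower()
    return [
        rec["name"]
        for rec in index
        if q in rec["name"].lower() or any(q in t.lower() for t in rec["tags"])
    ]

print(find("test"))  # -> ['unit-test-generator']
```

For anything beyond a quick script, prefer the provided toolkit commands, which stay in sync with the real index format.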
You can interact with the prompt library in several ways: as a command-line tool, by running automated workflows, or by connecting to it as an MCP server.
The interactive script (src/prompt_toolkit/interactive.py) is the primary way to find and use prompts.
- `find <query>`: Searches the library for prompts matching your keywords.
- `get <query>`: Retrieves the single best-matching prompt.
- `get <query> --interactive`: Starts a guided session to select a prompt and fill in its placeholders.
Examples:
```bash
# Find relevant prompts by keyword
python3 -m src.prompt_toolkit.interactive find "code documentation"

# Get the best prompt for a task
python3 -m src.prompt_toolkit.interactive get "Create a unit test for my function"

# Use interactive mode to get help with a code review
python3 -m src.prompt_toolkit.interactive get "code review" --interactive
```

Workflows chain multiple prompts together to accomplish complex tasks. They are defined in .workflow.yaml files inside the /prompt_workflows directory.
How to Run a Workflow:
Use the src/prompt_toolkit/workflow.py script to execute a workflow file.
```bash
# Example: Transform a user story into code and tests
python3 -m src.prompt_toolkit.workflow run prompt_workflows/user-story-to-code-test.workflow.yaml

# Example: Generate a content marketing campaign
python3 -m src.prompt_toolkit.workflow run prompt_workflows/content-marketing-campaign.workflow.yaml
```

The testing framework validates a prompt's output against a set of assertions using a real LLM. Tests are defined in .test.yaml files in the /tests directory.
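A test file might be sketched as follows. The field names and assertion types here are hypothetical; consult the test schema in `schemas/` for the real structure.

```yaml
# Hypothetical sketch of a .test.yaml file; field names are illustrative
# only — see schemas/ for the authoritative test schema.
prompt: unit-test-generator
cases:
  - placeholders:
      function_name: add
    assertions:
      - type: contains
        value: "def test_"
```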
LLM Configuration:
- Install LLM libraries:

  ```bash
  pip install google-generativeai openai
  ```

- Set API keys: For the tool to work, you must set your LLM provider's API key as an environment variable.

  ```bash
  export GEMINI_API_KEY="YOUR_API_KEY"
  export OPENAI_API_KEY="YOUR_API_KEY"
  ```

- Configure defaults (optional): You can set a default LLM provider and model in an mcp_config.yaml file at the project root.

  ```yaml
  # mcp_config.yaml
  llm_config:
    default_provider: gemini
    default_model: gemini-pro
  ```
How to Run Tests:
Use the src/prompt_toolkit/testing.py script to run a test file.
```bash
# Run a test file using the default configured LLM
python3 -m src.prompt_toolkit.testing tests/unit_test_generator.test.yaml

# Override the LLM provider and model for a specific run
python3 -m src.prompt_toolkit.testing tests/unit_test_generator.test.yaml --llm-provider openai --llm-model gpt-4
```

The library includes a server that exposes its tools over the Model Context Protocol (MCP), allowing clients like the Gemini CLI to connect to it.
Running the Server:
Once you have installed dependencies and built the index, you can start the server:
```bash
python3 -m src.prompt_toolkit.mcp_server
```

Connecting with Gemini CLI:
To connect the Gemini CLI to the server, create a .gemini/settings.json file in the root of this project with the following content:
```json
{
  "mcpServers": {
    "prompt-library": {
      "command": "python3",
      "args": [
        "-m",
        "src.prompt_toolkit.mcp_server"
      ],
      "transport": "stdio",
      "trust": "trusted"
    }
  }
}
```

With this file in place, running /mcp in the Gemini CLI from this directory will connect to the server and list its available tools.
```
prompt-library/
├── data/              # CSV / JSON bulk exports
├── docs/              # Website source code
├── prompts/           # Individual YAML prompts split by domain
├── prompt_workflows/  # Definitions for multi-step prompt sequences
├── schemas/           # YAML schemas for prompts, tests, and workflows
├── src/               # Python source code for all tooling
│   └── prompt_toolkit/
└── tests/             # Definitions for prompt test cases
```
All prompts adhere to a standard YAML format for consistency and machine readability.
| Field | Description |
|---|---|
| `role` | Persona framing the assistant |
| `objective` | One-sentence goal of the task |
| `requirements` | Bullet list of must-haves |
| `placeholders` | List of dynamic tokens like `{topic}` |
| `output_format` | Expected structure (e.g., Markdown table) |
| `tags` | Optional keywords for search |
The official schema is defined in schemas/prompt.schema.yaml.
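As an illustration of how the `placeholders` field might be consumed, here is a minimal sketch. The prompt contents below are invented, and the real substitution logic lives in the toolkit under `src/prompt_toolkit`.

```python
# Hypothetical sketch: fill a prompt's {placeholder} tokens with str.format.
# The dict below mirrors the field table above; its contents are invented.
prompt = {
    "role": "Senior software engineer",
    "objective": "Generate a unit test for {function_name}.",
    "placeholders": ["function_name"],
}

# Substitute each placeholder token with a concrete value.
objective = prompt["objective"].format(function_name="parse_config")
print(objective)  # -> Generate a unit test for parse_config.
```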
We love new prompts! Please read CONTRIBUTING.md for style guidelines and open a PR. If you’re unsure, create a Discussion first.
MIT © Julius Brussee
If you use this library in academic work, please cite it.