Configuring AI Knowledge
Set up the AI Knowledge product for RAG (Retrieval-Augmented Generation) with model orchestration, rate limits, and vector stores.
AI Knowledge is Prisme.ai’s product for agentic assistants powered by tools and retrieval-augmented generation (RAG). It enables teams to build agents that draw on internal knowledge in a variety of formats, interact with APIs via tools, and collaborate with other agents through context sharing, enabling true multi-agent workflows with robust LLM support and enterprise-grade configuration options.
This guide explains how to configure AI Knowledge in a self-hosted environment.
Core Capabilities
- Configure multi-model support with failover and fine-tuned prompts
- Automate agent provisioning via AI Builder
- Enforce limits, security, and monitoring
- Enable built-in tools such as summarization, search, the code interpreter, and web browsing
- Integrate with OpenSearch, Redis, or other vector stores
LLM Providers
Global Configuration
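Global configuration typically declares the LLM providers and credentials shared across projects. The sketch below is illustrative only: the key names (`llm`, `api_key`, `resources`) and the `{{secret.*}}` placeholders are assumptions, not the exact Prisme.ai schema, so check your instance's reference before copying.

```yaml
# Hypothetical provider block — field names are assumptions,
# adapt them to the schema used by your deployment.
llm:
  openai:
    api_key: '{{secret.openaiApiKey}}'   # credential stored as a workspace secret
  azure:
    resources:
      - resource: my-azure-resource      # placeholder Azure OpenAI resource name
        api_key: '{{secret.azureApiKey}}'
```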
Models Configuration
Configure all available models with descriptions, rate limits, and failover:
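A model entry usually pairs a description (shown to end users) with per-model rate limits and a fallback model. This is a minimal sketch under assumed key names (`rateLimits`, `failover`, and the limit fields are illustrative, not the verified schema):

```yaml
# Illustrative models block — key names are assumptions.
llm:
  openai:
    models:
      - name: gpt-4o
        description: General-purpose model for chat and RAG
        rateLimits:
          requestsPerMinute: 60        # throttle per-model request volume
          tokensPerMinute: 90000       # cap total token throughput
        failover:
          model: gpt-4o-mini           # fallback when rate-limited or unavailable
```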
Vector Store Configuration
To enable retrieval-based answers, configure a vector store:
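For example, with Redis as the vector store (field names here are assumptions for illustration; the connection URL is a placeholder for your own Redis service):

```yaml
# Hypothetical Redis vector store configuration.
vectorStore:
  provider: redis
  url: redis://redis-vector:6379       # placeholder service hostname
```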
Or with OpenSearch:
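As with the Redis example, the keys below are illustrative assumptions; substitute your cluster's endpoint and credentials:

```yaml
# Hypothetical OpenSearch vector store configuration.
vectorStore:
  provider: opensearch
  url: https://opensearch:9200         # placeholder cluster endpoint
  user: admin
  password: '{{secret.opensearchPassword}}'
```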
Tools and Capabilities
AI Knowledge agents gain advanced capabilities through tools, which let them search documents, summarize files, run computations, and browse the web.
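Tools are typically toggled per deployment or per project. The snippet below sketches what such a toggle block might look like; the `tools`/`enabled` key names are assumptions rather than the confirmed schema:

```yaml
# Hypothetical per-tool enablement flags.
tools:
  file_search:
    enabled: true
  file_summary:
    enabled: true
  code_interpreter:
    enabled: true
  web_search:
    enabled: false    # requires a Serper API key, see below
```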
file_search
RAG tool for semantic search within indexed documents.
file_summary
Summarizes entire files when explicitly requested.
documents_rag
Extracts context from project knowledge collections.
web_search
Optional tool enabled via Serper API key:
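A possible shape for that configuration, with an assumed key name for the Serper credential (verify against your instance's schema):

```yaml
# Illustrative web_search configuration — the credential key name is an assumption.
tools:
  web_search:
    enabled: true
    serperApiKey: '{{secret.serperApiKey}}'
```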
code_interpreter
Python tool for data manipulation and document-based computation.
image_generation
Uses DALL-E or an equivalent model if enabled in the LLM configuration.