This guide walks you through a fresh installation of a self-hosted Prisme.ai platform on v27, configured directly with the new platform products (Agent Creator, LLM Gateway, Storage — vector store, Governe, …) — without any legacy Knowledges / AI Store data to migrate. If you are upgrading an existing instance from legacy products, follow Migration v27 instead. The installation is split into the five phases below.
This page assumes the platform itself is already deployed (databases, ingress, secrets management, etc.). If you are not at that point yet, start from the Self-Hosting Overview and choose a deployment path: Helm, Docker, or a cloud provider. For the broader product setup sequence (Governe, LLM Gateway, Storage, Agent Creator, Insights, Builder, Helper Agents), see Configuring Products.
1. Infrastructure setup
1.1 Use the unified Console image
In your Helm `core-values.yaml`, configure the `prismeai-console` block to use the unified platform image:
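The exact block depends on your chart, but it might look like the following sketch — the repository path and tag are illustrative and should be replaced with the values from the Prisme.ai releases page:

```yaml
# core-values.yaml — illustrative structure; adapt key names to your chart
prismeai-console:
  image:
    repository: registry.example.com/prismeai/prismeai-console  # hypothetical registry path
    tag: "27.0.0"  # pin to the v27 release you are installing
```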
1.2 Pin all service & app tags
Pin all core service and app image tags to the same v27 release. The available tags are listed on the Prisme.ai releases page.

1.3 Configure LLM & vector store credentials
LLM Gateway and Storage workspaces consume credentials through `WORKSPACE_SECRET_*` environment variables exposed on `prismeai-runtime`.
The string after `llm-gateway_` or `storage_` is the secret name as it will be consumed by the LLM Gateway and Storage workspaces. The names you choose here must match the secret names referenced from the workspace configuration in step 4.
Declare every LLM provider credential as a `WORKSPACE_SECRET_llm-gateway_*` variable. Examples:

| Provider | Variable |
|---|---|
| AWS Bedrock | `WORKSPACE_SECRET_llm-gateway_awsBedrockAccessKey` |
| OpenAI | `WORKSPACE_SECRET_llm-gateway_openaiApiKey` |
| Azure OpenAI | `WORKSPACE_SECRET_llm-gateway_azureOpenaiApiKey` |
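As a sketch, such variables could be exposed on the runtime deployment like this — the key layout and values are illustrative and depend on your chart; in production, reference a Kubernetes secret rather than inlining values:

```yaml
# core-values.yaml — illustrative layout; adapt to your chart's env conventions
prismeai-runtime:
  env:
    # consumed by the LLM Gateway workspace as the secret "openaiApiKey"
    WORKSPACE_SECRET_llm-gateway_openaiApiKey: "sk-..."
    # consumed by the Storage workspace as a secret named "vectorStoreApiKey" (hypothetical name)
    WORKSPACE_SECRET_storage_vectorStoreApiKey: "..."
```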
1.4 Deploy
Deploy the Helm chart and wait for all pods to roll out successfully.

2. First connection
Log in as super admin
Once all services are deployed, sign in to the platform with a super admin account — one of the accounts listed under `config.admins` in your Helm values.

Create your organization
From the onboarding screen, create your organization:
- Name — the display name shown across the UI.
- Technical name — the unique identifier used throughout the platform. It cannot be changed after creation.
3. Products initialization
The v27 platform products are imported in four sequential groups via the Platform workspace bulk import:

base1
Foundation apps (Custom Code, Prisme.ai API, …).
base2
Extended base (Crawler, NLU, RedisSearch, …).
extended
Legacy AI products (Knowledges, AI Store, …) — still required as a dependency for the v27 platform products. They will disappear in a future release.
one-product
Main v27 products (LLM Gateway, Storage, Governe, Agent Creator, …).
Trigger the bulk import
Navigate to Settings → Versions → Platform Pull, then select the Release vXXX platform repository.
Monitor progress
From the Activity feed of the Platform workspace, wait for the `workspaces.bulkImport.completed` event before moving on.

4. Post-install configuration

Once all groups are imported, configure each new workspace in the order below.

4.1 Governe — set the admin token
The Governe workspace needs an `adminAccessToken` to call platform APIs on behalf of administrators.
Generate a long-term token
From your super admin account, generate a long-term API token (Settings → Account → API tokens).
4.2 LLM providers
Declare each provider
Click Add provider and pick the provider type (OpenAI, Azure OpenAI, AWS Bedrock, …). For each provider:
- Set the secret names to match the secrets you exposed via `WORKSPACE_SECRET_llm-gateway_*` environment variables (or stored in the LLM Gateway workspace Secrets).
- Configure provider-specific options (region, endpoint, deployment name, …).
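The contract to keep in mind is the name mapping: everything after the `llm-gateway_` prefix is the secret name the provider entry references. A hypothetical provider entry could look like this — the field names are illustrative, not the actual schema:

```yaml
# Hypothetical provider entry — field names are illustrative
name: openai-prod
type: openai
apiKeySecret: openaiApiKey  # resolved from WORKSPACE_SECRET_llm-gateway_openaiApiKey
options:
  # provider-specific options, e.g. for Azure OpenAI:
  # endpoint: https://my-resource.openai.azure.com
  # deploymentName: my-deployment
```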
(Optional) Configure secrets from the UI
If you’d rather store the secrets on the platform itself instead of through env vars:
- Open Builder in a new tab.
- Open the LLM Gateway workspace → Settings → Secrets.
- Add the secrets, using the same names as referenced by your providers.
All LLM providers can be exported and re-imported from the three-dot menu — useful for replicating configuration across environments.
4.3 LLM models
Declare each model
Click Add model and select the provider, model identifier, capabilities (completion / embedding / vision), and any default parameters.
Models can also be exported / imported in bulk from the three-dot menu.
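For orientation, an exported model declaration might look roughly like this — field names and values are illustrative, not the actual export schema:

```yaml
# Hypothetical model entry — illustrative only
name: gpt-4o
provider: openai-prod  # must reference a declared provider
capabilities: [completion, vision]
defaultParameters:
  temperature: 0.7
```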
4.4 Vector store
Set credential secret names
Make sure the credential secret names match the values you exposed via `WORKSPACE_SECRET_storage_*` environment variables.

To configure secrets directly from the UI instead:

- Open Builder in a new tab.
- Open the Storage workspace → Settings → Secrets.
- Add the secrets.
Set the index prefix
Set `vector_store_index_prefix` to the prefix you want for your RAG indexes (e.g. `prod_rag`).

The raw vector store configuration lives in the Storage workspace settings — accessible via the Edit button at the top right of the Storage workspace.
4.5 Infrastructure checkup
While on the Infrastructure page, use the Test button at the bottom of the Services block to verify connectivity to all platform databases.

5. Organization configuration
5.1 Allowed models
Pick allowed models
Select all models you want to make available (you can use Select all). Models can be reordered by drag-and-drop.
5.2 Join rules
Join rules control which users automatically become members of your organization.

Add a rule
Add a rule with:
- Field: Email
- Operator: matches (wildcard)
- Value: `*` to match all authenticated users
5.3 Appearance
Configure your platform branding: name, favicon, colors, terms of use, …

All appearance settings can be exported / imported via the three-dot menu in the top-right corner.
5.4 Menu editor
A default menu template is available here.

Import it
Open the Menu Editor, click the three-dot menu in the top-right corner, and import the JSON file.
Next steps
Configuring Products
Reference for the full product setup sequence and per-product details.
Migration v27
Already running an older version? Use the migration guide instead.
Updates
Plan future upgrades and rollbacks.
Backups
Configure backups before going to production.