This guide walks you through a fresh installation of a self-hosted Prisme.ai platform on v27, configured directly with the new platform products (Agent Creator, LLM Gateway, Storage — vector store, Governe, …) — without any legacy Knowledges / AI Store data to migrate. If you are upgrading an existing instance from legacy products, follow Migration v27 instead. The installation is split into four phases:
1. **Infrastructure setup**: deploy the platform with Helm, image tags, and environment variables.
2. **First connection**: log in as super admin and create your organization.
3. **Products initialization**: import the new platform workspaces in the correct order.
4. **Post-install configuration**: declare LLM providers, models, vector store, and organization settings.
This page assumes the platform itself is already deployed (databases, ingress, secrets management, etc.). If you are not at that point yet, start from the Self-Hosting Overview and choose a deployment path: Helm, Docker, or a cloud provider. For the broader product setup sequence (Governe, LLM Gateway, Storage, Agent Creator, Insights, Builder, Helper Agents), see Configuring Products.

1. Infrastructure setup

1.1 Use the unified Console image

In your Helm core-values.yaml, configure the prismeai-console block to use the unified platform image:
```yaml
prismeai-console:
  enabled: true
  image:
    repository: registry.gitlab.com/prisme.ai/prisme.ai/prisme.ai-platform
    tag: ...
```

1.2 Pin all service & app tags

Pin all core service and app image tags to the same v27 release. The available tags are listed on the Prisme.ai releases page.
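For example, the same release tag can be pinned across services in your values files. The block names below are illustrative (only prismeai-console and prismeai-runtime are confirmed above), so check the charts you actually deploy:

```yaml
# Illustrative sketch: pin every image tag to the same v27 release.
# Block names and structure depend on your actual Helm charts.
prismeai-console:
  image:
    tag: ...
prismeai-runtime:
  image:
    tag: ...
# ...repeat for every other core service and app in your values files.
```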

1.3 Configure LLM & vector store credentials

LLM Gateway and Storage workspaces consume credentials through WORKSPACE_SECRET_* environment variables exposed on prismeai-runtime.
The string after `llm-gateway_` or `storage_` is the secret name as consumed by the LLM Gateway and Storage workspaces. The names you choose here must match the secret names referenced from the workspace configuration in step 4.
Declare every LLM provider credential as a `WORKSPACE_SECRET_llm-gateway_*` variable. Examples:
| Provider | Variable |
| --- | --- |
| AWS Bedrock | `WORKSPACE_SECRET_llm-gateway_awsBedrockAccessKey` |
| OpenAI | `WORKSPACE_SECRET_llm-gateway_openaiApiKey` |
| Azure OpenAI | `WORKSPACE_SECRET_llm-gateway_azureOpenaiApiKey` |
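These variables are exposed on prismeai-runtime. A hedged sketch of what that can look like in core-values.yaml (the exact env-injection mechanism depends on your chart version, and values should come from your secret manager rather than be committed in clear text):

```yaml
# Illustrative sketch: expose provider credentials to the runtime.
prismeai-runtime:
  env:
    WORKSPACE_SECRET_llm-gateway_openaiApiKey: "<openai-api-key>"
    WORKSPACE_SECRET_llm-gateway_azureOpenaiApiKey: "<azure-openai-api-key>"
```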
Alternatively, these secrets can be entered directly from the Secrets page of the LLM Gateway and Storage workspaces, without touching environment variables.
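The naming convention above can be sketched as follows (illustrative only; the platform's actual parsing may differ):

```python
# Illustrative only: shows how a WORKSPACE_SECRET_* variable name maps to
# the workspace slug and secret name, per the convention described above.
def parse_workspace_secret(env_var: str) -> tuple[str, str]:
    prefix = "WORKSPACE_SECRET_"
    if not env_var.startswith(prefix):
        raise ValueError(f"not a workspace secret variable: {env_var}")
    # Workspace slugs use hyphens, so the first underscore after the prefix
    # separates the slug from the secret name.
    workspace, _, secret = env_var[len(prefix):].partition("_")
    return workspace, secret

print(parse_workspace_secret("WORKSPACE_SECRET_llm-gateway_openaiApiKey"))
# ('llm-gateway', 'openaiApiKey')
```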

1.4 Deploy

Deploy the Helm chart and wait for all pods to roll out successfully.
```shell
helm -n prismeai-core upgrade --install prismeai-core -f core-values.yaml prismeai/prismeai-core
helm -n prismeai-apps upgrade --install prismeai-apps -f apps-values.yaml prismeai/prismeai-apps
```
Verify the platform is healthy via the Readiness API before continuing.

2. First connection

1. **Log in as super admin**: once all services are deployed, sign in to the platform with a super admin account, i.e. one of the accounts listed under `config.admins` in your Helm values.
2. **Create your organization**: from the onboarding screen, create your organization:
   - Name: the display name shown across the UI.
   - Technical name: the unique identifier used throughout the platform. It cannot be changed after creation.
3. **Open Builder**: after creation, you should be redirected to a near-empty platform with two links in the left menu, Builder and Govern. Open Builder.

3. Products initialization

The v27 platform products are imported in four sequential groups via the Platform workspace bulk import:

- **base1**: foundation apps (Custom Code, Prisme.ai API, …).
- **base2**: extended base (Crawler, NLU, RedisSearch, …).
- **extended**: legacy AI products (Knowledges, AI Store, …), still required as a dependency for the v27 platform products; they will disappear in a future release.
- **one-product**: main v27 products (LLM Gateway, Storage, Governe, Agent Creator, …).
For each group:
1. **Open the Platform workspace**: the Platform workspace is only visible to super admins.
2. **Trigger the bulk import**: navigate to Settings → Versions → Platform Pull, then select the Release vXXX platform repository.
3. **Select the group and start the import**: pick the group, start the import, and close the modal.
4. **Monitor progress**: from the Activity feed of the Platform workspace, wait for the `workspaces.bulkImport.completed` event before moving on.
5. **Repeat for the next group**: import the groups in order: base1 → base2 → extended → one-product.

Always wait for `workspaces.bulkImport.completed` (with no errors) before importing the next group — each group depends on the previous one.

4. Post-install configuration

Once all groups are imported, configure each new workspace in the order below.

4.1 Governe — set the admin token

The Governe workspace needs an adminAccessToken to call platform APIs on behalf of administrators.
1. **Generate a long-term token**: from your super admin account, generate a long-term API token (Settings → Account → API tokens).
2. **Paste it into the workspace**: in Builder, open the Governe workspace → Settings → Secrets, paste the value into `adminAccessToken`, then Save.
See Configuring Governe for the full token command and workspace setup.

4.2 LLM providers

1. **Open the Govern app**: from the left menu, open Govern → Models → Providers tab.
2. **Declare each provider**: click Add provider and pick the provider type (OpenAI, Azure OpenAI, AWS Bedrock, …). For each provider:
   - Set the secret names to match the secrets you exposed via `WORKSPACE_SECRET_llm-gateway_*` environment variables (or stored in the LLM Gateway workspace Secrets).
   - Configure provider-specific options (region, endpoint, deployment name, …).
   Hover the (i) icon next to a secret input to see the matching environment variable name.
3. **(Optional) Configure secrets from the UI**: if you'd rather store the secrets on the platform itself instead of through env vars:
   1. Open Builder in a new tab.
   2. Open the LLM Gateway workspace → Settings → Secrets.
   3. Add the secrets, using the same names as referenced by your providers.
4. **Save**: save the providers configuration.
All LLM providers can be exported and re-imported from the three-dot menu — useful for replicating configuration across environments.

4.3 LLM models

1. **Open the Models tab**: still in Govern → Models, switch to the Models tab.
2. **Declare each model**: click Add model and select the provider, model identifier, capabilities (completion / embedding / vision), and any default parameters.
3. **Verify with the Test button**: for every model:
   - Open it and click Test; the model response is shown below the button.
   - For embedding models, the dimensions option must be set explicitly.
Models can also be exported / imported in bulk from the three-dot menu.

4.4 Vector store

1. **Open the Infrastructure page**: from the Govern menu, open Infrastructure.
2. **Pick a driver**: select your vector store driver, Elasticsearch or OpenSearch.
3. **Set credential secret names**: make sure the credential secret names match the values you exposed via `WORKSPACE_SECRET_storage_*` environment variables. Hover the (i) icon next to a secret input to see the matching environment variable name. To configure secrets directly from the UI instead:
   1. Open Builder in a new tab.
   2. Open the Storage workspace → Settings → Secrets.
   3. Add the secrets.
4. **Set the index prefix**: set `vector_store_index_prefix` to the prefix you want for your RAG indexes (e.g. `prod_rag`).
5. **Save and Test**: save the configuration, then click Test. If the test fails, review your environment variables, secrets, or database connectivity and try again.
The raw vector store configuration lives in Storage workspace settings — accessible via the Edit button at the top right of the Storage workspace.
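As with the LLM Gateway credentials, the Storage secrets can be exposed through runtime environment variables. A sketch with illustrative secret names only (check the (i) hints in the UI for the names your driver actually expects):

```yaml
# Illustrative sketch: the secret names below are examples, not canonical.
prismeai-runtime:
  env:
    WORKSPACE_SECRET_storage_elasticsearchUrl: "<https://elastic.example.com:9200>"
    WORKSPACE_SECRET_storage_elasticsearchApiKey: "<api-key>"
```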

4.5 Infrastructure checkup

While on the Infrastructure page, use the Test button at the bottom of the Services block to verify connectivity to all platform databases.

5. Organization configuration

5.1 Allowed models

1. **Open your organization**: open Organizations and select the organization you created in step 2.
2. **Open Agent controls**: navigate to Agent controls.
3. **Pick allowed models**: select all models you want to make available (you can use Select all). Models can be reordered by drag-and-drop.
4. **Set defaults**: choose the default Completions and Embeddings models for the organization.
5. **Save**: scroll to the bottom of the page and click Save.

5.2 Join rules

Join rules control which users automatically become members of your organization.
1. **Open Join Rules**: navigate to the Join Rules page.
2. **Add a rule**: add a rule with:
   - Field: Email
   - Operator: matches (wildcard)
   - Value: `*` to match all authenticated users
   Or configure a more specific filter if you only want a subset of users to join automatically. Other users can still join with an invite code or be invited manually by an org admin.
3. **Pick the assigned role**: on a fresh install there is no legacy role mapping to fall back on, so set Assign role explicitly — typically `org:member` for the default rule, with stricter rules above it for admins / builders.
For more advanced rules — assigning different roles or groups based on email or SSO metadata — see the Join Rules documentation.
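The matches (wildcard) operator behaves like shell-style globbing on the selected field. A sketch of that matching logic (an assumption about the exact semantics; Python's fnmatch is used here purely for illustration):

```python
from fnmatch import fnmatch

# Illustrative: wildcard join-rule matching against user emails.
rule_value = "*@acme.com"   # a more specific filter than the catch-all "*"

print(fnmatch("alice@acme.com", rule_value))   # True
print(fnmatch("bob@other.org", rule_value))    # False
print(fnmatch("bob@other.org", "*"))           # "*" matches every user
```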

5.3 Appearance

Configure your platform branding: name, favicon, colors, terms of use, …
All appearance settings can be exported / imported via the three-dot menu in the top-right corner.

5.4 Menu editor

A default menu template is available here.
1. **Download the template**: download the JSON file from the URL above.
2. **Import it**: open the Menu Editor, click the three-dot menu in the top-right corner, and import the JSON file.
3. **Save and refresh**: save your changes and refresh the page; the left menu should now be populated.

Next steps

- **Configuring Products**: reference for the full product setup sequence and per-product details.
- **Migration v27**: already running an older version? Use the migration guide instead.
- **Updates**: plan future upgrades and rollbacks.
- **Backups**: configure backups before going to production.