Overview
Learn how to deploy additional microservices for specialized Prisme.ai applications such as Custom Code, Crawler, and AI Knowledge
Prisme.ai’s architecture includes specialized microservices that support specific applications such as Custom Code, Crawler, and Search Engine. This guide explains how to deploy these additional microservices in your self-hosted environment.
License Requirement
The microservices discussed in this guide are available based on your subscription license. Ensure your license includes access to these components before proceeding with deployment.
Access Requirements
You will need valid GitLab credentials to access the Docker images for these microservices. If you don’t have them yet, please contact support@prisme.ai to obtain a GitLab username and token.
These credentials are typically provided as a GitLab Deploy Token with appropriate permissions to pull the required images.
Deployment Strategy
We will deploy the apps microservices in the same Kubernetes cluster as the core microservices. However, for better resource isolation and management, we recommend using a separate namespace for these additional services.
Prerequisites
Each microservice has specific requirements that must be fulfilled before deployment. Review the prerequisites for each service you plan to deploy:
prismeai-crawler
Web crawling and indexing service
prismeai-functions
Custom code execution environment
prismeai-searchengine
Search functionality for crawled content
Deployment Process
Follow these steps to deploy the apps microservices in your Kubernetes cluster:
Retrieve the Helm Charts
You have two options for accessing the required Helm charts:
Option 1: Download the charts directly
Download the Helm chart from the following URL:
Extract the archive to access the chart files.
Option 2: Add as a Helm repository
Then generate a values file template:
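The original commands are missing here; a plausible sketch follows. The repository URL and chart name are assumptions — substitute the values provided with your license:

```shell
# Hypothetical repository URL -- use the one provided by Prisme.ai support
helm repo add prismeai https://helm.prisme.ai/charts
helm repo update

# Write the chart's default values to a local file for customization
# (chart name is illustrative)
helm show values prismeai/prismeai-apps > values.yaml
```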
Configure Values File
Edit the `values.yaml` file to include connection details and credentials for external services:
Key configuration areas include:
- Container registry credentials: Your GitLab access details
- Service-specific settings: Configuration for each microservice
- Database configurations: Connection details for required databases
- Resource allocations: CPU, memory, and storage requirements
- Network settings: Service endpoints and ports
Refer to each service’s documentation for specific configuration requirements.
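As an illustration only, a values file covering these areas might look like the sketch below. All key names, service names, and endpoints are assumptions — the authoritative schema is in each service’s documentation:

```yaml
# Illustrative sketch -- actual keys depend on the chart version
global:
  imagePullSecrets:
    - name: gitlab-registry        # secret holding your GitLab deploy token

prismeai-crawler:
  env:
    REDIS_URL: redis://redis.databases.svc.cluster.local:6379  # example endpoint
  resources:
    requests:
      cpu: 250m
      memory: 512Mi

prismeai-functions:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
```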
Create Namespace
Create a dedicated namespace for the apps microservices:
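Assuming the namespace is called `apps` (as the rest of this guide does):

```shell
kubectl create namespace apps
```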
This separation provides better resource isolation and management compared to deploying everything in the default namespace.
Deploy using Helm
Choose the appropriate deployment command based on how you retrieved the charts:
If you downloaded the charts (Option 1):
If you added the repo (Option 2):
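The install commands are missing above; a hedged sketch of both variants follows. The release name `apps`, the chart directory, and the chart name `prismeai/prismeai-apps` are assumptions:

```shell
# Option 1: install from the extracted chart directory
helm install apps ./prismeai-apps -n apps -f values.yaml

# Option 2: install from the Helm repository
helm install apps prismeai/prismeai-apps -n apps -f values.yaml
```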
The deployment will create all necessary Kubernetes resources in the `apps` namespace.
Verify Deployment
Check that all pods are running correctly:
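Assuming the `apps` namespace used throughout this guide:

```shell
kubectl get pods -n apps
```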
Ensure all pods show `Running` status and are ready (e.g., `1/1` readiness).
You can get more detailed information about any pod with:
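For example, replacing `<pod-name>` with a pod from the previous listing:

```shell
kubectl describe pod <pod-name> -n apps
```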
Testing the Microservices
After deployment, test each microservice to ensure it’s functioning correctly:
prismeai-crawler and prismeai-searchengine
Follow the testing procedures in the prismeai-crawler documentation.
Typical tests include:
- Creating a crawl job for a test website
- Verifying content is properly indexed
- Testing search functionality with simple queries
- Checking crawler logs for any errors
prismeai-functions
Refer to the prismeai-functions testing guide.
Key validation steps:
- Executing a simple function through the API
- Verifying resource limits are properly enforced
- Testing error handling for invalid code
- Checking integration with other Prisme.ai components
prismeai-llm
Use the prismeai-llm testing procedures to verify functionality.
Important tests include:
- Testing model inference with a simple prompt
- Verifying token counting functionality
- Checking integration with supported models
- Validating logging and monitoring features
Troubleshooting Common Issues
Image Pull Errors
Symptom: Pods show `ImagePullBackOff` status
Possible causes:
- Invalid GitLab credentials
- Incorrect image repository URL
- Network connectivity issues
Resolution steps:
- Verify your GitLab credentials are correct
- Check the image repository URL in your values file
- Create a Kubernetes secret with your credentials:
- Update your deployment to use this secret
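The secret-creation command for the steps above might look like the following. The secret name `gitlab-registry` and the registry host `registry.gitlab.com` are assumptions — use the registry host your images are actually pulled from:

```shell
kubectl create secret docker-registry gitlab-registry \
  --namespace apps \
  --docker-server=registry.gitlab.com \
  --docker-username=<gitlab-username> \
  --docker-password=<gitlab-token>
```

Then reference the secret via `imagePullSecrets` in your values file and re-run the Helm deployment.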
Configuration Errors
Symptom: Pods start but quickly crash or enter CrashLoopBackOff
Possible causes:
- Missing or incorrect environment variables
- Invalid database connection details
- Insufficient permissions or resources
Resolution steps:
- Check pod logs for specific error messages:
- Verify database connectivity from within the cluster
- Ensure all required environment variables are set
- Check resource allocations match the service requirements
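For the log-inspection step above:

```shell
kubectl logs <pod-name> -n apps

# After a crash loop, the previous container's logs usually hold the root cause
kubectl logs <pod-name> -n apps --previous
```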
Service Connectivity Issues
Symptom: Services start but can’t communicate with each other
Possible causes:
- Incorrect service names or ports
- Network policies blocking traffic
- DNS resolution problems
Resolution steps:
- Verify service endpoints using:
- Test connectivity using a debug pod:
- Check network policies that might be restricting traffic
- Ensure CoreDNS is functioning properly
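The endpoint and connectivity checks above can be sketched as follows; service names and ports are placeholders:

```shell
# List service endpoints in the apps namespace
kubectl get endpoints -n apps

# Launch a throwaway debug pod for in-cluster testing
kubectl run debug --rm -it --image=busybox:1.36 -n apps -- sh

# Inside the debug pod:
#   nslookup <service-name>                 # check DNS resolution
#   wget -qO- http://<service-name>:<port>/ # check HTTP reachability
```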
Upgrading Microservices
When new versions of the apps microservices become available:
Update Helm Repository
If using the Helm repository approach:
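Refresh the local chart index (repository name `prismeai` is an assumption from the earlier setup):

```shell
helm repo update
helm search repo prismeai
```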
Check for Changes
Review the changes in the new version:
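One way to review changes, assuming the illustrative chart name used earlier, is to diff the new default values against your current file:

```shell
helm show values prismeai/prismeai-apps --version <new-version> > values-new.yaml
diff values.yaml values-new.yaml
```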
Update your values file as needed to accommodate any new configuration options.
Perform the Upgrade
Upgrade the deployment with:
Or if using the downloaded chart:
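A sketch of both upgrade variants, assuming the release name `apps` and chart names from the install step:

```shell
# From the Helm repository
helm upgrade apps prismeai/prismeai-apps -n apps -f values.yaml

# From the downloaded chart directory
helm upgrade apps ./prismeai-apps -n apps -f values.yaml
```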
Verify Upgrade
Check that all pods are running the new version:
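For example, listing each pod alongside its container image tags:

```shell
kubectl get pods -n apps \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
```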
And verify functionality using the testing procedures mentioned above.
Next Steps
After successfully deploying the apps microservices:
Custom Code App
Set up custom code capabilities
Set Up Web Crawling
Configure crawling & search services
Configure LLM Access
Set up access to various local language models
For any issues or questions during the deployment process, contact support@prisme.ai for assistance.