Deploying Prisme.ai on OpenShift
Guide and best practices for deploying Prisme.ai in a self-hosted environment on OpenShift.
Red Hat OpenShift provides a robust Kubernetes-based platform ideal for enterprises deploying Prisme.ai. This guide covers key considerations, deployment steps, and best practices specifically tailored for OpenShift environments.
OpenShift Prerequisites
Before starting deployment, ensure:
- You have a running OpenShift cluster (version 4.12+).
- The OpenShift CLI (`oc`) is installed and configured.
- Administrator-level privileges for creating namespaces and resources.
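You can quickly verify these prerequisites from a terminal:

```bash
oc version     # client and server versions (server should report 4.12 or later)
oc whoami      # confirms you are logged in with the expected account
oc get nodes   # worker nodes should be in the Ready state
```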
Recommended OpenShift Infrastructure
Deploy Prisme.ai utilizing OpenShift native resources and external services:
OpenShift Cluster Configuration
- Recommended Cluster Resources:
  - 3 master nodes (control plane), 3-5 worker nodes
  - Worker nodes: 4 vCPU / 16 GB RAM minimum per node
- Best Practices:
  - Enable the Cluster Autoscaler for optimal resource management (see the sketch below).
  - Configure the cluster for multi-zone or multi-region deployments for high availability.
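To enable autoscaling, OpenShift pairs a cluster-wide ClusterAutoscaler with per-MachineSet MachineAutoscalers. The sketch below is a minimal example; the node limits, replica bounds, and MachineSet name are placeholders to adapt to your cluster.

```yaml
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    maxNodesTotal: 10                  # adjust to your capacity planning
---
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-autoscaler              # example name
  namespace: openshift-machine-api
spec:
  minReplicas: 3
  maxReplicas: 5
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: <your-worker-machineset>     # replace with an existing MachineSet
```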
Database (MongoDB)
- Options:
  - Self-hosted MongoDB on OpenShift via StatefulSets
  - Managed MongoDB services (e.g., MongoDB Atlas with VPC peering)
- Recommended Configuration:
  - 3-node MongoDB replica set
- Best Practices:
  - Persistent storage using OpenShift Storage Classes
  - Regular database backups via CronJobs (see the sketch below)
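A minimal sketch of a nightly backup CronJob using mongodump; the namespace, image, secret, and PVC names are assumptions to replace with your own, and backups can alternatively be streamed to object storage.

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mongodb-backup
  namespace: prismeai                  # example namespace
spec:
  schedule: "0 2 * * *"                # nightly at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: mongodump
              image: mongo:6.0         # any image providing mongodump
              command: ["/bin/sh", "-c"]
              args:
                - mongodump --uri="$MONGODB_URI" --archive=/backup/prismeai-$(date +%F).gz --gzip
              env:
                - name: MONGODB_URI
                  valueFrom:
                    secretKeyRef:
                      name: mongodb-credentials   # assumed secret holding the connection string
                      key: uri
              volumeMounts:
                - name: backup
                  mountPath: /backup
          volumes:
            - name: backup
              persistentVolumeClaim:
                claimName: mongodb-backup-pvc     # assumed PVC dedicated to backups
```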
Search Engine (Elasticsearch)
- Options:
  - Elasticsearch Operator for OpenShift
- Recommended Configuration:
  - Elasticsearch Operator-managed cluster, minimum 3 nodes with 8 GB RAM (see the sketch below)
- Best Practices:
  - Utilize Persistent Volume Claims (PVCs) for data durability
  - Regular backups with OpenShift backup strategies
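Assuming the Elastic Cloud on Kubernetes (ECK) operator is installed from OperatorHub, a cluster matching the recommendation above can be declared roughly as follows; the name, version, and storage size are placeholders.

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: prismeai-es                    # example name
  namespace: prismeai
spec:
  version: 8.13.0                      # pick a version supported by your operator
  nodeSets:
    - name: default
      count: 3                         # three nodes for resilience
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 100Gi         # adjust to expected index size
      podTemplate:
        spec:
          containers:
            - name: elasticsearch
              resources:
                requests:
                  memory: 8Gi
                  cpu: 2
                limits:
                  memory: 8Gi
```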
Redis Cache
- Options:
  - Redis Operator for OpenShift
  - Managed Redis services
- Recommended Configuration:
  - Redis cluster with 3 nodes for high availability
- Best Practices:
  - Use PVC-backed storage
  - Integrate monitoring with the Prometheus and Grafana Operators (see the sketch below)
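As a sketch of the monitoring integration, a ServiceMonitor can expose Redis metrics to Prometheus, assuming user workload monitoring is enabled and a metrics exporter Service exists; the label selector and port name below are assumptions.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: redis-metrics
  namespace: prismeai
spec:
  selector:
    matchLabels:
      app: redis                       # assumed label on the Redis metrics Service
  endpoints:
    - port: metrics                    # assumed name of the exporter port
      interval: 30s
```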
Object Storage (S3-Compatible)
- Options:
  - External S3-compatible storage such as MinIO or AWS S3
- Best Practices:
  - Configure separate buckets for public assets, private uploads, and model storage
  - Enable lifecycle policies for efficient storage management (see the example below)
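For example, a lifecycle rule can abort stale multipart uploads; the bucket name is illustrative, and the same call works against MinIO by adding --endpoint-url.

```bash
# Example lifecycle policy: clean up incomplete multipart uploads after 7 days.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "abort-incomplete-uploads",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    }
  ]
}
EOF

# Apply to the private uploads bucket (name is illustrative); add --endpoint-url for MinIO.
aws s3api put-bucket-lifecycle-configuration \
  --bucket prismeai-uploads \
  --lifecycle-configuration file://lifecycle.json
```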
Persistent Storage (OpenShift Storage Classes)
- Options:
  - OpenShift Data Foundation (ODF, formerly OpenShift Container Storage/OCS), NFS, Ceph RBD
- Recommended Configuration:
  - Use storage classes supporting RWX access for shared volumes (e.g., ODF)
- Best Practices:
  - Schedule regular volume snapshots (see the sketch below)
  - Monitor storage performance using OpenShift’s monitoring stack
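Volume snapshots can be declared with the standard CSI snapshot API; the snapshot class and PVC names below are placeholders that depend on your storage backend.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mongodb-data-snapshot          # example name
  namespace: prismeai
spec:
  volumeSnapshotClassName: <your-volumesnapshotclass>   # list with: oc get volumesnapshotclass
  source:
    persistentVolumeClaimName: mongodb-data-0           # PVC to snapshot
```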
OpenShift Deployment Steps
Create Project and Quotas
Create a dedicated project namespace and apply resource quotas:
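For example (the project name and quota values are illustrative; size them to your workload):

```bash
# Create a dedicated project for Prisme.ai
oc new-project prismeai

# Apply a resource quota to bound CPU, memory, and PVC usage
oc apply -n prismeai -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: prismeai-quota
spec:
  hard:
    requests.cpu: "16"
    requests.memory: 64Gi
    limits.cpu: "32"
    limits.memory: 128Gi
    persistentvolumeclaims: "20"
EOF
```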
Set up Operators and Databases
Install required Operators (Elasticsearch, Redis):
- Navigate to OperatorHub in the OpenShift Console and install the Elasticsearch and Redis Operators.
- Deploy MongoDB via StatefulSets or an external managed service (a minimal StatefulSet sketch follows).
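The sketch below shows a minimal self-hosted MongoDB StatefulSet; the image, storage class, and sizes are assumptions, and replica-set initiation and authentication are omitted for brevity (an operator or managed service handles these for you).

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongodb
  namespace: prismeai
spec:
  clusterIP: None                      # headless service for stable pod DNS names
  selector:
    app: mongodb
  ports:
    - name: mongo
      port: 27017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
  namespace: prismeai
spec:
  serviceName: mongodb
  replicas: 3                          # matches the recommended 3-node replica set
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:6.0             # example image and version
          args: ["--replSet", "rs0", "--bind_ip_all"]
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: data
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: <your-storage-class>   # use an available storage class
        resources:
          requests:
            storage: 50Gi
```

Once the three pods are running, connect to one of them and initiate the replica set (rs.initiate()) and enable authentication before pointing Prisme.ai at it.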
Deploy Object Storage Integration
Configure access to your chosen object storage (MinIO, AWS S3) using secrets and config maps:
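For example (the secret, key, and bucket names below are illustrative; map them to the configuration keys expected by your values.yaml):

```bash
# Credentials for the S3-compatible endpoint
oc create secret generic prismeai-object-storage \
  -n prismeai \
  --from-literal=ACCESS_KEY=<access-key> \
  --from-literal=SECRET_KEY=<secret-key>

# Non-sensitive settings such as endpoint, region, and bucket names
oc create configmap prismeai-object-storage-config \
  -n prismeai \
  --from-literal=ENDPOINT=https://minio.yourdomain.com \
  --from-literal=REGION=us-east-1 \
  --from-literal=UPLOADS_BUCKET=prismeai-uploads
```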
Configure Routes and DNS
Set up OpenShift Routes for external access:
- API: api.yourdomain.com
- Console: studio.yourdomain.com
- Pages: wildcard route *.pages.yourdomain.com
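As a sketch, edge-terminated routes can be created with the CLI; the service names are placeholders, and the wildcard route requires the IngressController to admit wildcard routes (routeAdmission.wildcardPolicy: WildcardsAllowed). Point your DNS records, including a wildcard record for *.pages.yourdomain.com, at the router's load balancer.

```bash
# API and Console routes (service names are placeholders)
oc create route edge api \
  --service=prismeai-api-gateway --hostname=api.yourdomain.com -n prismeai
oc create route edge studio \
  --service=prismeai-console --hostname=studio.yourdomain.com -n prismeai

# Wildcard route for Pages: with Subdomain policy the route matches *.pages.yourdomain.com
oc create route edge pages \
  --service=prismeai-pages --hostname=wildcard.pages.yourdomain.com \
  --wildcard-policy=Subdomain -n prismeai
```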
Deploy Prisme.ai via Helm
Use Helm 3 to deploy Prisme.ai:
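A minimal sketch, assuming the chart repository URL and chart name published in the official Prisme.ai documentation (both are placeholders here):

```bash
# Add the Prisme.ai chart repository (replace the placeholder with the documented URL)
helm repo add prismeai <prismeai-helm-repo-url>
helm repo update

# Install or upgrade the release into the dedicated project
helm upgrade --install prismeai prismeai/<chart-name> \
  --namespace prismeai \
  -f values.yaml
```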
Ensure `values.yaml` reflects your environment settings.
Ingress & SSL with OpenShift Routes
Configure TLS certificates and termination via OpenShift routes:
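For example, a Route with edge termination can carry a custom certificate; the service and port names are placeholders, and cert-manager or the router's default wildcard certificate are equally valid options.

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: api
  namespace: prismeai
spec:
  host: api.yourdomain.com
  to:
    kind: Service
    name: prismeai-api-gateway         # placeholder service name
  port:
    targetPort: http                   # placeholder port name on the Service
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect   # redirect HTTP to HTTPS
    # certificate/key/caCertificate may hold a custom chain;
    # omit them to fall back to the router's default certificate
    certificate: |-
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
    key: |-
      -----BEGIN PRIVATE KEY-----
      ...
      -----END PRIVATE KEY-----
```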
Security Best Practices
RBAC & Project Isolation
- Utilize OpenShift’s built-in RBAC to enforce role-based permissions.
- Ensure clear separation between different Prisme.ai environments.
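For example, project-scoped permissions can be granted with the built-in cluster roles (group and user names are illustrative):

```bash
# Grant a team edit rights scoped to the Prisme.ai project only
oc adm policy add-role-to-group edit prismeai-developers -n prismeai

# Read-only access for an auditor account
oc adm policy add-role-to-user view auditor@yourdomain.com -n prismeai
```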
Network Policies
- Implement strict NetworkPolicies for inter-service communication (see the sketch below).
- Use the cluster network plugin (OVN-Kubernetes or OpenShift SDN) for advanced network security controls such as egress firewalls.
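A minimal sketch: deny all ingress by default, then re-allow traffic inside the namespace and from the OpenShift router; verify the ingress namespace label used on your cluster before applying the last policy.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: prismeai
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: prismeai
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}              # pods within the same namespace
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-openshift-ingress
  namespace: prismeai
spec:
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              policy-group.network.openshift.io/ingress: ""   # router namespace label; adjust for your network plugin
```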
Secrets Management
- Securely manage sensitive configurations using OpenShift Secrets.
- Regularly rotate and audit secrets usage.
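For example, a credential can be rotated in place and its consumers restarted (resource names are placeholders):

```bash
# Rotate a credential: dry-run + apply replaces the secret atomically
oc create secret generic prismeai-object-storage \
  -n prismeai \
  --from-literal=ACCESS_KEY=<new-access-key> \
  --from-literal=SECRET_KEY=<new-secret-key> \
  --dry-run=client -o yaml | oc apply -f -

# Restart the consumers so they pick up the new values
oc rollout restart deployment/prismeai-api-gateway -n prismeai

# Audit who can read secrets in the project
oc adm policy who-can get secrets -n prismeai
```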
Integrated Monitoring
- Leverage OpenShift’s integrated Prometheus and Grafana for proactive monitoring.
- Configure alerts and dashboards tailored to Prisme.ai workloads.
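To scrape and alert on workloads in the Prisme.ai project, enable monitoring for user-defined projects and declare alert rules; the alert below is an illustrative example, not a Prisme.ai-specific rule.

```yaml
# Enable monitoring for user-defined projects
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
---
# Example alert on frequent pod restarts in the Prisme.ai project (threshold is illustrative)
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: prismeai-alerts
  namespace: prismeai
spec:
  groups:
    - name: prismeai.rules
      rules:
        - alert: PrismeaiPodRestarting
          expr: increase(kube_pod_container_status_restarts_total{namespace="prismeai"}[15m]) > 3
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "Pod restarting frequently in the prismeai namespace"
```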