On-Premise Deployment
Since the Prisme.ai infrastructure runs on Kubernetes, the easiest way to deploy it on premise is with Kubernetes or Docker, for which we can provide all the required manifests and configuration files.
Public HTTPS endpoints
For a Kubernetes cluster, it is preferable to deploy a load balancer acting as a Kubernetes Ingress Controller, for which we provide the needed configuration. This kind of load balancer is natively provided by cloud providers supporting Kubernetes (e.g. Google Cloud, Amazon Web Services, Azure).
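As an illustration, an Ingress resource terminating HTTPS for one of the internal services could look like the sketch below. The host name, backend service name, and TLS secret are hypothetical placeholders, not the actual Prisme.ai manifests:

```yaml
# Illustrative sketch — host, service and secret names are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prismeai-console
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - studio.example.com
      secretName: prismeai-tls     # TLS certificate stored as a Kubernetes Secret
  rules:
    - host: studio.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: console      # hypothetical internal service name
                port:
                  number: 80
```

With such a resource in place, the Ingress Controller handles TLS termination and routes public traffic to the matching internal service.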
For a Docker setup, it is up to the customer to deploy the services that redirect public addresses to the internal Prisme.ai services over a secure HTTPS channel (e.g. reverse proxies such as Traefik or NGINX).
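For example, a Traefik-based setup might terminate TLS in front of the internal services, roughly as in this docker-compose sketch. Service names, the domain, the contact email, and the image name are illustrative placeholders, not the actual Prisme.ai configuration:

```yaml
# Illustrative sketch — names, domain, email and image are placeholders.
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.tlschallenge=true
      - --certificatesresolvers.le.acme.email=admin@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
    ports:
      - "443:443"
    volumes:
      - ./letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro

  console:                       # placeholder for an internal Prisme.ai service
    image: example/console       # hypothetical image name
    labels:
      - traefik.enable=true
      - traefik.http.routers.console.rule=Host(`studio.example.com`)
      - traefik.http.routers.console.entrypoints=websecure
      - traefik.http.routers.console.tls.certresolver=le
```

Traefik reads the routing rules from the container labels, obtains a certificate via Let's Encrypt, and proxies HTTPS traffic to the internal service.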
Our services also require several databases, whose deployment we do not support.
For the simplest setup, where different services can share the same database instances, the following services must be provided:
- MongoDB (Prisme.ai accounts, user messages, workflows, dataflow data)
- Elasticsearch (monitoring logs & metrics, user messages for statistics computation, crawler-index documents)
- Disk volumes (NLU models)
- Redis (volatile data for real time messaging purposes, internal & persistent crawler data)
- S3-compatible object storage (assistant pictures & other media rendered in its workflows)
- PostgreSQL (livechat internal data & user messages transiting through livechat)
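For this shared-instance setup, the required datastores could be provisioned with a docker-compose sketch along these lines. Image tags and credentials are illustrative, and MinIO stands in for any S3-compatible object storage:

```yaml
# Illustrative sketch — versions and credentials are placeholders.
services:
  mongodb:
    image: mongo:6
    volumes:
      - mongo-data:/data/db          # accounts, user messages, workflows, dataflow data

  elasticsearch:
    image: elasticsearch:8.11.1
    environment:
      - discovery.type=single-node   # logs, metrics & crawler-index documents

  redis:
    image: redis:7
    volumes:
      - redis-data:/data             # real-time messaging & crawler data

  postgres:
    image: postgres:15
    environment:
      - POSTGRES_PASSWORD=change-me  # livechat internal data
    volumes:
      - pg-data:/var/lib/postgresql/data

  minio:                             # S3-compatible object storage for media
    image: minio/minio
    command: server /data
    volumes:
      - minio-data:/data

volumes:
  mongo-data:
  redis-data:
  pg-data:
  minio-data:
```

Disk volumes for NLU models would be mounted into the relevant services in the same way as the named volumes above.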
However, to guarantee data isolation and the ability to scale the infrastructure, we recommend splitting the following services:
- 1 MongoDB for Prisme.ai dashboard data (accounts, user messages, workflows)
- 1 MongoDB for dataflow data
- 1 Redis for real-time mechanisms (volatile data)
- 1 Redis for crawler internal data (webpage metadata such as the URL or preview image, but no page content other than section titles)
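The recommended split amounts to running separate instances side by side, for example as in this illustrative fragment (service names are placeholders). Note that the real-time Redis holds only volatile data, while the crawler Redis should persist its data:

```yaml
# Illustrative sketch of the recommended split — names are placeholders.
services:
  mongo-core:          # accounts, user messages, workflows
    image: mongo:6
    volumes:
      - mongo-core-data:/data/db

  mongo-dataflow:      # dataflow data, isolated from dashboard data
    image: mongo:6
    volumes:
      - mongo-dataflow-data:/data/db

  redis-events:        # volatile real-time messaging data; no persistence needed
    image: redis:7

  redis-crawler:       # persistent crawler metadata
    image: redis:7
    command: redis-server --appendonly yes   # enable AOF persistence
    volumes:
      - redis-crawler-data:/data

volumes:
  mongo-core-data:
  mongo-dataflow-data:
  redis-crawler-data:
```

Keeping dashboard and dataflow data in distinct MongoDB instances also lets each be scaled and backed up independently.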