As you become more proficient with AI Builder, understanding and leveraging its event-driven architecture can help you build more sophisticated, powerful, and efficient applications. This guide explores advanced topics focused on event-driven patterns and their practical applications.

Event-Driven Architecture (EDA)

Event-driven architecture is the foundation of AI Builder’s flexibility:
  • Events as First-Class Citizens: All system and user actions generate events
  • Decoupled Components: Services communicate through events, not direct calls
  • Asynchronous Processing: Actions occur independently of event producers
  • Scalability: Components can scale independently based on event load
  • Extensibility: New capabilities can subscribe to existing event streams
In Prisme.ai, events flow through the system as messages containing:
  • An event type (e.g., message.created, user.login)
  • A payload with event-specific data
  • Metadata about the source, timestamp, and routing information
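For example, a message.created event might be shaped like this (a sketch; the exact metadata fields depend on your deployment):
type: message.created
payload:
  conversationId: "abc-123"
  text: "Hello!"
source:
  workspaceId: "my-workspace"
  userId: "user-42"
createdAt: "2025-01-15T10:30:00Z"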

Working with Events

1. Emitting Events

In automations, you can emit events to trigger other processes:
- emit:
    event: user-action-completed
    payload:
      userId: "{{user.id}}"
      action: "profile-update"
      timestamp: "{{now}}"
Blocks can also emit events when users interact with them:
- slug: Action
  text: Update Profile
  type: event
  value: user-update-profile
  payload:
    section: "personal-info"
These events flow through the system and can trigger other automations or be recorded for analysis.
2. Listening for Events

Automations can be triggered by specific events:
slug: "process-profile-update"
name: "Process Profile Update"
when:
  events:
    - user-update-profile
do:
  - set:
      name: payload
      value: "{{event.payload}}"
  - fetch:
      url: /api/profiles/update
      method: POST
      body: "{{payload}}"
This creates a chain of actions that can flow through your application, each step triggered by the completion of previous steps.
3. Accessing Event History

View event history in several ways:
  • Activity Tab: See recent events in your workspace
  • Event Explorer: Query and filter events for analysis
  • Elasticsearch/OpenSearch: Advanced querying for deeper analysis
The complete event stream provides valuable insights into application usage, performance, and user behavior.
4. Analyzing Event Patterns

Advanced analytics can reveal important patterns:
  • User Journeys: Track how users move through your application
  • Bottlenecks: Identify where processes slow down
  • Error Patterns: Detect recurring issues
  • Usage Trends: See how usage evolves over time
  • Feature Adoption: Measure which features are most used
These insights drive continuous improvement of your applications.

Advanced Event Analytics

Every event in your workspace is stored in Elasticsearch/OpenSearch, enabling custom analysis:

System Mapping

Create visual maps of your systems based on actual usage:
  • Track event flows between components
  • Visualize user journeys through your application
  • Identify unused features or dead-end paths
  • Discover unexpected usage patterns
  • Map integration points with external systems

Usage Analytics

Understand how users engage with your applications:
  • Measure feature adoption and frequency of use
  • Track user session patterns and duration
  • Identify popular and underutilized features
  • Analyze conversion funnels and drop-off points
  • Segment users by behavior patterns

Performance Monitoring

Track system performance metrics:
  • Measure response times for different operations
  • Identify bottlenecks in processing flows
  • Track API usage and latency
  • Monitor automation execution times
  • Analyze resource utilization patterns

Pattern Discovery

Find meaningful patterns in your event data:
  • Discover common user behavior sequences
  • Identify correlations between events
  • Detect anomalies that may indicate issues
  • Recognize seasonal or time-based patterns
  • Find optimization opportunities

Event Mapping for Analytics

As part of Prisme.ai’s event-driven architecture, events carry dynamically structured payloads. To ensure consistent and efficient aggregation in both Elasticsearch and OpenSearch, it’s essential to explicitly define the mapping for the payload fields you query.

Without proper mapping, you may encounter issues such as:
  • Aggregation inconsistencies between Elasticsearch and OpenSearch
  • Fields interpreted with incorrect data types
  • Performance issues with complex queries
  • Limitations in available analysis capabilities

Reliability and Consistency

Ensures uniform data treatment:
  • Consistent field types across all events
  • Prevents errors caused by automatic inference
  • Guarantees that aggregations work properly
  • Maintains data integrity over time

Performance Optimization

Improves query and analysis speed:
  • Optimizes indexing for known data structures
  • Enables more efficient storage patterns
  • Improves complex aggregation performance
  • Reduces processing overhead for queries

Maintenance and Scalability

Simplifies ongoing management:
  • Easy-to-read YAML configuration
  • Workspace-specific mapping definitions
  • Simplified schema evolution
  • Better documentation of data structures

Cross-Platform Compatibility

Works consistently across search engines:
  • Identical behavior in Elasticsearch and OpenSearch
  • Consistent query results across environments
  • Reliable migrations between search technologies
  • Future-proof for search engine updates
To implement explicit event mapping, add configuration to your workspace YAML:
events:
  types:
    usage:
      schema:
        usage:
          type: object
          title: Usage
          properties:
            total_tokens:
              type: number
            completion_tokens:
              type: number
            prompt_tokens:
              type: number
            cost:
              type: number
              format: double
            firstTokenDuration:
              type: number
This example defines the schema for the usage event type, specifying that fields like usage.total_tokens and usage.cost should be treated as numeric values with specific formats.
When implementing:
  1. Identify the event types requiring explicit mapping
  2. Define their schema with appropriate data types
  3. Add the configuration to your Workspace
  4. Emit sample events (see the sketch below)
  5. Test with analytical queries to verify proper behavior
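For step 4, a sample emission matching the schema above might look like this in an automation (values are illustrative):
- emit:
    event: usage
    payload:
      usage:
        total_tokens: 1250
        completion_tokens: 450
        prompt_tokens: 800
        cost: 0.0031
        firstTokenDuration: 320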
When you declare a new field in the events mapping, Elasticsearch/OpenSearch picks it up automatically for all subsequent events.
When you update an existing field in the events mapping, however, the index must be rolled over before the change takes effect for future events.
Here’s how you might use mapped events for advanced analytics:
// Query to analyze token usage patterns over time
const usageAnalytics = await searchEvents({
  query: {
    bool: {
      filter: [
        { term: { type: "usage" } }
      ]
    }
  },
  aggs: {
    usage_over_time: {
      date_histogram: {
        field: "createdAt",
        calendar_interval: "day"
      },
      aggs: {
        total_tokens: {
          sum: {
            field: "aggPayload.usage.total_tokens"
          }
        },
        avg_cost: {
          avg: {
            field: "aggPayload.usage.cost"
          }
        },
        models: {
          terms: {
            field: "aggPayload.model"
          }
        }
      }
    }
  }
});

// Results can be used for visualizations, billing, or optimization
With proper mapping, these aggregations will be fast and accurate, providing valuable insights into application usage and performance.
Note that aggregations must target aggPayload.* fields to benefit from the custom mapping.

Practical Event-Driven Patterns

Track and analyze user behavior:
# Emit events for user actions
- emit:
    event: user-action
    payload:
      action: page-view
      page: "{{page.slug}}"
      sessionId: "{{session.id}}"
      timestamp: "{{now}}"
These events can be analyzed to:
  • Create user journey maps
  • Identify popular features
  • Measure engagement
  • Detect unusual behavior
  • Personalize experiences based on usage patterns
Monitor and optimize AI model usage:
# Emit events for AI model usage
- emit:
    event: model-usage
    payload:
      model: "{{model.name}}"
      prompt_tokens: "{{result.usage.prompt_tokens}}"
      completion_tokens: "{{result.usage.completion_tokens}}"
      total_tokens: "{{result.usage.total_tokens}}"
      duration: "{{result.duration}}"
      cost: "{{result.cost}}"
This data enables:
  • Cost tracking and optimization (see the sketch below)
  • Performance benchmarking
  • Model selection refinement
  • Usage pattern analysis
  • Identifying optimization opportunities
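For example, cost tracking can be automated by reacting to these events. The following sketch assumes the conditions instruction; the threshold and alert event name are illustrative:
slug: "flag-expensive-calls"
name: "Flag Expensive Calls"
when:
  events:
    - model-usage
do:
  - conditions:
      "{{event.payload.cost}} > 0.5":
        - emit:
            event: cost-alert
            payload:
              model: "{{event.payload.model}}"
              cost: "{{event.payload.cost}}"
      default: []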
Implement complex business processes through event chains:
# First automation: Document processing initiated
- emit:
    event: document-processing-started
    payload:
      documentId: "{{document.id}}"
      stage: "initiated"
      
# Second automation (triggered by the first event)
when:
  events:
    - document-processing-started
do:
  # Process document...
  - emit:
      event: document-processing-completed
      payload:
        documentId: "{{event.payload.documentId}}"
        stage: "processed"
        
# Third automation (triggered by the second event)
when:
  events:
    - document-processing-completed
do:
  # Generate summary...
  - emit:
      event: document-summary-ready
      payload:
        documentId: "{{event.payload.documentId}}"
        stage: "summarized"
This approach creates modular, maintainable workflows that are:
  • Easily extendable with new steps
  • Resilient to failures (steps can be retried individually, as sketched below)
  • Transparent (full visibility into process state)
  • Analyzable (measure performance of each step)
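For instance, resilience can come from a dedicated error handler: a failing step emits a failure event, and a separate automation retries it (the event names and payload fields here are illustrative):
# Hypothetical error handler: retry a failed processing step
slug: "retry-document-processing"
name: "Retry Document Processing"
when:
  events:
    - document-processing-failed
do:
  # Re-trigger the processing step for the same document
  - emit:
      event: document-processing-started
      payload:
        documentId: "{{event.payload.documentId}}"
        stage: "retry"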