Sentry Slack Integration Guide

This guide provides step-by-step instructions for integrating Sentry with Slack for your Ocean application.

Prerequisites

  • Sentry Team plan or higher (for Slack integration)
  • Admin access to your Slack workspace
  • Admin access to your Sentry organization

Quick Setup

1. Install Slack Integration in Sentry

  1. Navigate to Settings → Integrations in Sentry
  2. Search for "Slack"
  3. Click "Install" or "Configure" if already installed
  4. Click "Add Workspace"
  5. You'll be redirected to Slack to authorize the integration
  6. Select your workspace and click "Allow"

2. Configure Alert Destinations

After installation, configure which Slack channels receive alerts:

  1. Go to Settings → Integrations → Slack
  2. Click on your workspace
  3. Set default notification settings:
    • Default Channel: #alerts-errors
    • Include Tags: Yes
    • Include Rules: Yes

Setting Up Alert Rules

Critical Production Errors

Create an alert for critical errors that need immediate attention:

Name: '🚨 Critical Production Error'
Environment: production
Conditions:
  When: An event is seen
Filters:
  - Level: Error or Fatal
  - Environment: production
  - Tags:
      - error.type NOT IN [NetworkError, ChunkLoadError]
      - NOT error.message CONTAINS "extension://"
Actions:
  - Send a Slack notification to #alerts-critical
  - Notify on-call engineer (if configured)
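
The two noisy-error exclusions above can also be enforced client-side, so those events never reach Sentry at all. Below is a minimal sketch using the SDK's beforeSend hook; it assumes a Next.js setup with @sentry/nextjs and a NEXT_PUBLIC_SENTRY_DSN environment variable, so adjust the import and variable names to your project:

// sentry.client.config.ts: drop known-noisy errors before they are sent (sketch)
import * as Sentry from '@sentry/nextjs'

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN, // assumed env var name
  environment: process.env.NODE_ENV,
  beforeSend(event, hint) {
    const error = hint.originalException
    const message = error instanceof Error ? error.message : String(error ?? '')

    // Mirror the alert rule's filters: ignore network/chunk-load noise...
    if (error instanceof Error && ['NetworkError', 'ChunkLoadError'].includes(error.name)) {
      return null // returning null drops the event
    }
    // ...and errors originating from browser extensions
    if (message.includes('extension://')) {
      return null
    }
    return event
  },
})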

High Error Rate Alert

Monitor for spikes in error rates:

Name: '📈 High Error Rate Detected'
Dataset: Errors
Conditions:
  When: Error rate is above 5%
  For: 10 minutes
  Compared to: Same time yesterday
Actions:
  - Send a Slack notification to #alerts-errors
  - Include comparison chart

Slow Transaction Alert

Monitor performance degradation:

Name: '🐌 Slow Transaction Performance'
Dataset: Transactions
Conditions:
  When: p95(transaction.duration) > 3000ms
  For: 5 minutes
Filter: transaction.op:navigation
Actions:
  - Send a Slack notification to #alerts-performance
  - Include affected transactions
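
Transaction alerts like this one only fire if performance tracing is enabled in the SDK, since p95(transaction.duration) is computed from transaction events. A minimal sketch of the relevant option, again assuming @sentry/nextjs; the sample rates are illustrative and should be tuned to your traffic volume:

// sentry.client.config.ts: enable tracing so transaction.duration data exists (sketch)
import * as Sentry from '@sentry/nextjs'

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  // Sample a fraction of transactions in production, everything elsewhere (illustrative values)
  tracesSampleRate: process.env.NODE_ENV === 'production' ? 0.2 : 1.0,
})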

Database Performance Alert

This alert is specific to your pg_net monitoring:

Name: "🗄️ Database Performance Issue"
Dataset: Custom Metrics
Conditions:
When: Any of these tags appear:
- db.slow_query = true
- pg_net_status = critical
- pg_net_status = warning
Actions:
- Send a Slack notification to #alerts-critical
- Page on-call DBA
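
For this rule to match, the application has to attach those tags when it reports database events. The sketch below shows one way that might look; the function names, thresholds, and the use of @sentry/node are illustrative assumptions, not part of the existing codebase:

// Report database events with the tags the alert rule filters on (sketch)
import * as Sentry from '@sentry/node'

export function reportSlowQuery(sql: string, durationMs: number) {
  if (durationMs < 1000) return // illustrative threshold: only report queries slower than 1s

  Sentry.captureMessage('Slow database query detected', {
    level: 'warning',
    tags: { 'db.slow_query': 'true' }, // matches the db.slow_query filter above
    extra: {
      sql: sql.slice(0, 500), // truncate to keep the event small
      duration_ms: durationMs,
    },
  })
}

export function reportPgNetStatus(status: 'ok' | 'warning' | 'critical', queueSize: number) {
  if (status === 'ok') return

  Sentry.captureMessage(`pg_net backlog: ${queueSize} queued requests`, {
    level: status === 'critical' ? 'error' : 'warning',
    tags: { pg_net_status: status }, // matches the pg_net_status filter above
    extra: { queue_size: queueSize },
  })
}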

Advanced Configuration

Custom Slack Message Formatting

Create a custom integration using webhooks for more control:

// src/lib/sentry-slack-formatter.ts
export function formatSentryEventForSlack(event: any) {
  const emoji = getEmojiForLevel(event.level)

  return {
    text: `${emoji} ${event.title}`,
    attachments: [
      {
        color: getColorForLevel(event.level),
        title: event.title,
        title_link: event.web_url,
        fields: [
          {
            title: 'Environment',
            value: event.environment,
            short: true,
          },
          {
            title: 'Level',
            value: event.level,
            short: true,
          },
          {
            title: 'User',
            value: event.user?.email || 'Unknown',
            short: true,
          },
          {
            title: 'Release',
            value: event.release || 'N/A',
            short: true,
          },
        ],
        text: event.culprit,
        footer: 'Sentry',
        footer_icon: 'https://sentry.io/favicon.ico',
        ts: Math.floor(Date.now() / 1000),
      },
    ],
  }
}
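
To actually deliver a message built with this formatter, one option is to post it to a Slack incoming webhook. This is a sketch rather than part of the built-in integration; SLACK_WEBHOOK_URL is an assumed environment variable holding the webhook for the target channel:

// src/lib/slack-notifier.ts: send a formatted event to a Slack incoming webhook (sketch)
import { formatSentryEventForSlack } from './sentry-slack-formatter'

export async function notifySlack(event: any): Promise<void> {
  const webhookUrl = process.env.SLACK_WEBHOOK_URL // assumed env var name
  if (!webhookUrl) return

  const payload = formatSentryEventForSlack(event)

  const res = await fetch(webhookUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  })

  if (!res.ok) {
    // Avoid throwing inside an alerting path; log and move on
    console.error(`Slack webhook responded with ${res.status}`)
  }
}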

function getEmojiForLevel(level: string): string {
  switch (level) {
    case 'fatal':
      return '💀'
    case 'error':
      return '🔴'
    case 'warning':
      return '⚠️'
    case 'info':
      return 'ℹ️'
    default:
      return '📌'
  }
}

function getColorForLevel(level: string): string {
  switch (level) {
    case 'fatal':
      return '#E91E63'
    case 'error':
      return '#F44336'
    case 'warning':
      return '#FF9800'
    case 'info':
      return '#2196F3'
    default:
      return '#9E9E9E'
  }
}

Alert Routing by Error Type

Configure different channels for different error types:

// Routing configuration
const ALERT_ROUTING = {
  // Critical errors go to #alerts-critical
  critical: {
    channel: '#alerts-critical',
    conditions: [
      'payment failed',
      'subscription error',
      'database connection lost',
      'pg_net critical',
    ],
  },

  // Security issues go to #alerts-security
  security: {
    channel: '#alerts-security',
    conditions: [
      'authentication failed',
      'unauthorized access',
      'csrf token mismatch',
      'rate limit exceeded',
    ],
  },

  // Performance issues go to #alerts-performance
  performance: {
    channel: '#alerts-performance',
    conditions: ['slow query', 'high memory usage', 'transaction timeout', 'api response slow'],
  },
}
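
A small helper can turn that table into a channel lookup. The sketch below assumes it lives in the same module as ALERT_ROUTING, matches keywords case-insensitively against the error message, and falls back to #alerts-errors when nothing matches:

// Resolve the Slack channel for a given error message (sketch)
export function resolveAlertChannel(message: string): string {
  const normalized = message.toLowerCase()

  for (const { channel, conditions } of Object.values(ALERT_ROUTING)) {
    if (conditions.some((keyword) => normalized.includes(keyword))) {
      return channel
    }
  }

  // Default destination when no keyword matches
  return '#alerts-errors'
}

// Example: resolveAlertChannel('Stripe payment failed for org 42') returns '#alerts-critical'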

Slack Notification Actions

Set up quick actions in Slack notifications:

// Add action buttons to critical alerts
export function addSlackActions(message: any, event: any) {
  message.attachments[0].actions = [
    {
      type: 'button',
      text: 'View in Sentry',
      url: event.web_url,
      style: 'primary',
    },
    {
      type: 'button',
      text: 'View User',
      url: `https://app.supabase.com/project/YOUR_PROJECT/auth/users/${event.user?.id}`,
      style: 'default',
    },
    {
      type: 'button',
      text: 'Acknowledge',
      url: `${event.web_url}/acknowledge/`,
      style: 'danger',
    },
  ]

  return message
}

Alert Rule Templates

1. Authentication Failures

Name: '🔐 Authentication Failure Spike'
Conditions:
  When: Count of "Auth failed" > 20
  Within: 10 minutes
Filters:
  - message CONTAINS "Auth failed"
  - environment = production
Actions:
  - Send to #alerts-security
  - Create Jira ticket (if integrated)
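
This rule keys on the literal string "Auth failed", so the application needs to use that wording consistently when it reports failed logins. A hedged sketch; the helper name and extra fields are illustrative:

// Report a failed login with the message and tags the alert rule filters on (sketch)
import * as Sentry from '@sentry/node'

export function reportAuthFailure(email: string, reason: string) {
  Sentry.captureMessage('Auth failed', {
    level: 'warning',
    tags: { type: 'security', action: 'login' },
    extra: { email, reason },
  })
}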

2. Payment Processing Errors

Name: '💳 Payment Processing Failed'
Conditions:
  When: An event is seen
Filters:
  - message CONTAINS ["stripe", "payment", "billing"] AND "failed"
  - level = error
Actions:
  - Send to #alerts-critical
  - Page on-call engineer
  - Create incident in PagerDuty

3. pg_net Backlog Alert

Name: '📊 pg_net Queue Backlog'
Conditions:
  When: An event is seen
Filters:
  - message CONTAINS "pg_net backlog"
  - tags.pg_net_status IN [warning, critical]
Actions:
  - Send to #alerts-critical
  - Include queue size in message
  - Trigger cleanup job

4. Memory Leak Detection

Name: '💾 High Memory Usage'
Conditions:
  When: Custom metric memory.usage > 90%
  For: 30 minutes
Actions:
  - Send to #alerts-performance
  - Include memory graph
  - Suggest restart
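
This rule assumes a memory.usage custom metric is already being reported. If custom metrics are not set up, a simpler event-based variant is to capture a message whenever heap usage crosses a threshold and alert on that message instead; the sketch below uses Node's process.memoryUsage() with an illustrative 90% threshold and check interval:

// Periodically report high memory usage so an event-based alert can fire (sketch)
import * as Sentry from '@sentry/node'

const CHECK_INTERVAL_MS = 60_000 // illustrative: check once a minute

setInterval(() => {
  const { heapUsed, heapTotal } = process.memoryUsage()
  const usagePercent = (heapUsed / heapTotal) * 100

  if (usagePercent > 90) {
    Sentry.captureMessage('High memory usage', {
      level: 'warning',
      tags: { memory_pressure: 'high' },
      extra: {
        heap_used_mb: Math.round(heapUsed / 1024 / 1024),
        usage_percent: Math.round(usagePercent),
      },
    })
  }
}, CHECK_INTERVAL_MS)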

Slack Channel Setup

#alerts-critical      - Immediate action required
#alerts-errors        - Application errors
#alerts-performance   - Performance degradation
#alerts-security      - Security events
#alerts-deployments   - Deploy notifications
#alerts-test          - Test environment alerts

Channel Configuration

For each alert channel:

  1. Set channel topic: "🚨 Automated alerts from Sentry - Do not post here"
  2. Pin important messages:
    • Alert response runbook
    • Escalation contacts
    • Common fixes guide
  3. Set up channel bookmarks:
    • Sentry dashboard
    • Supabase dashboard
    • Monitoring dashboard

Testing Your Integration

1. Send Test Alert

You can trigger a test alert from your application to verify that the Slack integration delivers messages:

// Test from your application
Sentry.captureMessage('Test Slack Alert', {
  level: 'error',
  tags: {
    test: true,
    channel: '#alerts-test',
  },
})

2. Verify Alert Routing

Test each alert type:

// Test critical alert
Sentry.captureException(new Error('Test critical error - payment failed'), {
  tags: { severity: 'critical', component: 'billing' },
})

// Test security alert
Sentry.captureMessage('Test security alert - Auth failed', {
  level: 'warning',
  tags: { type: 'security', action: 'login' },
})

// Test performance alert
Sentry.captureMessage('Test performance - Slow query detected', {
  level: 'info',
  tags: { db_slow_query: true, duration_ms: 5000 },
})

3. Test Alert Storm Protection

Sentry automatically groups similar errors, but you can test the behavior:

// This should only create one Slack notification
for (let i = 0; i < 100; i++) {
  Sentry.captureException(new Error('Test grouped error'))
}
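
Grouping can also be controlled explicitly with a fingerprint, which forces a family of errors into a single issue (and therefore a single notification). A minimal sketch; the fingerprint value is arbitrary:

// Force related errors into one Sentry issue regardless of stack trace (sketch)
Sentry.captureException(new Error('pg_net request failed: timeout'), {
  fingerprint: ['pg_net-request-failed'],
})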

Best Practices

1. Alert Fatigue Prevention

  • Set appropriate thresholds
  • Use error grouping effectively
  • Implement quiet hours for non-critical alerts
  • Review and tune alerts regularly

2. Message Clarity

Include essential information in every alert; the sketch after this list shows how to attach it:

  • What happened
  • Where it happened (service, function, line)
  • When it happened
  • Who was affected (user ID, org)
  • What action to take
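
Most of these fields come from context attached to the event with the standard scope APIs. A sketch of wiring them up; the service name, org tag, and release env var are illustrative assumptions:

// Attach the who/where context that alert messages rely on (sketch)
import * as Sentry from '@sentry/nextjs'

export function attachAlertContext(user: { id: string; email: string; orgId: string }) {
  // Who was affected
  Sentry.setUser({ id: user.id, email: user.email })
  Sentry.setTag('org_id', user.orgId)

  // Where it happened: service-level tags help routing and triage
  Sentry.setTag('service', 'ocean-web') // illustrative service name

  // Which release, for correlating alerts with deployments
  Sentry.setContext('deployment', {
    release: process.env.NEXT_PUBLIC_APP_VERSION ?? 'unknown', // assumed env var
  })
}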

3. Integration Maintenance

  • Review alert rules monthly
  • Archive old alert channels quarterly
  • Test critical alerts after each deployment
  • Document alert response procedures

Troubleshooting

Common Issues

  1. "Notifications not appearing in Slack"

    • Verify Slack integration is active in Sentry
    • Check channel permissions
    • Verify alert rules are enabled
  2. "Too many notifications"

    • Adjust alert thresholds
    • Enable alert grouping
    • Use rate limiting
  3. "Missing context in alerts"

    • Ensure user context is set
    • Add custom tags to errors
    • Include breadcrumbs (see the sketch below)
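
Breadcrumbs are recorded wherever something alert-relevant happens and show up on the event that triggers the notification. A minimal sketch using the standard addBreadcrumb API; the category and data are illustrative:

// Record the steps leading up to an error so they appear with the alert's event (sketch)
Sentry.addBreadcrumb({
  category: 'billing',
  message: 'Started Stripe checkout session',
  level: 'info',
  data: { plan: 'pro' }, // illustrative payload
})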

Debug Commands

Test Slack connection from Sentry:

  1. Go to Settings → Integrations → Slack
  2. Click "Test Configuration"
  3. Select a channel and send test message

Next Steps

  1. Set up PagerDuty integration for critical alerts
  2. Create runbooks for common alerts
  3. Implement alert acknowledgment workflow
  4. Set up Slack slash commands for quick actions