Self-Hosted Reverse Proxy
Adding a reverse proxy in front of your Sentry deployment is strongly recommended for one big reason: you can fine-tune every configuration to fit your current setup. A dedicated reverse proxy that does SSL/TLS termination and also forwards the client IP address (which is close to impossible to get otherwise, since Sentry sits on a Docker Compose internal network) will give you the best Sentry experience.
Once you have set up a reverse proxy in front of your Sentry instance, you should modify the system.url-prefix in the config.yml file to match your new URL and protocol. You should also update the SSL/TLS section in the sentry/sentry.conf.py script; otherwise you may get CSRF-related errors when performing certain actions such as configuring integrations.
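For example, assuming your proxy serves Sentry at https://sentry.yourcompany.com (the placeholder hostname used throughout this guide), the relevant settings would look roughly like the following; the sentry/sentry.conf.py lines ship commented out in the default self-hosted configuration and rely on your proxy sending the X-Forwarded-Proto header:

# In config.yml
system.url-prefix: "https://sentry.yourcompany.com"

# In sentry/sentry.conf.py, when TLS is terminated at the proxy
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
SOCIAL_AUTH_REDIRECT_IS_HTTPS = True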
Keep in mind that this setup uses single nodes for all services, including Kafka. For larger loads, you'd need a beefy machine with lots of RAM and disk storage. To scale up even further, you are very likely to use clusters with a more complex tool, such as Kubernetes. Due to the very custom nature of self-hosted installations, we do not offer any recommendations or guidance around scaling up. We do what works for us for our thousands of customers over at sentry.io and would love to have you over when you feel your local install's maintenance becomes a burden instead of a joy.
Enabling HTTPS
We recommend doing TLS termination on your own dedicated load balancer or proxy. Although you can set it up in the nginx.conf file, this is not recommended because newer self-hosted releases might alter some configurations in that file. Some examples are available in the Reverse Proxy Examples section below.
Expose Only Ingest Endpoint Publicly
Certain self-hosted deployments require the dashboard to be accessible only via an internal network, but they also need to provide a public Sentry ingestion endpoint for client devices such as mobile and desktop apps. In that case, you can expose only these endpoints publicly:
/api/[1-9]\d*/envelope/ - Main endpoint for submitting events from SDKs
/api/[1-9]\d*/minidump/ - Endpoint for submitting minidumps from native SDKs
/api/[1-9]\d*/security/ - Endpoint for submitting security reports such as CSP errors
/api/[1-9]\d*/store/ - Old endpoint for submitting events from SDKs; it is deprecated
/api/[1-9]\d*/unreal/ - Endpoint for submitting crash reports from the Unreal Engine SDK
The [1-9]\d* part is a regular expression that matches the project ID, which you can find in the project's DSN. For example, a project with ID 42 would send envelopes to /api/42/envelope/.
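If you are fronting Sentry with NGINX (see the examples below), a minimal sketch of this restriction could look like the following; the upstream address your-sentry-ip:9000 matches the examples in this guide, and everything outside the ingest endpoints is rejected:

# Only proxy the ingest endpoints; everything else returns 404
location ~ ^/api/[1-9]\d*/(envelope|minidump|security|store|unreal)/ {
    include proxy_params;
    proxy_pass http://your-sentry-ip:9000;
}

location / {
    return 404;
}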
Rate Limiting
By default, Sentry does not handle rate limiting for incoming requests. The hosted version of Sentry (SaaS) has a feature called spike protection that can protect you from event floods. The code for that module is not available in the public sentry repository; it lives in the private getsentry repository instead.
For self-hosted deployments, it is recommended to have a rate limiter on your dedicated load balancer to prevent event floods. This is even more important if you expose your Sentry instance publicly to the internet.
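If you terminate TLS with NGINX as in the example below, a minimal rate limiting sketch using the stock limit_req module could look like this; the zone name, rate, and burst values are arbitrary placeholders you should tune for your own traffic:

# In the http context (for example at the top of the same conf.d file)
limit_req_zone $binary_remote_addr zone=sentry_ratelimit:10m rate=100r/s;

# Inside the server block, wrapping the existing proxy_pass
location / {
    limit_req zone=sentry_ratelimit burst=150 nodelay;
    include proxy_params;
    proxy_pass http://your-sentry-ip:9000;
}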
Health Checks
A health check endpoint is available at /_health/ over HTTP. It returns a 200 if Sentry is up, or a 500 along with a list of problems.
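For example, you can probe it with curl (assuming Sentry is reachable at your-sentry-ip:9000 as in the examples below); the -f flag makes the command exit with a non-zero status when the endpoint returns a 500:

curl -fsS http://your-sentry-ip:9000/_health/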
Reverse Proxy Examples
NGINX
We recommend installing NGINX since that's what we are using on sentry.io.
error_log /var/log/nginx/error.log warn;
# generated 2024-04-29, Mozilla Guideline v5.7, nginx 1.24.0, OpenSSL 3.0.13, modern configuration, no HSTS, no OCSP
# https://ssl-config.mozilla.org/#server=nginx&version=1.24.0&config=modern&openssl=3.0.13&hsts=false&ocsp=false&guideline=5.7
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl_certificate /etc/letsencrypt/live/sentry.yourcompany.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/sentry.yourcompany.com/privkey.pem;
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
ssl_session_tickets off;
ssl_dhparam /etc/letsencrypt/ffdhe2048.txt;
# modern configuration
ssl_protocols TLSv1.3;
ssl_prefer_server_ciphers off;
proxy_buffering on;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
location / {
include proxy_params;
proxy_pass http://your-sentry-ip:9000;
}
}
server {
server_name sentry.yourcompany.com;
listen 80;
listen [::]:80;
root /var/www/html;
# Allow certbot to do http-01 challenges
location /.well-known/ {
try_files $uri =404;
}
# otherwise redirect to HTTPS
location / {
return 301 https://$host$request_uri;
}
}
To use NGINX with an ACME server such as Let's Encrypt, refer to this blog post by NGINX.
It is also recommended to fine-tune NGINX for some performance benefits. You can refer to the performance tuning blog posts from NGINX.
Caddy
Caddy is an alternative to NGINX that automatically handles TLS certificate management via ACME. After you install Caddy, modify the Caddy configuration file, which resides at /etc/caddy/Caddyfile.
sentry.yourcompany.com {
reverse_proxy your-sentry-ip:9000 {
health_uri /_health/
health_status 2xx
header_up Host {upstream_hostport}
}
# By default, the TLS is acquired from Let's Encrypt
tls name@yourcompany.com
# If you have self-signed certificate
# tls /path/to/server-certificate.crt /path/to/server-certificate.key
header {
# Delete "Server" header
-Server
}
# To enable rate limiter, install additional module from
# https://github.com/mholt/caddy-ratelimit
# rate_limit {
# zone sentry {
# key {remote_host}
# window 1s
# events 100
# }
# }
# To expose only the ingest endpoints publicly, add the named matcher below and
# use it on the `reverse_proxy` directive (e.g. `reverse_proxy @ingest_endpoint your-sentry-ip:9000`)
# @ingest_endpoint {
#   path_regexp ^/api/[1-9]\d*/(envelope|minidump|security|store|unreal)/
# }
}
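After editing the Caddyfile, you can validate and apply it; this assumes you installed Caddy from the official packages, which ship a caddy systemd service:

caddy validate --config /etc/caddy/Caddyfile
sudo systemctl reload caddy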
For detailed documentation on Caddyfile configuration, see the Caddy documentation.
Traefik
Traefik is another reverse proxy that provides a lot of plugins and integrations out of the box. It also automatically handles TLS certificate management via ACME. After you install Traefik, add a configuration as follows (this example uses the YAML file provider; convert it to your preferred configuration provider as needed).
http:
  routers:
    sentry:
      entryPoints:
        - web # Assuming this is your HTTP entrypoint
        - websecure # Assuming this is your HTTPS entrypoint
      service: sentry@file
      rule: "Host(`sentry.yourcompany.com`)"
      # If you want to expose only the ingest endpoints publicly
      # rule: "Host(`sentry.yourcompany.com`) && PathPrefix(`/api/{id:[1-9]\d*}/envelope`, `/api/{id:[1-9]\d*}/minidump`, `/api/{id:[1-9]\d*}/security`, `/api/{id:[1-9]\d*}/store`, `/api/{id:[1-9]\d*}/unreal`)"
      tls:
        certResolver: letsencrypt # Assuming you have a TLS certificate resolver named "letsencrypt"
      # Enable middlewares as needed
      middlewares:
        - https_redirect@file
        - cors_headers@file # For handling browser clients
        - rate_limiter@file

  services:
    sentry:
      loadBalancer:
        servers:
          - url: "http://your-sentry-ip:9000"
        healthCheck:
          scheme: http
          path: /_health/
          interval: 30s
          timeout: 10s
        passHostHeader: true

  middlewares:
    https_redirect:
      redirectScheme:
        scheme: "https"
        port: "443"
        permanent: true

    cors_headers:
      headers:
        customResponseHeaders:
          # We can't remove the Server header with Traefik, but we can set it to some other value
          server: "Your Company Name"
        addVaryHeader: true
        # If you want to set this to true, adjust "accessControlAllowOriginList" to a valid domain and remove the asterisk wildcard
        accessControlAllowCredentials: false
        accessControlAllowOriginList:
          - "*"
        accessControlAllowHeaders:
          - "sentry-trace"
          - "baggage"
        accessControlAllowMethods:
          - GET
          - POST
          - PUT
          - PATCH
          - DELETE
        accessControlExposeHeaders:
          - "sentry-trace"
          - "baggage"
        sslRedirect: true

    rate_limiter:
      rateLimit:
        average: 100
        period: 1s
        burst: 150
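The dynamic configuration above assumes entrypoints named web and websecure, a certificate resolver named letsencrypt, and the YAML file provider. A minimal sketch of a matching static configuration could look like this; the file paths and email address are placeholders:

entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"

certificatesResolvers:
  letsencrypt:
    acme:
      email: name@yourcompany.com
      storage: /etc/traefik/acme.json
      httpChallenge:
        entryPoint: web

providers:
  file:
    # Point this at the file that contains the dynamic configuration above
    filename: /etc/traefik/dynamic.yml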
HAProxy
HAProxy is a high-performance reverse proxy. It is the recommended choice if you have encountered network hiccups with other reverse proxies, thanks to its performance. HAProxy requires an external tool to handle automatic TLS certificate management.
To install HAProxy, it is recommended to acquire it from your distribution's package manager (apt or yum); see your distribution's official repositories. Then, you should be able to edit the HAProxy configuration file, which should be at /etc/haproxy/haproxy.cfg.
global
# Your global configuration (may vary between versions and Linux distributions)
defaults
mode http
log global
option httplog
option dontlognull
option forwardfor
option http-server-close
option http-keep-alive
timeout connect 10s # Connect timeout in 10s
timeout client 30s # Client timeout in 30s
timeout server 30s # Server timeout in 30s
timeout http-keep-alive 2m # HTTP keep alive in 2 minutes
# Your remaining defaults configuration
frontend http_bind
bind *:80 name http_port
mode http
acl sentry_domain hdr(host) -i sentry.yourcompany.com
# HTTPS redirection
http-request redirect scheme https code 301 if sentry_domain !{ ssl_fc }
use_backend sentry
frontend https_bind
bind *:443 ssl crt /etc/haproxy/certs/ name https_port
mode http
acl sentry_domain hdr(host) -i sentry.yourcompany.com
use_backend sentry if sentry_domain
# To expose only the ingest endpoints publicly, add the acl below and use it in the `use_backend` directive
# acl ingest_endpoint path_reg -i /api/[1-9]\d*/(envelope|minidump|security|store|unreal)/
# use_backend sentry if sentry_domain ingest_endpoint
backend sentry
mode http
# Use Sentry's health check endpoint for backend health checks
option httpchk GET /_health/
server server1 your-sentry-ip:9000 check
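After editing the configuration, you can check it for syntax errors and reload the service (the service name may vary by distribution):

haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl reload haproxy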
To use HAProxy with an ACME server such as Let's Encrypt, refer to this blog post by HAProxy.