Logging
Logs are collected with Fluent Bit and are exported to CloudWatch.

Logs on AWS

Logs are automatically pushed to CloudWatch, and a log group with the same name as your cluster is created to store them. API logs are tagged with labels to help with log aggregation and filtering. Log lines larger than 5 MB are ignored.
You can use the cortex logs command to get a CloudWatch Insights URL with a pre-populated query that fetches the logs for your API. Note that there may be a few minutes of delay between when a message is logged and when it becomes available in CloudWatch Insights.
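For illustration, a parsed log event might carry the labels like this (the message field and values here are hypothetical; the cortex.labels keys are the ones referenced in the queries below):

```json
{
  "message": "started server",
  "cortex": {
    "labels": {
      "apiName": "my-api",
      "apiKind": "RealtimeAPI"
    }
  }
}
```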
RealtimeAPI:

```
fields @timestamp, message
| filter cortex.labels.apiName="<INSERT API NAME>"
| filter cortex.labels.apiKind="RealtimeAPI"
| sort @timestamp asc
| limit 1000
```

AsyncAPI:

```
fields @timestamp, message
| filter cortex.labels.apiName="<INSERT API NAME>"
| filter cortex.labels.apiKind="AsyncAPI"
| sort @timestamp asc
| limit 1000
```

BatchAPI:

```
fields @timestamp, message
| filter cortex.labels.apiName="<INSERT API NAME>"
| filter cortex.labels.jobID="<INSERT JOB ID>"
| filter cortex.labels.apiKind="BatchAPI"
| sort @timestamp asc
| limit 1000
```

TaskAPI:

```
fields @timestamp, message
| filter cortex.labels.apiName="<INSERT API NAME>"
| filter cortex.labels.jobID="<INSERT JOB ID>"
| filter cortex.labels.apiKind="TaskAPI"
| sort @timestamp asc
| limit 1000
```
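If you run these queries programmatically (for example via the CloudWatch Logs API), the filter lines can be assembled from the labels. The helper below is an illustrative sketch, not part of the Cortex CLI:

```python
def insights_query(api_name, api_kind, job_id=None, limit=1000):
    """Assemble a CloudWatch Logs Insights query like the samples above."""
    lines = [
        "fields @timestamp, message",
        f'| filter cortex.labels.apiName="{api_name}"',
    ]
    if job_id is not None:  # BatchAPI / TaskAPI logs are also filtered by job ID
        lines.append(f'| filter cortex.labels.jobID="{job_id}"')
    lines += [
        f'| filter cortex.labels.apiKind="{api_kind}"',
        "| sort @timestamp asc",
        f"| limit {limit}",
    ]
    return "\n".join(lines)

print(insights_query("my-api", "RealtimeAPI"))
```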

Streaming logs from the CLI

You can stream logs directly from a random pod of a running workload to iterate and debug quickly. These logs will not be as comprehensive as the logs that are available in CloudWatch.
```shell
# RealtimeAPI
cortex logs --random-pod <api_name>

# BatchAPI or TaskAPI
cortex logs --random-pod <api_name> <job_id>  # the job must be in a running state
```

Structured logging

If you log JSON strings from your APIs, they will be automatically parsed before being pushed to CloudWatch.
It is recommended to configure your JSON logger to use message or msg as the key for the log line if you would like the sample queries above to display the messages in your logs.
Avoid using top-level keys that start with "cortex" to prevent collisions with Cortex's internal logging.
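For example, with Python's standard logging module you might emit JSON lines keyed on message; this is a sketch, and the extra field names are illustrative:

```python
import json
import logging

class JSONFormatter(logging.Formatter):
    """Format records as JSON, keyed on "message" so the sample queries display them."""

    def format(self, record):
        payload = {"message": record.getMessage(), "level": record.levelname}
        # Merge any user-supplied fields, skipping "cortex"-prefixed keys
        # to avoid collisions with Cortex's internal logging.
        for key, value in getattr(record, "fields", {}).items():
            if not key.startswith("cortex"):
                payload[key] = value
        return json.dumps(payload)

logger = logging.getLogger("api")
handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("processing request", extra={"fields": {"request_id": "abc"}})
```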

Exporting logs

You can export both the Cortex system logs and your application logs to your desired destination by configuring Fluent Bit.

Configure kubectl

Follow these instructions to set up kubectl.

Find supported destinations in Fluent Bit

Visit Fluent Bit's output docs to see a list of supported destinations.
Make sure to navigate to the docs for the version of Fluent Bit being used in your cluster. You can find the version by looking at the first few lines of one of the Fluent Bit pod logs.
Get the Fluent Bit pods:

```shell
kubectl get pods --selector app=fluent-bit
```

Fluent Bit's version should appear in the first few log lines of a Fluent Bit pod:

```shell
kubectl logs fluent-bit-kxmzn | head -n 20
```

Update Fluent Bit configuration

Define patch.yaml with your new output configuration:
```yaml
data:
  output.conf: |
    [OUTPUT]
        Name            es
        Match           k8s_container.*
        Host            https://abc123.us-west-2.es.amazonaws.com
        Port            443
        AWS_Region      us-west-2
        AWS_Auth        On
        tls             On
        Logstash_Format On
        Logstash_Prefix my-logs
```
Update Fluent Bit's configuration:

```shell
kubectl patch configmap fluent-bit-config --patch-file patch.yaml --type merge
```

Restart Fluent Bit

Restart Fluent Bit to apply the new configuration:

```shell
kubectl rollout restart daemonset/fluent-bit
```