Query Examples
This guide provides practical examples for common log analytics scenarios using LogChef. Each example includes both the LogchefQL syntax and the equivalent SQL query.
Error Analysis
Finding All Errors
Find all errors across all services to get an overview of system health.
```
level="error"
```

SQL Equivalent

```sql
SELECT *
FROM logs.app
WHERE level = 'error'
ORDER BY timestamp DESC
LIMIT 100
```

Service-specific Errors
Narrow down errors to a specific service when troubleshooting issues in that component.
```
level="error" and service="payment-api"
```

SQL Equivalent

```sql
SELECT *
FROM logs.app
WHERE level = 'error' AND service = 'payment-api'
ORDER BY timestamp DESC
LIMIT 100
```

Errors Excluding Debug Noise
Find errors while excluding specific patterns that aren’t relevant.
```
level="error" and message!~"health check"
```

SQL Equivalent

```sql
SELECT *
FROM logs.app
WHERE level = 'error' AND positionCaseInsensitive(message, 'health check') = 0
ORDER BY timestamp DESC
LIMIT 100
```

Critical vs Warning Analysis
Compare different severity levels.
```
severity_number>=4 or level="critical"
```

SQL Equivalent

```sql
SELECT *
FROM logs.app
WHERE severity_number >= 4 OR level = 'critical'
ORDER BY timestamp DESC
LIMIT 100
```

HTTP Logs Analysis
Server Errors (5xx Status Codes)
Identify all server-side errors to find potential backend issues.
```
status>=500
```

SQL Equivalent

```sql
SELECT *
FROM logs.app
WHERE status >= 500
ORDER BY timestamp DESC
LIMIT 100
```

Slow API Requests
Find API requests that took longer than 1 second to complete, which may indicate performance bottlenecks.
```
request_path~"/api/" and response_time>1000
```

SQL Equivalent

```sql
SELECT *
FROM logs.app
WHERE positionCaseInsensitive(request_path, '/api/') > 0 AND response_time > 1000
ORDER BY timestamp DESC
LIMIT 100
```

Client Errors for a Specific Endpoint
Find client errors (4xx) for a specific API endpoint to identify potential client integration issues.
```
status>=400 and status<500 and request_path~"/api/payments"
```

SQL Equivalent

```sql
SELECT *
FROM logs.app
WHERE status >= 400 AND status < 500
  AND positionCaseInsensitive(request_path, '/api/payments') > 0
ORDER BY timestamp DESC
LIMIT 100
```

Request Latency Analysis
Find requests within specific latency ranges.
```
# Very slow requests (over 5 seconds)
response_time>5000

# Fast requests (under 100ms)
response_time<100
```
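These threshold filters follow the same translation pattern as the earlier examples; for instance, the very-slow-request query would correspond roughly to this SQL sketch:

```sql
SELECT *
FROM logs.app
WHERE response_time > 5000
ORDER BY timestamp DESC
LIMIT 100
```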
```
# Requests in a specific range
response_time>=100 and response_time<=500
```

Nested Field Queries
LogchefQL supports querying nested fields in Map and JSON columns using dot notation.
Map Column Access
Query logs by attributes stored in Map columns (common in OpenTelemetry logs).
```
# Filter by user ID in attributes
log_attributes.user_id="user-12345"
```

SQL Equivalent

```sql
SELECT *
FROM logs.app
WHERE log_attributes['user_id'] = 'user-12345'
ORDER BY timestamp DESC
LIMIT 100
```

Multi-level Nesting
Access deeply nested fields.
```
# Query nested request attributes
log_attributes.http.request.method="POST"

# Query nested error details
log_attributes.error.code="CONNECTION_REFUSED"
```

SQL Equivalent

```sql
SELECT *
FROM logs.app
WHERE log_attributes['http.request.method'] = 'POST'
ORDER BY timestamp DESC
LIMIT 100
```

Pattern Matching in Nested Fields
Use contains operator on nested values.
```
log_attributes.request.url~"/api/v2/"
```

SQL Equivalent

```sql
SELECT *
FROM logs.app
WHERE positionCaseInsensitive(log_attributes['request.url'], '/api/v2/') > 0
ORDER BY timestamp DESC
LIMIT 100
```

JSON Column Extraction
For JSON or String columns containing JSON, LogchefQL uses JSONExtractString.
```
body.request.user_agent~"Mozilla"
```

SQL Equivalent

```sql
SELECT *
FROM logs.app
WHERE positionCaseInsensitive(JSONExtractString(body, 'request', 'user_agent'), 'Mozilla') > 0
ORDER BY timestamp DESC
LIMIT 100
```

Quoted Field Names
For field names containing dots or special characters.
```
# Field name literally contains a dot
log_attributes."service.name"="payment-api"
```
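Based on the Map access pattern shown in the earlier nested-field examples, the dotted-key query above would translate roughly to:

```sql
SELECT *
FROM logs.app
WHERE log_attributes['service.name'] = 'payment-api'
ORDER BY timestamp DESC
LIMIT 100
```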
```
# Mixed quoted and unquoted
log_attributes."nested.key".subfield="value"
```

Using the Pipe Operator
The pipe operator (`|`) lets you select specific columns instead of `SELECT *`.
Basic Column Selection
Select only the fields you need.
```
level="error" | timestamp service level message
```

SQL Equivalent

```sql
SELECT timestamp, service, level, message
FROM logs.app
WHERE level = 'error'
ORDER BY timestamp DESC
LIMIT 100
```

Extracting Nested Values
Pull specific values from nested structures.
```
namespace="prod" | timestamp log_attributes.user_id log_attributes.request_id body
```

SQL Equivalent

```sql
SELECT
  timestamp,
  log_attributes['user_id'] AS log_attributes_user_id,
  log_attributes['request_id'] AS log_attributes_request_id,
  body
FROM logs.app
WHERE namespace = 'prod'
ORDER BY timestamp DESC
LIMIT 100
```

Minimal Output for Scanning
When you just need to scan for specific patterns.
```
message~"error" | timestamp message
```

Service Overview
Get a quick view of service activity.
```
namespace="production" | timestamp service_name level
```

Security Analysis
Failed Authentication Attempts
Identify potential brute force attacks by finding multiple failed login attempts.
```
event="login_failed" and ip_address~"192.168."
```

SQL Equivalent

```sql
SELECT *
FROM logs.app
WHERE event = 'login_failed' AND positionCaseInsensitive(ip_address, '192.168.') > 0
ORDER BY timestamp DESC
LIMIT 100
```

Suspicious Activity Detection
Find logs that might indicate suspicious activities based on warning messages.
```
level="warn" and (message~"suspicious" or message~"unauthorized")
```

SQL Equivalent

```sql
SELECT *
FROM logs.app
WHERE level = 'warn'
  AND (
    positionCaseInsensitive(message, 'suspicious') > 0
    OR positionCaseInsensitive(message, 'unauthorized') > 0
  )
ORDER BY timestamp DESC
LIMIT 100
```

Access Pattern Analysis
Track access to sensitive endpoints.
```
request_path~"/admin" or request_path~"/api/internal"
```

System Monitoring
High Resource Usage
Detect potential resource bottlenecks by finding instances of high CPU or memory usage.
```
type="system_metrics" and (cpu_usage>90 or memory_usage>85)
```

SQL Equivalent

```sql
SELECT *
FROM logs.app
WHERE type = 'system_metrics' AND (cpu_usage > 90 OR memory_usage > 85)
ORDER BY timestamp DESC
LIMIT 100
```

Failed Service Health Checks
Monitor service health by finding instances where health checks have failed.
```
event="health_check" and status!="ok"
```

SQL Equivalent

```sql
SELECT *
FROM logs.app
WHERE event = 'health_check' AND status != 'ok'
ORDER BY timestamp DESC
LIMIT 100
```

Disk Space Warnings
Identify servers that are running low on disk space and might need attention.
```
type="system_metrics" and disk_free_percent<15
```

SQL Equivalent

```sql
SELECT *
FROM logs.app
WHERE type = 'system_metrics' AND disk_free_percent < 15
ORDER BY timestamp DESC
LIMIT 100
```

Distributed Tracing
Complete Request Trace
Trace a complete request flow across multiple services using a trace ID.
```
trace_id="abc123def456"
```

SQL Equivalent

```sql
SELECT *
FROM logs.app
WHERE trace_id = 'abc123def456'
ORDER BY timestamp ASC
LIMIT 1000
```

Service Dependency Analysis
Find all the services involved in a specific transaction to understand service dependencies.
```
trace_id="abc123def456" and level="info" and event="service_call"
```

SQL Equivalent

```sql
SELECT service, remote_service, timestamp
FROM logs.app
WHERE trace_id = 'abc123def456' AND level = 'info' AND event = 'service_call'
ORDER BY timestamp ASC
LIMIT 100
```

Trace with Specific Fields
Get a focused view of a trace with only relevant fields.
```
trace_id="abc123def456" | timestamp service_name span_id body
```

OpenTelemetry Log Queries
LogchefQL works great with OpenTelemetry log data.
Filter by Resource Attributes
```
log_attributes.service.name="frontend" and severity_text="ERROR"
```

Kubernetes Context
```
log_attributes.k8s.namespace.name="production" and log_attributes.k8s.pod.name~"api-"
```

Span Correlation
```
trace_id!="" and span_id!="" and level="error"
```

Effective Query Tips
- Start Specific, Then Broaden
  - Begin with specific conditions that target your issue
  - Add or remove filters to adjust the result set size

- Use Comparison Operators for Metrics
  - `response_time>1000` is cleaner than text matching
  - Works well with numeric fields like status codes, durations, and counts

- Leverage Nested Field Access
  - Query Map and JSON columns directly: `log_attributes.user_id="123"`
  - No need to flatten your log schema

- Use the Pipe Operator for Focus
  - `level="error" | timestamp service message` reduces noise
  - Faster queries when you don't need all columns

- Combine Multiple Conditions
  - Use `and` to narrow results
  - Use `or` to broaden results
  - Use parentheses for complex conditions: `(condition1 or condition2) and condition3`

- Filter by Context First
  - Start with service, component, or environment
  - Then add conditions for errors, warnings, or specific events
  - Finally, add free-text search terms with the `~` operator

- Switch to SQL Mode for Aggregations
  - LogchefQL is for filtering; use SQL mode for `COUNT`, `GROUP BY`, etc.
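As an illustration of that last tip, here is a sketch of an aggregation that LogchefQL filtering alone cannot express, written against the logs.app table used throughout this guide:

```sql
-- Count errors per service over the selected time range
SELECT service, count() AS error_count
FROM logs.app
WHERE level = 'error'
GROUP BY service
ORDER BY error_count DESC
LIMIT 10
```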