Search Syntax

Logchef provides a simple yet powerful search syntax called LogchefQL that makes it easy to find exactly what you’re looking for in your logs.

The basic syntax follows a simple key-value pattern:

key="value"

For example:

level="error"
service="payment-api"

LogchefQL supports a comprehensive set of operators for different use cases:

| Operator | Description | Example |
| -------- | ----------- | ------- |
| = | Equals | status=200 |
| != | Not equals | level!="debug" |
| ~ | Contains (case-insensitive) | message~"timeout" |
| !~ | Does not contain | path!~"health" |
| > | Greater than | status>400 |
| < | Less than | response_time<1000 |
| >= | Greater than or equal to | severity_number>=3 |
| <= | Less than or equal to | duration<=5000 |

You can combine multiple conditions using the and and or keywords (case-insensitive):

# Find errors in payment service
level="error" and service="payment-api"
# Find successful or redirected responses
status=200 or status=301
# Complex combinations with parentheses
(service="auth" or service="users") and level="error"
# Find slow requests with errors
response_time>1000 and status>=500

LogchefQL supports accessing nested fields using dot notation. This works seamlessly with:

  • Map columns (e.g., Map(String, String))
  • JSON columns
  • String columns containing JSON

Access nested keys using dots:

# Access nested field in a Map column
log_attributes.user_id="12345"
# Multi-level nesting
log_attributes.request.method="POST"
# Pattern matching on nested fields
log_attributes.error.message~"connection refused"

For field names containing special characters (like dots), use quotes:

# Field name contains a literal dot
log_attributes."user.name"="alice"
# Mixed notation
log_attributes."nested.key".subfield="value"

The pipe operator (|) allows you to select specific columns instead of the default SELECT *. This is useful for:

  • Reducing data transfer
  • Focusing on relevant fields
  • Extracting specific nested values

The general form is:

<filter conditions> | <field1> <field2> ...

# Select only service_name from syslog namespace
namespace="syslog" | service_name
# Select multiple fields
namespace="prod" | namespace service_name body
# Select nested fields
level="error" | timestamp log_attributes.request_id body
# Extract specific nested values
service="api" | timestamp log_attributes.user_id log_attributes.endpoint

The selected fields appear as columns in the result set, making it easy to focus on the data you need.

Here are some common searches:

# Find all error logs
level="error"
# Server errors
status>=500
# Client errors
status>=400 and status<500
# Successful requests
status=200
# Errors in a specific service
service="payment-api" and level="error"
# Find logs containing "timeout"
message~"timeout"
# Find paths not containing "internal"
path!~"internal"
# Slow requests (over 1 second)
response_time>1000
# Very fast requests (under 10ms)
response_time<10
# Filter by nested attribute
log_attributes.user_id="user-123"
# Pattern match in nested JSON
body~"error" and log_attributes.request.method="POST"

When you use LogchefQL, Logchef converts it to an optimized ClickHouse SQL query:

  • The ~ and !~ operators use ClickHouse’s positionCaseInsensitive function for efficient partial matches
  • Nested field access on Map columns uses subscript notation: column['key']
  • Nested field access on JSON/String columns uses JSONExtractString
  • A default time range and limit are automatically applied
  • Results are ordered by timestamp in descending order

For example, this search:

level="error" and log_attributes.user_id="12345"

Gets converted to (for a table with log_attributes as a Map column):

SELECT *
FROM logs.app
WHERE (`level` = 'error') AND (`log_attributes`['user_id'] = '12345')
AND timestamp BETWEEN toDateTime('2025-04-07 14:20:42', 'UTC') AND toDateTime('2025-04-07 14:25:42', 'UTC')
ORDER BY timestamp DESC
LIMIT 100

And with the pipe operator:

level="error" | timestamp level body

Generates:

SELECT `timestamp`, `level`, `body`
FROM logs.app
WHERE `level` = 'error'
AND timestamp BETWEEN toDateTime('2025-04-07 14:20:42', 'UTC') AND toDateTime('2025-04-07 14:25:42', 'UTC')
ORDER BY timestamp DESC
LIMIT 100
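
Following the mapping described above, a contains filter combined with a nested lookup on a String column holding JSON would presumably compile along these lines. This is an illustrative sketch, not Logchef's literal output: the table name reuses the earlier examples, and the message and log_attributes columns are assumptions about your schema.

```sql
-- Hypothetical translation of: message~"timeout" and log_attributes.request_id="abc-123"
-- assuming `log_attributes` is a String column containing JSON
SELECT *
FROM logs.app
WHERE positionCaseInsensitive(`message`, 'timeout') > 0
  AND JSONExtractString(`log_attributes`, 'request_id') = 'abc-123'
  AND timestamp BETWEEN toDateTime('2025-04-07 14:20:42', 'UTC') AND toDateTime('2025-04-07 14:25:42', 'UTC')
ORDER BY timestamp DESC
LIMIT 100
```

If log_attributes were a Map column instead, the same filter would use subscript notation (`log_attributes`['request_id']) as in the earlier example.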

For advanced queries that go beyond LogchefQL’s capabilities, you can switch to SQL Mode in the query editor. This gives you full access to ClickHouse SQL, including:

  • Aggregations (COUNT, SUM, AVG, etc.)
  • Subqueries
  • Joins
  • Custom functions
  • Complex expressions

In SQL mode, your query is executed exactly as written; the time range and limit controls are disabled since you have full control over the SQL.
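
For instance, counting errors per service is a natural SQL-mode query. The table and column names below follow the earlier examples and are assumptions about your schema:

```sql
-- Errors per service over the last hour
-- (in SQL mode you write the time filter and limit yourself)
SELECT `service`, count() AS error_count
FROM logs.app
WHERE `level` = 'error'
  AND timestamp >= now() - INTERVAL 1 HOUR
GROUP BY `service`
ORDER BY error_count DESC
LIMIT 20
```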

  1. Start specific, then broaden: Begin with specific conditions and remove filters to expand results
  2. Use parentheses for complex logic: (a or b) and c is clearer than relying on operator precedence
  3. Leverage nested field access: Don’t flatten your logs—query them directly
  4. Use the pipe operator: Select only the fields you need for faster queries
  5. Switch to SQL mode: For aggregations and advanced analysis