Data pipeline

The Fluent Bit data pipeline is built from several distinct stages. Data flows through these stages in order: input, parser, filter, buffer, routing, and output.

Inputs

Input plugins gather data from different sources. Some collect records from log files, while others gather metrics from the operating system. There are many plugins to suit different needs.
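As a minimal sketch in Fluent Bit's classic configuration format, an input could use the tail plugin to follow a log file (the path and tag below are hypothetical):

```
[INPUT]
    Name  tail
    Path  /var/log/app.log    # hypothetical log file to follow
    Tag   app.log             # tag used later for routing
```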

Parser

Parsers convert unstructured data to structured data. Use a parser to apply structure to incoming data as input plugins collect it.
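For example, a parser is defined in its own section and then referenced by name from an input plugin. The regex and field names below are hypothetical, splitting each line into a level and a message:

```
[PARSER]
    Name   simple_log
    Format regex
    Regex  ^(?<level>[A-Z]+) (?<message>.*)$

[INPUT]
    Name   tail
    Path   /var/log/app.log
    Parser simple_log         # structure each line as it is collected
```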

Filter

Filters let you alter the collected data before delivering it to a destination. In production environments you need full control of the data you're collecting, and filters let you modify, enrich, or drop records before they're processed further.
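As one sketch, the grep filter can keep only records whose field matches a pattern; the match rule, field name, and pattern here are illustrative:

```
[FILTER]
    Name   grep
    Match  app.*              # apply only to records tagged app.*
    Regex  log ^ERROR         # keep records whose log field starts with ERROR
```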

Buffer

The buffering phase provides a unified and persistent mechanism to store your data, using either the primary in-memory model or the filesystem-based mode.
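Filesystem buffering is enabled per input, with a storage location set at the service level; a minimal sketch, with hypothetical paths:

```
[SERVICE]
    storage.path  /var/lib/fluent-bit/buffer   # hypothetical on-disk buffer location

[INPUT]
    Name          tail
    Path          /var/log/app.log
    storage.type  filesystem                   # buffer this input's records on disk
```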

Routing

Routing is a core feature that lets you route your data through filters, and then to one or more destinations. The router relies on the concept of tags and matching rules.
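Routing is driven by the Tag assigned by an input and the Match rule declared by each output. In this sketch (tags and paths are illustrative), CPU metrics and log records are sent to different destinations:

```
[INPUT]
    Name  cpu
    Tag   metrics.cpu

[INPUT]
    Name  tail
    Path  /var/log/app.log
    Tag   logs.app

[OUTPUT]
    Name  stdout
    Match metrics.*           # only records tagged metrics.* print to stdout

[OUTPUT]
    Name  file
    Match logs.*              # log records go to a local file instead
    Path  /tmp/out
```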

Output

Output plugins let you define destinations for your data. Common destinations are remote services, local file systems, or other standard interfaces.
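For instance, the http output plugin can deliver records to a remote service; the host and endpoint below are hypothetical:

```
[OUTPUT]
    Name   http
    Match  *
    Host   logs.example.com   # hypothetical remote ingestion endpoint
    Port   443
    URI    /ingest
    tls    On
```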
