RocksDB FlowFile Repository
The FlowFile repository keeps track of the attributes and current state of each FlowFile in the system. By default, this repository is installed in the same root installation directory as all the other repositories; however, it is advisable to configure it on a separate drive if available.
This implementation makes use of the RocksDB key-value store. It uses periodic synchronization to ensure that no created or received data is lost (as long as nifi.flowfile.repository.rocksdb.accept.data.loss is set to false). In the event of power loss, work done on FlowFiles through the system (i.e. routing and transformation) may still be lost. Specifically, the record of these actions may be lost, reverting the affected FlowFiles to a previous, valid state. From there, they will resume their path through the flow as normal. This guarantee comes at the expense of a delay on operations that add new data to the system. This delay is configurable (as nifi.flowfile.repository.rocksdb.sync.period) and can be tuned to the individual system.
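For example, the durability/latency trade-off described above is controlled by two entries in nifi.properties (the values shown here are the defaults):

```properties
# Do not accept loss of received/created data. With this set to false,
# a data-creation operation may block for up to one sync period while
# waiting for the forced sync to disk.
nifi.flowfile.repository.rocksdb.accept.data.loss=false

# How frequently to force a sync to disk. Smaller values shorten the
# worst-case blocking time for data-creation operations; larger values
# reduce sync overhead.
nifi.flowfile.repository.rocksdb.sync.period=10 milliseconds
```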
The configuration parameters for this repository fall into two categories, "NiFi-centric" and "RocksDB-centric". The NiFi-centric settings have to do with the operations of the FlowFile Repository and its interaction with NiFi. The RocksDB-centric settings directly correlate to settings on the underlying RocksDB repo. More information on these settings can be found in the RocksDB documentation: https://github.com/facebook/rocksdb/wiki/RocksJava-Basics.
Note: Windows users will need to install "Microsoft Visual C++ 2015 Redistributable" for this repository to work. See the following link for more details: https://github.com/facebook/rocksdb/wiki/RocksJava-Basics#maven-windows.
NiFi-centric Configuration Parameters:
Property | Description | Default Value
nifi.flowfile.repository.directory | The location of the FlowFile Repository. | ./flowfile_repository
nifi.flowfile.repository.rocksdb.sync.warning.period | How often to log warnings if unable to sync. | 30 seconds
nifi.flowfile.repository.rocksdb.claim.cleanup.period | How often to mark content claims destructible (so they can be removed from the content repo). | 30 seconds
nifi.flowfile.repository.rocksdb.deserialization.threads | How many threads to use on startup restoring the FlowFile state. | 16
nifi.flowfile.repository.rocksdb.deserialization.buffer.size | Size of the buffer to use on startup restoring the FlowFile state. | 1000
nifi.flowfile.repository.rocksdb.sync.period | Frequency at which to force a sync to disk. This is the maximum period a data creation operation may block if nifi.flowfile.repository.rocksdb.accept.data.loss is false. | 10 milliseconds
nifi.flowfile.repository.rocksdb.accept.data.loss | Whether to accept the loss of received / created data. Setting this true increases throughput if loss of data is acceptable. | false
nifi.flowfile.repository.rocksdb.enable.stall.stop | Whether to enable the stall / stop of writes to the repository based on configured limits. Enabling this feature allows the system to protect itself by restricting (delaying or denying) operations that increase the total FlowFile count on the node to prevent the system from being overwhelmed. | false
nifi.flowfile.repository.rocksdb.stall.period | The period of time to stall when the specified criteria are encountered. | 100 milliseconds
nifi.flowfile.repository.rocksdb.stall.flowfile.count | The FlowFile count at which to begin stalling writes to the repo. | 800000
nifi.flowfile.repository.rocksdb.stall.heap.usage.percent | The heap usage at which to begin stalling writes to the repo. | 95%
nifi.flowfile.repository.rocksdb.stop.flowfile.count | The FlowFile count at which to begin stopping the creation of new FlowFiles. | 1100000
nifi.flowfile.repository.rocksdb.stop.heap.usage.percent | The heap usage at which to begin stopping the creation of new FlowFiles. | 99.9%
nifi.flowfile.repository.rocksdb.remove.orphaned.flowfiles.on.startup | Whether to allow the repository to remove FlowFiles it cannot identify on startup. As this is often the result of a configuration or synchronization error, it is disabled by default. This should only be enabled if you are absolutely certain you want to lose the data in question. | false
nifi.flowfile.repository.rocksdb.enable.recovery.mode | Whether to enable "recovery mode". This limits the number of FlowFiles loaded into the graph at a time, while not actually removing any FlowFiles (or content) from the system. This allows for the recovery of a system that is encountering OutOfMemory errors or similar on startup. This should not be enabled unless necessary to recover a system, and should be disabled as soon as that has been accomplished. WARNING: While in recovery mode, do not make modifications to the graph. Changes to the graph may result in the inability to restore further FlowFiles from the repository. | false
nifi.flowfile.repository.rocksdb.recovery.mode.flowfile.count | The number of FlowFiles to load into the graph when in "recovery mode". As FlowFiles leave the system, additional FlowFiles will be loaded up to this limit. This setting does not prevent FlowFiles from coming into the system via normal means. | 5000
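As a sketch, the stall / stop protection described above could be enabled in nifi.properties as follows (property names as used by the RocksDB FlowFile Repository; the thresholds shown are the defaults, not tuning recommendations):

```properties
# Allow the node to protect itself when FlowFile count or heap usage
# climbs too high. Disabled by default.
nifi.flowfile.repository.rocksdb.enable.stall.stop=true

# Begin delaying (stalling) writes at 800,000 FlowFiles or 95% heap usage.
nifi.flowfile.repository.rocksdb.stall.period=100 milliseconds
nifi.flowfile.repository.rocksdb.stall.flowfile.count=800000
nifi.flowfile.repository.rocksdb.stall.heap.usage.percent=95%

# Deny the creation of new FlowFiles entirely at 1,100,000 FlowFiles
# or 99.9% heap usage.
nifi.flowfile.repository.rocksdb.stop.flowfile.count=1100000
nifi.flowfile.repository.rocksdb.stop.heap.usage.percent=99.9%
```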
RocksDB-centric Configuration Parameters:
Property | Description | Default Value
nifi.flowfile.repository.rocksdb.parallel.threads | The number of threads to use for flush and compaction. A good value is the number of cores. See the RocksDB documentation. | 8
nifi.flowfile.repository.rocksdb.max.write.buffer.number | The maximum number of write buffers that are built up in memory. See the RocksDB documentation. | 4
nifi.flowfile.repository.rocksdb.write.buffer.size | The amount of data to build up in memory before converting to a sorted on-disk file. Larger values increase performance, especially during bulk loads. Up to the maximum number of write buffers may be held in memory at the same time. See the RocksDB documentation. | 256 MB
nifi.flowfile.repository.rocksdb.level.0.slowdown.writes.trigger | A soft limit on the number of level-0 files. Writes are slowed at this point. A value less than 0 means no write slowdown will be triggered by the number of files in level-0. See the RocksDB documentation. | 20
nifi.flowfile.repository.rocksdb.level.0.stop.writes.trigger | The maximum number of level-0 files. Writes will be stopped at this point. See the RocksDB documentation. | 40
nifi.flowfile.repository.rocksdb.delayed.write.bytes.per.second | The limited write rate to the DB if a slowdown is triggered. RocksDB may decide to slow down more if the compaction gets further behind. See the RocksDB documentation. | 16 MB
nifi.flowfile.repository.rocksdb.max.background.flushes | Specifies the maximum number of concurrent background flush jobs. See the RocksDB documentation. | 1
nifi.flowfile.repository.rocksdb.max.background.compactions | Specifies the maximum number of concurrent background compaction jobs. See the RocksDB documentation. | 1
nifi.flowfile.repository.rocksdb.min.write.buffer.number.to.merge | The minimum number of write buffers to merge together before writing to storage. See the RocksDB documentation. | 1
nifi.flowfile.repository.rocksdb.stat.dump.period | The period at which to dump rocksdb.stats to the log. See the RocksDB documentation. | 600 sec
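For example, a node with many cores and spare memory might raise the parallelism and write-buffer settings in nifi.properties. This is a sketch only: these properties map directly onto the underlying RocksDB options, the right values depend entirely on the hardware and workload, and the figures below are illustrative assumptions, not recommendations.

```properties
# Roughly one flush/compaction thread per core (assuming a 16-core node).
nifi.flowfile.repository.rocksdb.parallel.threads=16

# Larger and more numerous in-memory write buffers help bulk-load-heavy
# flows, at the cost of memory: usage can approach
# write.buffer.size x max.write.buffer.number.
nifi.flowfile.repository.rocksdb.max.write.buffer.number=4
nifi.flowfile.repository.rocksdb.write.buffer.size=256 MB
```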