This topic shows how to use and configure logging (log4j) in Flink applications.

Logging configuration

Local mode

In local mode, for example when running your application from an IDE, you can configure log4j as usual, i.e. by making a log4j.properties file available on the classpath. An easy way with Maven is to create a log4j.properties file in the src/main/resources folder. Here is an example:

log4j.rootLogger=INFO, console, file

# patterns:
#  d = date
#  c = class
#  F = file
#  p = priority (INFO, WARN, etc)
#  x = NDC (nested diagnostic context) associated with the thread that generated the logging event
#  m = message

# Log all infos in the console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{dd/MM/yyyy HH:mm:ss.SSS} %5p [%-10c] %m%n

# Log all infos in flink-app.log
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.file=flink-app.log
log4j.appender.file.append=false
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{dd/MM/yyyy HH:mm:ss.SSS} %5p [%-10c] %m%n

# suppress info messages from flink
log4j.logger.org.apache.flink=WARN

Standalone mode

In standalone mode, the actual configuration used is not the one in your jar file. This is because Flink has its own configuration files, which take precedence over your own.

Default files: Flink ships with the following default properties files:

  • log4j-cli.properties: used by the Flink command line client (e.g. flink run) (not code executed on the cluster)
  • log4j-yarn-session.properties: used by the Flink command line client when starting a YARN session (yarn-session.sh)
  • log4j.properties: JobManager/TaskManager logs (both standalone and YARN). Note that ${log.file} defaults to flink/log. It can be overridden in flink-conf.yaml, by setting env.log.dir.

env.log.dir defines the directory where the Flink logs are saved. It has to be an absolute path.
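For example, to send Flink's logs to /var/log/flink (the path here is just an illustration), flink-conf.yaml would contain:

```yaml
# flink-conf.yaml (excerpt) — /var/log/flink is an example path, must be absolute
env.log.dir: /var/log/flink
```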

Log location: the logs are local, i.e. they are produced on the machine(s) running the JobManager(s) / TaskManager(s).

YARN: when running Flink on YARN, you have to rely on the logging capabilities of Hadoop YARN. The most useful feature for that is YARN log aggregation. To enable it, set the yarn.log-aggregation-enable property to true in the yarn-site.xml file. Once that is enabled, you can retrieve all log files of a (failed) YARN session using:

yarn logs -applicationId <application ID>
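The yarn-site.xml change mentioned above is a single property (minimal excerpt):

```xml
<!-- yarn-site.xml (excerpt): enable YARN log aggregation -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
```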

Unfortunately, logs are available only after a session has stopped running, for example after a failure.

YARN does not aggregate logs before an application finishes by default, which is problematic for streaming jobs that may never terminate.

A workaround is to use rsyslog, which is available on most linux machines.

First, allow incoming UDP requests by uncommenting the following lines in /etc/rsyslog.conf:

$ModLoad imudp
$UDPServerRun 514

Edit your log4j.properties (see the other examples on this page) to use a SyslogAppender:

log4j.rootLogger=INFO, file

# TODO: change package logtest to your package
log4j.logger.logtest=INFO, SYSLOG

# Log all infos in the given file
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.file=flink-app.log
log4j.appender.file.append=false
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=bbdata: %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n

# suppress the irrelevant (wrong) warnings from the netty channel handler
log4j.logger.org.jboss.netty.channel.DefaultChannelPipeline=ERROR, file

# rsyslog
# configure Syslog facility SYSLOG appender
# TODO: replace host and myTag by your own
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.syslogHost=localhost
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.conversionPattern=myTag: [%p] %c:%L - %m %throwable %n

The layout is important, because rsyslog treats a newline as a new log entry. Above, newlines (in stacktraces for example) will be skipped. If you really want multiline/tabbed logs to work "normally", edit rsyslog.conf and add:

$EscapeControlCharactersOnReceive off

The use of myTag: at the beginning of the conversionPattern is useful if you want to redirect all your logs into a specific file. To do that, edit rsyslog.conf and add the following rule:

if $programname == 'myTag' then /var/log/my-app.log
& stop

Finally, you can test your rsyslog setup using:

logger -t "myTag" "my message"

Using a logger in your code

Add the slf4j dependency to your pom.xml (use the slf4j version matching your Flink distribution):

<!-- ... -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.7</version>
</dependency>

Create a logger object for use in your class:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

private static final Logger LOGGER = LoggerFactory.getLogger(FlinkApp.class);

In classes that need to be serialized, such as subclasses of RichMapFunction, don't forget to declare the logger as transient:

private transient Logger LOG = LoggerFactory.getLogger(MyRichMapper.class);

In your code, use LOGGER as usual. Use placeholders ({}) to format objects and avoid needless string concatenation:

LOGGER.info("my app is starting");
LOGGER.warn("an exception occurred processing {}", record, exception);

Using different configuration(s) for each application

In case you need different settings for your various applications, there is (as of Flink 1.2) no easy way to do that.

If you use the one-yarn-cluster-per-job mode of Flink (i.e. you launch your scripts with: flink run -m yarn-cluster ...), here is a workaround:

  1. create a conf directory somewhere near your project
  2. create symlinks for all files in flink/conf:

     mkdir conf
     cd conf
     ln -s flink/conf/* .
  3. replace the symlink (or any other file you want to change) by your own configuration

  4. before launching your job, run
    export FLINK_CONF_DIR=/path/to/my/conf
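The steps above can be sketched as a script. This version uses throwaway temp directories that stand in for a real Flink installation, so it can be run anywhere; with a real setup, replace FLINK_HOME with your actual Flink path:

```shell
#!/bin/sh
set -e

# Stand-in for /path/to/flink — replace with your real installation
FLINK_HOME=$(mktemp -d)
mkdir "$FLINK_HOME/conf"
echo "log4j.rootLogger=INFO, file" > "$FLINK_HOME/conf/log4j.properties"
echo "jobmanager.rpc.port: 6123" > "$FLINK_HOME/conf/flink-conf.yaml"

# Steps 1-2: create a conf directory and symlink all files from flink/conf
CONF_DIR=$(mktemp -d)
ln -s "$FLINK_HOME"/conf/* "$CONF_DIR"/

# Step 3: replace the log4j symlink with your own configuration
rm "$CONF_DIR/log4j.properties"
echo "log4j.rootLogger=DEBUG, console" > "$CONF_DIR/log4j.properties"

# Step 4: point Flink at the custom conf directory before launching the job
export FLINK_CONF_DIR="$CONF_DIR"
cat "$FLINK_CONF_DIR/log4j.properties"   # prints: log4j.rootLogger=DEBUG, console
```

The other files (flink-conf.yaml here) remain symlinks, so they keep following the cluster-wide defaults; only the files you replace diverge.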

Depending on your version of Flink, you might need to edit the corresponding script under flink/bin/. If you run across a line that sets FLINK_CONF_DIR unconditionally, wrap it in a check so that your exported value takes precedence:

if [ -z "$FLINK_CONF_DIR" ]; then
    # keep the original FLINK_CONF_DIR assignment here
fi