Setting up Centralized Logging with Auditd

In this post, I will talk about how to set up centralized logging using the auditd daemon and the audisp-remote plugin.

Auditd is the Linux audit daemon, responsible for logging events that match the rules defined. The auditd daemon passes event records to the audit dispatcher, called audisp. The audit dispatcher can either send these records to the local file system or to a remote server.


Managing multiple servers that each log to a local file can be challenging – accessing them can be troublesome, and a single log file may not shed light on what is happening across your entire system. Centralized logging, whereby logs from multiple sources are consolidated at a single location, can help you manage these servers better.

Software Version

I am using clean installations of CentOS 7 (64-bit) Minimal, release 1511.

For auditd, I am using package audit-2.4.1-5.el7.


I have three VMs set up. The intention is to have one VM as the designated centralized logging server, while the other VMs log remotely to it.


I assume that:

  • the VMs are already set up,
  • all three VMs are on the same network with access to the Internet,
  • you have all the necessary credentials to install packages and issue other commands that require root privileges, and
  • it is our intention to log all commands executed.

Step 1: Installing Auditd and Audispd-plugins

Auditd should come pre-installed with the above-mentioned CentOS release. In case it is not, you can install it by issuing the following command:

$ sudo yum install audit

To send audit records to the Centralized Log Server, plugins for the audit dispatcher (audisp) need to be installed on the remote servers. You can do so by issuing the following command:

$ sudo yum install audispd-plugins

Here is a brief summary extracted from the debian package site: “The audispd-plugins package provides plugins for the real-time interface to the audit system, audispd. These plugins can do things like relay events to remote machines or analyze events for suspicious behavior.”
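Before moving on, it may be worth a quick sanity check that both packages are actually present. A minimal sketch (rpm prints the name-version-release of each installed package):

```shell
# Confirm both packages are installed; rpm reports "not installed" otherwise
rpm -q audit audispd-plugins
```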

Step 2: Configure the Centralized Log Server

On the Centralized Log Server, we need to tell the auditd daemon to listen on a particular port for remote audit records. Let’s use the default port, 60, for this purpose.

First, open the port on the OS. You can use iptables, or the firewall-cmd client to help you. Below are the commands for the latter.

$ sudo yum install firewalld
$ sudo service firewalld start
$ sudo firewall-cmd --zone=public --add-port=60/tcp --permanent
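Note that rules added with --permanent only take effect after the firewall configuration is reloaded. A quick sketch to apply and verify the change, assuming the public zone as above:

```shell
# Apply the permanent rules to the running firewall
sudo firewall-cmd --reload

# Verify that the port now appears in the public zone
sudo firewall-cmd --zone=public --list-ports
```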

Next, open the configuration file for the auditd daemon on the Centralized Log Server using your favourite text editor.

$ sudo vi /etc/audit/auditd.conf

Uncomment the line that says “tcp_listen_port =” and set it to the port chosen for remote logging:

tcp_listen_port = 60

Lastly, save the edits and restart auditd:

$ sudo service auditd restart
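To confirm that auditd is actually listening on the chosen port, you can check with ss (or netstat). A sketch, assuming port 60 as configured above:

```shell
# Look for a TCP listener on port 60
ss -tln | grep ':60 '
```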

Step 3: Configure the Remote Servers

For each remote server, edit the audit dispatcher remote plugin configuration file to specify the hostname or IP address of the Centralized Log Server, as well as the port to send audit records to.

Open the audit dispatcher remote logging plugin configuration file and specify the Centralized Log Server IP address (or hostname) and port number that it listens on:

$ sudo vi /etc/audisp/audisp-remote.conf
remote_server = <IP Address of Centralized Log Server>
port = 60

Next, enable the remote logging plugin:

$ sudo vi /etc/audisp/plugins.d/au-remote.conf
active = yes

By default, auditd will log all audit records locally. As you have set up remote logging, you can optionally turn off local logging by opening the auditd configuration file and setting the log_format value to “NOLOG”:

$ sudo vi /etc/audit/auditd.conf
log_format = NOLOG

Lastly, restart auditd to enable the changes.

$ sudo service auditd restart
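To confirm that the Centralized Log Server accepted the connection, you can look for DAEMON_ACCEPT records in its audit log. A sketch, assuming the default log location on the server:

```shell
# On the Centralized Log Server: show remote connection events
sudo grep DAEMON_ACCEPT /var/log/audit/audit.log
```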

Congratulations! At this stage, you already have your remote servers logging to your Centralized Log Server.

Step 4: Adding Rules To Log All

Audit records are generated based on the rules defined in auditd. You can modify rules on the fly with auditctl while auditd is running, or add them to the audit.rules drop-in file.

In this example, we want to log all commands that are executed (as stated in the assumptions above). Open the audit.rules drop-in file and add the following two lines at the bottom:

$ sudo vi /etc/audit/rules.d/audit.rules
-a always,exit -F arch=b64 -S execve
-a always,exit -F arch=b32 -S execve

Restart auditd to have the new rules enforced:

$ sudo service auditd restart

Go ahead and run some commands in your remote servers (e.g. creating a file, or calling sudo) and see the audit records being populated in the Centralized Log Server’s log file.
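Each audit record is a line of key="value" fields, which makes it easy to pick apart with standard tools. A minimal sketch below extracts the executable path from a record; the sample line is invented for illustration, shaped like the SYSCALL records auditd emits:

```shell
# Hypothetical SYSCALL record, for illustration only
record='node=vm2 type=SYSCALL msg=audit(1525377669.194:2772): arch=c000003e syscall=59 success=yes exit=0 comm="ls" exe="/usr/bin/ls" key=(null)'

# Pull out the exe="..." field with sed
exe=$(printf '%s\n' "$record" | sed -n 's/.*exe="\([^"]*\)".*/\1/p')
echo "$exe"
```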



A few things to note:

  • Opening the log file (e.g. /var/log/audit/audit.log) with a text editor like vi causes auditd to stop adding audit records to it. If you wish to see what is inside, use commands like tail or cat instead.
  • If you have opened the file and audit records are no longer being added, just restart the auditd process.
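Rather than opening the log in an editor, a safe way to watch records arrive in real time is to follow the file. A sketch, assuming the default log location:

```shell
# Follow the audit log as new records are appended
sudo tail -f /var/log/audit/audit.log
```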

14 thoughts on “Setting up Centralized Logging with Auditd”

  1. This was really helpful thank you!

    Do you have any tips on getting it to run with krb5 enabled? I have enable_krb5 set to yes on the client, but the messages the log server gets no longer have the same information:
    node=dr type=DAEMON_ACCEPT msg=audit(1525377669.194:2772): addr= port=56990 res=success
    node=dr type=DAEMON_CLOSE msg=audit(1525377669.195:2773): addr= port=56990 res=success


    1. Hello – thank you for your comment. I appreciate your time spent reading this post.
      Unfortunately, I am not very familiar with Kerberos at the moment.


  2. I have no idea why it doesn’t work, but on the centralized log server when I run netstat -nlt, port 60/tcp should be open and in the LISTEN state. However, I don’t see it at all.


    1. Thanks for your comment. It may be that the changes to the firewall rules were not applied to the running daemon. After opening the ports with the firewall-cmd command, try reloading the daemon by running the following command:

      # firewall-cmd --reload


  3. Very helpful. One thing I’m trying to understand though is that messages I am seeing in the audit.log files can be interleaved with messages from another event. I.e. I get three messages in a row for event 3450, then one message for 3449, then another 3450, then two more for 3449. Is there some mechanism, and I was hoping audisp would have this built in, to collate these events before processing? Or must I do my own collation?


    1. If you are looking at using the audit dispatcher (audisp) to consolidate the logs, you can check out the “mode” setting in audisp-remote.conf. It accepts two values: immediate (the default) or forward – you can read more about each option in the audisp-remote.conf(5) man page.

      However, I suspect that even after setting it to “forward”, your logs may still be interleaved due to external factors, such as network latency or process/thread scheduling on the OS. If you are open to another method, might I suggest the “sort” command? For example, to sort a file on a numeric key in the first column:

      $ sort -k1,1n <file-to-sort>

      The sort command writes the sorted results to stdout, so you can redirect them to a new file, or pipe them to grep to find the event of interest.


      1. Thanks. But what I am trying to do is to process these messages as close to real-time as possible, to glean a limited amount of information from each event “set” of messages for near-real-time analysis. A delay of a few milliseconds or whatever is not as important as having the entire event set available to be processed in one chunk. I suppose I can accumulate messages and after a short interval sort them, leaving a trailing gap and reprocessing that in the next batch, IYKWIM. But that seems ridiculously clunky and unnecessarily risky. I am aware that Elasticsearch has a tool that does this but that tool appears to have more overhead than I would want to take on. Might be better to dig into auditd source itself…but that’s another time sink. In general, do you know if there is a good book or other reference on the depths of auditd? Some sort of definitive guide? I’m not finding much googling, it’s mostly surface stuff that addresses most of what I need to know but I’m left with a feeling that there is more. Thanks for any info.


      2. Yes, tools like ElasticSearch are easy to implement, but the common downside is that it may not be the most efficient or elegant solution (e.g. overheads, coarse-grained approach).

        Off the top of my head, two other methods might be:

        1. Have individual hosts store their own logs, and use a nifty shell script to do batching/processing on events that have already completed, or
        2. Use ausearch to extract logs that have the same process ID.

        With regards to your question on reference books for auditd, I believe most generic Linux/distro reference guides include it as a subsection. I recall I was also struggling to put things together when I was setting up auditd – therefore I created this post to help others. 🙂

        Hope it helps! 🙂


  4. Great info thanks. If you have problems getting the audisp talking to remote hosts and central logger, and you use tcp_wrappers, you will need to add this to your hosts.allow file

    auditd: <IP address>

    Also check if it uses tcp_wrapper by looking at the auditd.conf for use_libwrap = yes


  5. Worth mentioning that after audit records start appearing in the centralized log server, to query for events related to a specific node, we can use the --node flag. For example, to view login events on server “orangehat”:

    ausearch --input-logs -ts today -te now -m USER_LOGIN -sv yes --node orangehat -i



  6. How can I get the audit.log file separately on the Centralized Log Server?
    Right now it pushes audit entries from that remote server inside syslog.

    I know it would just be nice to have, but it makes later searching much easier.


    1. Hi Mario – if you have audit.log files on your servers, and you do not want to use audisp-remote, you can consider other synchronization mechanisms like rsync, or a cron-driven scp job.

