Provided by: sec_2.9.0-1_all
NAME
sec - simple event correlator
SYNOPSIS
sec [--conf=<file pattern> ...] [--input=<file pattern>[=<context>] ...] [--input-timeout=<input timeout>] [--timeout-script=<timeout script>] [--reopen-timeout=<reopen timeout>] [--check-timeout=<check timeout>] [--poll-timeout=<poll timeout>] [--socket-timeout=<socket timeout>] [--blocksize=<io block size>] [--bufsize=<input buffer size>] [--evstoresize=<event store size>] [--cleantime=<clean time>] [--log=<logfile>] [--syslog=<facility>] [--debug=<debuglevel>] [--pid=<pidfile>] [--dump=<dumpfile>] [--user=<username>] [--group=<groupname> ...] [--umask=<mode>] [--ruleperf | --noruleperf] [--dumpfts | --nodumpfts] [--dumpfjson | --nodumpfjson] [--quoting | --noquoting] [--tail | --notail] [--fromstart | --nofromstart] [--detach | --nodetach] [--jointbuf | --nojointbuf] [--keepopen | --nokeepopen] [--rwfifo | --norwfifo] [--childterm | --nochildterm] [--intevents | --nointevents] [--intcontexts | --nointcontexts] [--testonly | --notestonly] [--help] [-?] [--version]
DESCRIPTION
SEC is an event correlation tool for advanced event processing which can be harnessed for event log monitoring, for network and security management, for fraud detection, and for any other task which involves event correlation. Event correlation is a procedure where a stream of events is processed, in order to detect (and act on) certain event groups that occur within predefined time windows. Unlike many other event correlation products which are heavyweight solutions, SEC is a lightweight and platform-independent event correlator which runs as a single process. The user can start it as a daemon, employ it in shell pipelines, execute it interactively in a terminal, run many SEC processes simultaneously for different tasks, and use it in a wide variety of other ways.

SEC reads lines from files, named pipes, or standard input, matches the lines with patterns (regular expressions, Perl subroutines, etc.) for recognizing input events, and correlates events according to the rules in its configuration file(s). Rules are matched against input in the order they are given in the configuration file. If there are two or more configuration files, rule sequence from every file is matched against input (unless explicitly specified otherwise). SEC can produce output by executing external programs (e.g., snmptrap(1) or mail(1)), by writing to files, by sending data to TCP and UDP based servers, by calling precompiled Perl subroutines, etc.

SEC can be run in various ways. For example, the following command line starts it as a daemon, in order to monitor events appended to the /var/log/messages syslog file with rules from /etc/sec/syslog.rules:

    /usr/bin/sec --detach --conf=/etc/sec/syslog.rules \
        --input=/var/log/messages

Each time /var/log/messages is rotated, a new instance of /var/log/messages is opened and processed from the beginning. The following command line runs SEC in a shell pipeline, configuring it to process lines from standard input, and to exit when the /usr/bin/nc tool closes its standard output and exits:

    /usr/bin/nc -l 8080 | /usr/bin/sec --notail --input=- \
        --conf=/etc/sec/my.conf

Some SEC rules start event correlation operations, while other rules react immediately to input events or system clock. For example, suppose that SEC has been started with the following command line

    /usr/bin/sec --conf=/etc/sec/sshd.rules --input=/var/log/secure

in order to monitor the /var/log/secure syslog file for sshd events. Also, suppose that the /etc/sec/sshd.rules configuration file contains the following rule for correlating SSH failed login syslog events:

    type=SingleWithThreshold
    ptype=RegExp
    pattern=sshd\[\d+\]: Failed .+ for (\S+) from [\d.]+ port \d+ ssh2
    desc=Three SSH login failures within 1m for user $1
    action=pipe '%s' /bin/mail -s 'SSH login alert' root@localhost
    window=60
    thresh=3

The pattern field of the rule defines the pattern for recognizing input events, while the ptype field defines its type (regular expression). Suppose that user risto fails to log in over SSH and the following message is logged to /var/log/secure:

    Dec 16 16:24:59 myserver sshd[13685]: Failed password for risto from 10.12.2.5 port 41063 ssh2

This input message will match the regular expression pattern of the above rule, and the match variable $1 will be set to the string risto (see perlre(1) for details). After a match, SEC will evaluate the operation description string given with the desc field. This is done by substituting $1 with its current value which yields "Three SSH login failures within 1m for user risto".
SEC will then check if there already exists an event correlation operation identified with this string and triggered by the same rule. If the operation is not found, SEC will create a new operation for the user name risto, and the occurrence time of the input event will be recorded into the operation. Note that for event occurrence time SEC always uses the current time as returned by the time(2) system call, *not* the timestamp extracted from the event.

Suppose that after 25 seconds, a similar SSH login failure event for the same user name is observed. In this case, a running operation will be found for the operation description string "Three SSH login failures within 1m for user risto", and the occurrence time of the second event is recorded into the operation. If after 30 seconds a third event for the user name risto is observed, the operation has processed 3 events within 55 seconds. Since the threshold condition "3 events within 60 seconds" (as defined by the thresh and window fields) is now satisfied, SEC will execute the action defined with the action field -- it will fork the command

    /bin/mail -s 'SSH login alert' root@localhost

with a pipe connected to its standard input. Then, SEC writes the operation description string "Three SSH login failures within 1m for user risto" (held by the %s special variable) to the standard input of the command through the pipe. In other words, an e-mail warning is sent to the local root-user. Finally, since there are 5 seconds left until the end of the event correlation window, the operation will consume the following SSH login failure events for user risto without any further action, and finish after 5 seconds.

The above example illustrates that the desc field of a rule defines the scope of event correlation and influences the number of operations created by the rule. For example, if we set the desc field to "Three SSH login failures within 1m", the root-user would also be alerted on 3 SSH login failure events for *different* users within 1 minute.

In order to avoid clashes between operations started by different rules, the operation ID contains not only the value set by the desc field, but also the rule file name and the rule number inside the file. For example, if the rule file /etc/sec/sshd.rules contains one rule

    type=SingleWithThreshold
    ptype=RegExp
    pattern=sshd\[\d+\]: Failed .+ for (\S+) from [\d.]+ port \d+ ssh2
    desc=Three SSH login failures within 1m for user $1
    action=pipe '%s' /bin/mail -s 'SSH login alert' root@localhost
    window=60
    thresh=3

and the event

    Dec 16 16:24:59 myserver sshd[13685]: Failed password for risto from 10.12.2.5 port 41063 ssh2

is the first matching event for the above rule, this event will trigger a new event correlation operation with the ID

    /etc/sec/sshd.rules | 0 | Three SSH login failures within 1m for user risto

(0 is the number assigned to the first rule in the file, see EVENT CORRELATION OPERATIONS section for more information). The following simple example demonstrates that event correlation schemes can be defined by combining several rules.
In this example, two rules harness contexts and synthetic events for achieving their goal:

    type=SingleWithThreshold
    ptype=RegExp
    pattern=sshd\[\d+\]: Failed .+ for (\S+) from [\d.]+ port \d+ ssh2
    desc=Three SSH login failures within 1m for user $1
    action=event 3_SSH_LOGIN_FAILURES_FOR_$1
    window=60
    thresh=3

    type=EventGroup
    ptype=RegExp
    pattern=3_SSH_LOGIN_FAILURES_FOR_(\S+)
    context=!USER_$1_COUNTED && !COUNTING_OFF
    count=create USER_$1_COUNTED 60
    desc=Repeated SSH login failures for 30 distinct users within 1m
    action=pipe '%s' /bin/mail -s 'SSH login alert' root@localhost; \
           create COUNTING_OFF 3600
    window=60
    thresh=30

The first rule looks almost identical to the rule from the previous example, but its action field is different -- after three SSH login failures have been observed for the same user name within one minute by an event correlation operation, the operation will emit the synthetic event 3_SSH_LOGIN_FAILURES_FOR_<username>. Although synthetic events are created by SEC, they are treated like regular events received from input sources and are matched against rules.

The regular expression pattern of the second rule will match the 3_SSH_LOGIN_FAILURES_FOR_<username> event and start a new event correlation operation if no such events have been previously seen. Also, each time a synthetic event for some user name has matched the rule, a context with the lifetime of 1 minute for that user name is created (see the count field). Note that this prevents further matches for the same user name, since a synthetic event for <username> can match the rule only if the context USER_<username>_COUNTED *does not* exist (as requested by the boolean expression in the context field; see CONTEXTS AND CONTEXT EXPRESSIONS section for more information).

The operation started by the rule sends an e-mail warning to the local root-user if 30 synthetic events have been observed within 1 minute (see the thresh and window fields). Note that due to the use of the USER_<username>_COUNTED contexts, all synthetic events concern different user names. After sending an e-mail warning, the operation will also create the context COUNTING_OFF with the lifetime of 1 hour, and will continue to run until the 1 minute event correlation window expires. After the operation has finished, the presence of the COUNTING_OFF context will keep the second rule disabled (as requested by the boolean expression in the context field). Therefore, at most one e-mail warning per 1 hour is issued by the above rules.

The above examples have presented the event correlation capabilities of SEC in a very brief fashion. The following sections will provide an in-depth discussion of SEC features.
OPTIONS
--conf=<file_pattern>
    expand <file_pattern> to filenames (with the Perl glob() function) and read event correlation rules from every file. Multiple --conf options can be specified at command line. Each time SEC receives a signal that forces a configuration reload, <file_pattern> is re-evaluated. See also INPUT PROCESSING AND TIMING section for a discussion on rule processing order for multiple configuration files.

--input=<file_pattern>[=<context>]
    expand <file_pattern> to filenames (with the Perl glob() function) and use the files as input sources. An input file can be a regular file, named pipe, or standard input if - was specified. Multiple --input options can be specified at command line. Each time SEC receives the SIGHUP or SIGABRT signal, <file_pattern> is re-evaluated. If SEC experiences a system error when reading from an input file, it will close the file (use the --reopen-timeout option for reopening the file). If <context> is given, SEC will set up the context <context> each time it reads a line from input files that correspond to <file_pattern>. This will help the user to write rules that match data from particular input source(s) only. When there is an --input option with <context> specified, it will automatically enable the --intcontexts option. See INTERNAL EVENTS AND CONTEXTS section for more information.

--input-timeout=<input_timeout>, --timeout-script=<timeout_script>
    if SEC has not observed new data in an input file during <input_timeout> seconds (or the file was closed <input_timeout> seconds ago), <timeout_script> will be executed with command line parameters 1 and <the name of the input file>. If fresh data become available again, <timeout_script> will be executed with command line parameters 0 and <the name of the input file>. Setting <input_timeout> to 0 disables this behavior (this is also the default). Note that --input_timeout and --timeout_script options can be used as synonyms for --input-timeout and --timeout-script, respectively.

--reopen-timeout=<reopen_timeout>
    if an input file is in the closed state (e.g., SEC fails to open the file at startup, because it has not been created yet), SEC will attempt to reopen the file after every <reopen_timeout> seconds until open succeeds. Setting <reopen_timeout> to 0 disables this behavior (this is also the default). This option has no meaning when the --notail option is also specified. Note that --reopen_timeout is a synonym for --reopen-timeout.

--check-timeout=<check_timeout>
    if SEC has not observed new data in an input file, the file will not be polled (both for status and data) during the next <check_timeout> seconds. Setting <check_timeout> to 0 disables this behavior (this is also the default). Note that --check_timeout is a synonym for --check-timeout.

--poll-timeout=<poll_timeout>
    a real number that specifies how many seconds SEC will sleep when no new data were read from input files. Default is 0.1 seconds. Note that --poll_timeout is a synonym for --poll-timeout.

--socket-timeout=<socket_timeout>
    if a network connection to a remote peer can't be established within <socket_timeout> seconds, give up. Default is 60 seconds. Note that --socket_timeout is a synonym for --socket-timeout.

--blocksize=<io_block_size>
    the number of bytes SEC will attempt to read at once from an input file. Default is 8192 bytes (i.e., read from input files by 8KB blocks).

--bufsize=<input_buffer_size>
    set all input buffers to hold <input_buffer_size> lines.
    The content of input buffers will be compared with patterns that are part of rule definitions (i.e., no more than <input_buffer_size> lines can be matched by a pattern at a time). If <input_buffer_size> is set to 0, SEC will determine the proper value for <input_buffer_size> by checking event matching patterns of all SEC rules. Default is 0 (i.e., determine the size of input buffers automatically).

--evstoresize=<event_store_size>
    set an upper limit to the number of events in context event stores. Default is 0 which sets no limit.

--cleantime=<clean_time>
    time interval in seconds that specifies how often internal event correlation and context lists are processed, in order to accomplish time-related tasks and to remove obsolete elements. See INPUT PROCESSING AND TIMING section for more information. Default is 1 second.

--log=<logfile>
    use <logfile> for logging SEC activities. Note that if the SEC standard error is connected to a terminal, messages will also be logged there, in order to facilitate debugging.

--syslog=<facility>
    use syslog for logging SEC activities. All messages will be logged with the facility <facility>, e.g., local0 (see syslog(3) for possible facility values). Warning: be careful with using this option if SEC is employed for monitoring syslog log files, because message loops might occur.

--debug=<debuglevel>
    set logging verbosity for SEC. Setting debuglevel to <debuglevel> means that all messages of level <debuglevel> and lower are logged (e.g., if <debuglevel> is 3, messages from levels 1-3 are logged). The following levels are recognized by SEC:

    1 - critical messages (severe faults that cause SEC to terminate, e.g., a failed system call)
    2 - error messages (faults that need attention, e.g., an incorrect rule definition in a configuration file)
    3 - warning messages (possible faults, e.g., a command forked from SEC terminated with a non-zero exit code)
    4 - notification messages (normal system level events and interrupts, e.g., the reception of a signal)
    5 - informative messages (information about external programs forked from SEC)
    6 - debug messages (detailed information about all SEC activities)

    Default <debuglevel> is 6 (i.e., log everything). See SIGNALS section for information on how to change <debuglevel> at runtime.

--pid=<pidfile>
    SEC will store its process ID to <pidfile> at startup.

--dump=<dumpfile>
    SEC will use <dumpfile> as its dump file for writing performance and debug data. With the --dumpfts option, a timestamp suffix is appended to the dump file name. With the --dumpfjson option, dump file is produced in JSON format. See SIGNALS section for more information. Default is /tmp/sec.dump.

--user=<username>, --group=<groupname>
    if SEC is started with effective user ID 0, it will drop root privileges by switching to user <username> and group <groupname>. The --group option can't be used without the --user option. If the --user option is given without --group, primary group of the user <username> is assumed for <groupname>. If several groups are provided with multiple --group options, SEC switches to the first group with other groups as supplementary groups.

--umask=<mode>
    set file mode creation mask to <mode> at SEC startup, where <mode> is a value from the range 0..0777 (see also umask(2)). Octal, decimal, hexadecimal, and binary values can be specified for <mode> (e.g., octal mask 0027 can also be expressed as 23, 0x17, and 0b000010111).
--ruleperf, --noruleperf
    if the --ruleperf option is specified, performance data (e.g., total consumed CPU time) is collected for each rule and reported in the dump file. Default is --noruleperf.

--dumpfts, --nodumpfts
    if the --dumpfts option is specified, a timestamp suffix (seconds since Epoch) is appended to the dump file name that reflects the file creation time. Default is --nodumpfts.

--dumpfjson, --nodumpfjson
    if the --dumpfjson option is specified, the dump file is produced in JSON format. Default is --nodumpfjson.

--quoting, --noquoting
    if the --quoting option is specified, operation description strings that are supplied to command lines of shellcmd, spawn, and cspawn actions will be put inside single quotes. Each single quote (') that the strings originally contain will be masked. This option prevents the shell from interpreting special symbols that operation description strings might contain. Default is --noquoting.

--tail, --notail
    if the --notail option is specified, SEC will process all data that are currently available in input files and exit after reaching all EOFs. If all input is received from a pipe and the --notail option is given, SEC terminates when the last writer closes the pipe (EOF condition). Please note that with named pipes --notail should be used with --norwfifo. With the --tail option, SEC will jump to the end of input files and wait for new lines to arrive. Each input file is tracked both by its name and i-node, and input file rotations are handled seamlessly. If the input file is recreated or truncated, SEC will reopen it and process its content from the beginning. If the input file is removed (i.e., there is just an i-node left without a name), SEC will keep the i-node open and wait for the input file recreation. Default is --tail.

--fromstart, --nofromstart
    these flags have no meaning when the --notail option is also specified. When used in combination with --tail (or alone, since --tail is enabled by default), --fromstart will force SEC to read and process input files from the beginning to the end, before the 'tail' mode is entered. Default is --nofromstart.

--detach, --nodetach
    if the --detach option is specified, SEC will disassociate itself from the controlling terminal and become a daemon at startup (note that SEC will close its standard input, standard output, and standard error, and change its working directory to the root directory). Default is --nodetach.

--jointbuf, --nojointbuf
    if the --jointbuf option is specified, SEC uses a joint input buffer for all input sources (the size of the buffer is set with the --bufsize option). The --nojointbuf option creates a separate input buffer for each input file, and a separate buffer for all synthetic and internal events (the sizes of all buffers are set with the --bufsize option). The --jointbuf option allows multiline patterns to match lines from several input sources, while the --nojointbuf option restricts the matching to lines from one input source only. See INPUT PROCESSING AND TIMING section for more information. If the size of input buffer(s) is 1 (either explicitly set with --bufsize=1 or automatically determined from SEC rules), the --jointbuf option is enabled, otherwise the default is --nojointbuf.

--keepopen, --nokeepopen
    if the --keepopen option is specified, SEC will keep input files open across soft restarts. When the SIGABRT signal is received, SEC will not reopen input files which have been opened previously, but will only open input files which are in the closed state.
    The --nokeepopen option forces SEC to close and (re)open all input files during soft restarts. Default is --keepopen.

--rwfifo, --norwfifo
    if the --norwfifo option is specified, named pipe input files are opened in read-only mode. In this mode, the named pipe has to be reopened when the last writer closes the pipe, in order to clear the EOF condition on the pipe. With the --rwfifo option, named pipe input files are opened in read-write mode, although SEC never writes to the pipes. In this mode, the pipe does not need to be reopened when an external writer closes it, since there is always at least one writer on the pipe and EOF will never appear. Therefore, if the --notail option has been given, --norwfifo should also be specified. Default is --rwfifo.

--childterm, --nochildterm
    if the --childterm option is specified, SEC will send the SIGTERM signal to all its child processes when it terminates or goes through a full restart. Default is --childterm.

--intevents, --nointevents
    SEC will generate internal events when it starts up, when it receives certain signals, and when it terminates gracefully. Specific rules can be written to match those internal events, in order to accomplish special tasks at SEC startup, restart, and shutdown. See INTERNAL EVENTS AND CONTEXTS section for more information. Default is --nointevents.

--intcontexts, --nointcontexts
    SEC will create an internal context when it reads a line from an input file. This will help the user to write rules that match data from a particular input source only. See INTERNAL EVENTS AND CONTEXTS section for more information. Default is --nointcontexts.

--testonly, --notestonly
    if the --testonly option is specified, SEC will exit immediately after parsing the configuration file(s). If the configuration file(s) contained no faulty rules, SEC will exit with 0, otherwise with 1. Default is --notestonly.

--help, -?
    SEC will output usage information and exit.

--version
    SEC will output version information and exit.

Note that options can be introduced both with the single dash (-) and double dash (--), and both the equal sign (=) and whitespace can be used for separating the option name from the option value. For example, -conf=<file_pattern> and --conf <file_pattern> options are equivalent.
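To illustrate how several of the above options are typically combined, here are two hedged example invocations (all file names and the context name SECURE are hypothetical). The first invocation monitors two log files as a daemon, tags lines from /var/log/secure with the internal context SECURE, and keeps retrying unavailable files; the second invocation only validates a rule set and exits:

    /usr/bin/sec --detach --conf='/etc/sec/*.rules' \
        --input=/var/log/messages --input=/var/log/secure=SECURE \
        --reopen-timeout=30 --check-timeout=5 \
        --log=/var/log/sec.log --pid=/var/run/sec.pid

    /usr/bin/sec --conf='/etc/sec/*.rules' --input=- --testonly

In the second form SEC exits with status 0 if all rule files parsed cleanly and 1 otherwise, which makes it suitable for checking a configuration before restarting a production instance.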
CONFIGURATION FILES
Each SEC configuration file consists of rule definitions which are separated by empty lines, whitespace lines and/or comment lines. Each rule definition consists of keyword=value fields, one keyword and value per line. Values are case insensitive only where character case is not important (like the values specifying rule types, e.g., 'Single' and 'single' are treated identically). The backslash character (\) may be used at the end of a line to continue the current rule field in the next line. Lines which begin with the number sign (#) are treated as comments and ignored (whitespace characters may precede #). Any comment line, empty line, whitespace line, or end of file will terminate the preceding rule definition. For inserting comments into rule definitions, the rem keyword can be used. For example, the following lines define two rules:

    type=Single
    rem=this rule matches any line which contains \
        three consecutive A characters and writes the string \
        "three A characters were observed" to standard output
    ptype=SubStr
    pattern=AAA
    desc=Three A characters
    action=write - three A characters were observed

    # This comment line ends preceding rule definition.

    # The following rule works like the previous rule,
    # but looks for three consecutive B characters and
    # writes the string "three B characters were observed"
    # to standard output

    type=Single
    ptype=SubStr
    pattern=BBB
    desc=Three B characters
    action=write - three B characters were observed

Apart from keywords that are part of rule definitions, label keywords may appear anywhere in the configuration file. The value of each label keyword will be treated as a label that can be referred to in rule definitions as a point-of-continue. This allows for continuing event processing at a rule that follows the label, after the current rule has matched and processed the event.

The points-of-continue are defined with continue* fields. Accepted values for these fields are:

TakeNext
    after an event has matched the rule, search for matching rules in the configuration file will continue from the next rule.

GoTo <label>
    after an event has matched the rule, search for matching rules will continue from the location of <label> in the configuration file (<label> must be defined with the label keyword anywhere in the configuration file *after* the current rule definition).

DontCont (default value)
    after an event has matched the rule, search for matching rules ends in the *current* configuration file.

EndMatch
    after an event has matched the rule, search for matching rules ends for *all* configuration files.

SEC rules from the same configuration file are matched against input in the order they have been given in the file.
For example, consider a configuration file which contains the following rule sequence:

    type=Single
    ptype=SubStr
    pattern=AAA
    rem=after this rule has matched, continue from last rule
    continue=GoTo lastRule
    desc=Three A characters
    action=write - three A characters were observed

    type=Single
    ptype=SubStr
    pattern=BBB
    rem=after this rule has matched, don't consider following rules, \
        since 'continue' defaults to 'DontCont'
    desc=Three B characters
    action=write - three B characters were observed

    type=Single
    ptype=SubStr
    pattern=CCC
    rem=after this rule has matched, continue from next rule
    continue=TakeNext
    desc=Three C characters
    action=write - three C characters were observed

    label=lastRule
    type=Single
    ptype=SubStr
    pattern=DDD
    desc=Three D characters
    action=write - three D characters were observed

For the input line "AAABBBCCCDDD", this ruleset writes the strings "three A characters were observed" and "three D characters were observed" to standard output. If the input line is "BBBCCCDDD", the string "three B characters were observed" is written to standard output. For the input line "CCCDDD", the strings "three C characters were observed" and "three D characters were observed" are sent to standard output, while the input line "DDD" produces the output string "three D characters were observed".

If there are two or more configuration files, the rule sequence from every file is matched against input (unless explicitly specified otherwise). For example, suppose SEC is started with the command line

    /usr/bin/sec --input=- \
        --conf=/etc/sec/sec1.rules --conf=/etc/sec/sec2.rules

and the configuration file /etc/sec/sec1.rules has the following content:

    type=Single
    ptype=SubStr
    pattern=AAA
    desc=Three A characters
    action=write - three A characters were observed

    type=Single
    ptype=SubStr
    pattern=BBB
    continue=EndMatch
    desc=Three B characters
    action=write - three B characters were observed

Also, suppose the configuration file /etc/sec/sec2.rules has the following content:

    type=Single
    ptype=SubStr
    pattern=CCC
    desc=Three C characters
    action=write - three C characters were observed

If SEC receives the line "AAABBBCCC" from standard input, rules from both configuration files are tried, and as a result, the strings "three A characters were observed" and "three C characters were observed" are written to standard output. Note that rules from /etc/sec/sec1.rules are tried first against the input line, since the option --conf=/etc/sec/sec1.rules is given before --conf=/etc/sec/sec2.rules in the SEC command line (see also INPUT PROCESSING AND TIMING section for a more detailed discussion).

If SEC receives the line "BBBCCC" from standard input, the second rule from /etc/sec/sec1.rules produces a match, and the string "three B characters were observed" is written to standard output. Since the rule contains the continue=EndMatch statement, the search for matching rules will end for all configuration files, and rules from /etc/sec/sec2.rules will not be tried. Without this statement, the search for matching rules would continue in /etc/sec/sec2.rules, and the first rule would write the string "three C characters were observed" to standard output.
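A common use of continue=TakeNext is a "catch and pass on" rule that performs some generic processing and then lets the remaining rules see the same event. The following hedged sketch (the archive file name and event formats are hypothetical, chosen to mirror the sshd examples above) copies every sshd message to a separate file before the more specific rules are tried:

    type=Single
    ptype=RegExp
    pattern=sshd\[\d+\]:
    continue=TakeNext
    desc=archive sshd events
    action=write /var/log/sshd-events.log $0

    type=Single
    ptype=RegExp
    pattern=sshd\[\d+\]: Accepted .+ for (\S+) from ([\d.]+) port \d+ ssh2
    desc=SSH login for user $1
    action=write - user $1 logged in from $2

Without continue=TakeNext in the first rule, matching would stop there (DontCont is the default), and the second rule would never see the login events.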
PATTERNS, PATTERN TYPES AND MATCH VARIABLES
Patterns and pattern types are defined with pattern* and ptype* rule fields. Many pattern types define the number of lines N which the pattern matches (if N is omitted, 1 is assumed). If N is greater than 1, the scope of matching is set with the --jointbuf and --nojointbuf options. With --jointbuf, the pattern is used for matching N last input lines taken from the joint input buffer (the lines can come from different input sources). With --nojointbuf, the source of the last input line is identified, and the pattern is matched with N last input lines from the input buffer of the identified source.

SubStr[N]
    pattern is a string that is searched in the last N input lines L1, L2, ..., LN. If N is greater than 1, the input lines are joined into a string "L1<NEWLINE>L2<NEWLINE>...<NEWLINE>LN", and the pattern string will be searched from it. If the pattern string is found in input line(s), the pattern matches. Backslash sequences \t, \n, \r, \s, and \0 can be used in the pattern for denoting tabulation, newline, carriage return, space character, and empty string, respectively, while \\ denotes backslash itself. For example, consider the following pattern definition:

        ptype=substr
        pattern=Backup done:\tsuccess

    The pattern matches lines containing "Backup done:<TAB>success". Note that since the SubStr[N] pattern type has been designed for fast matching, it does not support match variables.

RegExp[N]
    pattern is a Perl regular expression (see perlre(1) for more information) for matching the last N input lines L1, L2, ..., LN. If N is greater than 1, the input lines are joined into a string "L1<NEWLINE>L2<NEWLINE>...<NEWLINE>LN", and the regular expression is matched with this string. If the regular expression matches, match variables will be set, and these match variables can be used in other parts of the rule definition. In addition to numbered match variables ($1, $2, etc.), SEC supports named match variables $+{name} and the $0 variable. The $0 variable holds the entire string of last N input lines that the regular expression has matched. Named match variables can be created in newer versions of Perl regular expression language, e.g., (?<myvar>AB|CD) sets $+{myvar} to AB or CD. Also, SEC creates special named match variables $+{_inputsrc} and $+{_intcontext}. The $+{_inputsrc} variable holds input file name(s) where matching line(s) came from. The $+{_intcontext} variable holds the name of current internal context (see INTERNAL EVENTS AND CONTEXTS section for more information). If internal context has not been set up for the current input source, the variable is set to Perl undefined value. For example, the following pattern matches the SSH "Connection from" event, and sets $0 to the entire event line, both $1 and $+{ip} to the IP address of the remote node, and $2 to the port number at the remote node:

        ptype=RegExp
        pattern=sshd\[\d+\]: Connection from (?<ip>[\d.]+) port (\d+)

    If the matching event comes from input file /var/log/messages with internal context MSGS, the $+{_inputsrc} and $+{_intcontext} variables are set to strings "/var/log/messages" and "MSGS", respectively.

Also, SEC allows for match caching and for the creation of additional named match variables through variable maps which are defined with the varmap* fields. Variable map is a list of name=number mappings separated by semicolons, where name is the name for the named variable and number identifies a numbered match variable that is set by the regular expression.
Each name must begin with a letter and consist of letters, digits and underscores. After the regular expression has matched, named variables specified by the map are created from corresponding numbered variables. If the same named variable is set up both from the regular expression and variable map, the map takes precedence. If name is not followed by the equal sign and number in the varmap* field, it is regarded as a common name for all match variables and their values from a successful match. This name is used for caching a successful match by the pattern -- match variables and their values are stored in the memory-based pattern match cache under name. Cached match results can be reused by Cached and NCached patterns. Note that before processing each new input line, previous content of the pattern match cache is cleared. Also note that a successful pattern match is cached even if the subsequent context expression evaluation yields FALSE (see INPUT PROCESSING AND TIMING section for more information). For example, consider the following pattern definition: ptype=regexp pattern=(?i)(\S+\.mydomain).*printer: toner\/ink low varmap=printer_toner_or_ink_low; message=0; hostname=1 The pattern matches "printer: toner/ink low" messages in a case insensitive manner from printers belonging to .mydomain. Note that the printer hostname is assigned to $1 and $+{hostname}, while the whole message line is assigned to $0 and $+{message}. If the message comes from file /var/log/test which does not have an internal context defined, the $+{_inputsrc} variable is set to string "/var/log/test", while $+{_intcontext} is set to Perl undefined value. Also, these variables and their values are stored to the pattern match cache under the name "printer_toner_or_ink_low". The following pattern definition produces a match if the last two input lines are AAA and BBB: ptype=regexp2 pattern=^AAA\nBBB$ varmap=aaa_bbb Note that with the --nojointbuf option the pattern only matches if the matching lines are coming from the *same* input file, while the --jointbuf option lifts that restriction. In the case of a match, $0 is set to "AAA<NEWLINE>BBB", $+{_inputsrc} to file name(s) for matching lines, and $+{_intcontext} to the name of current internal context. Also, these variable-value pairs are cached under the name "aaa_bbb". PerlFunc[N] pattern is a Perl function for matching the last N input lines L1, L2, ..., LN. The Perl function is compiled at SEC startup with the Perl eval() function, and eval() must return a code reference for the pattern to be valid (see also PERL INTEGRATION section). The function is called in Perl list context, and with the --jointbuf option, lines L1, L2, ..., LN and the names of corresponding input files F1, F2, ..., FN are passed to the function as parameters: function(L1, L2, ..., LN, F1, F2, ..., FN) Note that with the --nojointbuf option, the function is called with a single file name parameter F, since lines L1, ..., LN are coming from the same input file: function(L1, L2, ..., LN, F) Also note that if the input line is a synthetic event, the input file name is Perl undefined value. If the function returns several values or a single value that is true in Perl boolean context, the pattern matches. If the function returns no values or a single value that is false in Perl boolean context (0, empty string or undefined value), the pattern does not match. If the pattern matches, return values will be assigned to numbered match variables ($1, $2, etc.). 
Like with RegExp patterns, the $0 variable is set to matching input line(s), the $+{_inputsrc} variable is set to input file name(s), the $+{_intcontext} variable is set to the name of current internal context, and named match variables can be created from variable maps. For example, consider the following pattern definition: ptype=perlfunc2 pattern=sub { return ($_[0] cmp $_[1]); } The pattern compares last two input lines in a stringwise manner ($_[1] holds the last line and $_[0] the preceding one), and matches if the lines are different. Note that the result of the comparison is assigned to $1, while two matching lines are concatenated (with the newline character between them) and assigned to $0. If matching lines come from input file /var/log/mylog with internal context TEST, the $+{_inputsrc} and $+{_intcontext} variables are set to strings "/var/log/mylog" and "TEST", respectively. The following pattern produces a match for any line, and sets $1, $2 and $3 variables to strings "abc", "def" and "ghi", respectively (also, $0 is set to the whole input line, $+{_inputsrc} to the input file name, and $+{_intcontext} to the name of internal context associated with input file $+{_inputsrc}): ptype=perlfunc pattern=sub { return ("abc", "def", "ghi"); } The following pattern definition produces a match if the input line is not a synthetic event and contains either the string "abc" or "def". The $0 variable is set to the matching line. If matching line comes from /var/log/test without an internal context, $+{_intcontext} is set to Perl undefined value, while $1, $+{file} and $+{_inputsrc} are set to string "/var/log/test": ptype=perlfunc pattern=sub { if (defined($_[1]) && $_[0] =~ /abc|def/) \ { return $_[1]; } return 0; } varmap= file=1 Finally, if a function pattern returns a single value which is a reference to a Perl hash, named match variables are created from key-value pairs in the hash. For example, the following pattern matches a line if it contains either the string "three" or "four". Apart from setting $0, $+{_inputsrc} and $+{_intcontext}, the pattern also creates match variables $+{three} and $+{four}, and sets them to 3 and 4, respectively: ptype=perlfunc pattern=sub { my(%hash); \ if ($_[0] !~ /three|four/) { return 0; } \ $hash{"three"} = 3; $hash{"four"} = 4; return \%hash; } Cached pattern is a name that is searched in the pattern match cache (entries are stored into the cache with the varmap* fields). If an entry with the given name is found in the cache, the pattern matches, and match variables and values are retrieved from the cache. For example, if the input line matches the following pattern ptype=perlfunc pattern=sub { if (defined($_[1]) && $_[0] =~ /abc|def/) \ { return $_[1]; } return 0; } varmap=abc_or_def_found; file=1 then the entry "abc_or_def_found" is created in the pattern match cache. Therefore, the pattern ptype=cached pattern=abc_or_def_found will also produce a match for this input line, and set the $0, $1, $+{file}, $+{_inputsrc}, and $+{_intcontext} variables to values from the previous match. NSubStr[N] like SubStr[N], except that the result of the match is negated. Note that this pattern type does not support match variables. NRegExp[N] like RegExp[N], except that the result of the match is negated and variable maps are not supported. Note that the only match variables supported by this pattern type are $0, $+{_inputsrc}, and $+{_intcontext}. 
NPerlFunc[N] like PerlFunc[N], except that the result of the match is negated and variable maps are not supported. Note that the only match variables supported by this pattern type are $0, $+{_inputsrc}, and $+{_intcontext}. NCached like Cached, except that the result of the match is negated. Note that this pattern type does not support match variables. TValue pattern is a truth value, with TRUE and FALSE being legitimate values. TRUE always matches an input line, while FALSE never matches anything. Note that this pattern type does not support match variables. When match variables are substituted, each "$$" sequence is interpreted as a literal dollar sign ($) which allows for masking match variables. For example, the string "Received $$1" becomes "Received $1" after substitution, while "Received $$$1" becomes "Received $<value_of_1st_var>". In order to disambiguate numbered match variables from the following text, variable number must be enclosed in braces. For example, the string "Received ${1}0" becomes "Received <value_of_1st_var>0" after substitution, while the string "Received $10" would become "Received <value_of_10th_var>". If the match variable was not set by the pattern, it is substituted with an empty string (i.e., a zero-width string). Thus the string "Received $10!" becomes "Received !" after substitution if the pattern did not set $10. (Note that prior to SEC-2.6, unset variables were *not* substituted.) In the current version of SEC, names of $+{name} match variables must comply with the following naming convention -- the first character can be a letter or underscore, while remaining characters can be letters, digits, underscores and exclamation marks (!). However, when setting named match variables from a pattern, it is recommended to begin the variable name with a letter, since names of special automatically created variables begin with an underscore (e.g., $+{_inputsrc}). After the pattern has matched an event and match variables have been set, it is also possible to refer to previously cached match variables with the syntax $:{entryname:varname}, where entryname is the name of the pattern match cache entry, and varname is the name of the variable stored under the entry. For example, if the variable $+{ip} has been previously cached under the entry "SSH", it can be referred as $:{SSH:ip}. For the reasons of efficiency, the $:{entryname:varname} syntax is not supported for fast pattern types which do not set match variables (i.e., SubStr, NSubStr, NCached and TValue). Note that since Pair and PairWithWindow rules have two patterns, match variables of the first pattern are shadowed for some rule fields when the second pattern matches and sets variables. In order to refer to shadowed variables, their names must begin with % instead of $ (e.g., %1 refers to match variable $1 set by the first pattern). However, the use of the %-prefix is only valid under the following circumstances -- *both* pattern types support match variables *and* in the given rule field match variables from *both* patterns can be used. The %-prefixed match variables are masked with the "%%" sequence (like regular match variables with "$$"). Similarly, the braces can be used for disambiguating the %-prefixed variables from the following text. Finally, note that the second pattern of Pair and PairWithWindow rules may contain match variables if the second pattern is of type SubStr, NSubStr, Regexp, or NRegExp. The variables are substituted at runtime with the values set by the first pattern. 
If the pattern is a regular expression, all special characters inside substituted values are masked with the Perl quotemeta() function and the final expression is checked for correctness.
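As an illustration of the last point, consider the following hedged sketch of a Pair rule (the Pair rule type and its ptype2/pattern2/desc2/action2 fields are described later in this manual; the event format mirrors the sshd examples above). The second pattern reuses the user name and IP address captured by the first pattern, and since pattern2 contains no capture groups of its own, the %-prefixed variables are used in desc2 and action2 to refer back to the values set by the first pattern:

    type=Pair
    ptype=RegExp
    pattern=sshd\[\d+\]: Failed password for (\S+) from ([\d.]+) port \d+ ssh2
    desc=SSH login failure for user $1 from $2
    action=write - user $1 failed to log in from $2
    ptype2=RegExp
    pattern2=sshd\[\d+\]: Accepted password for $1 from $2 port \d+ ssh2
    desc2=SSH login success for user %1 from %2 after an earlier failure
    action2=write - user %1 eventually logged in from %2
    window=3600

At runtime, $1 and $2 in pattern2 are replaced with the user name and IP address from the first match, with special characters in the substituted values masked by quotemeta() as described above.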
CONTEXTS AND CONTEXT EXPRESSIONS
A SEC context is a memory based entity which has one or more names, a lifetime, and an event store. Also, an action list can be set up for a context which is executed immediately before the context expires. For example, the action

    create MYCONTEXT 3600 (report MYCONTEXT /bin/mail root@localhost)

creates the context MYCONTEXT which has a lifetime of 3600 seconds and empty event store. Also, immediately before MYCONTEXT expires and is dropped from memory, the action

    report MYCONTEXT /bin/mail root@localhost

is executed which mails the event store of MYCONTEXT to root@localhost.

Contexts can be used for event aggregation and reporting. Suppose the following actions are executed in this order:

    create MYCONTEXT
    add MYCONTEXT This is a test
    alias MYCONTEXT MYALIAS
    add MYALIAS This is another test
    report MYCONTEXT /bin/mail root@localhost
    delete MYALIAS

The first action creates the context MYCONTEXT with infinite lifetime and empty event store. The second action appends the string "This is a test" to the event store of MYCONTEXT. The third action sets up an alias name MYALIAS for the context (names MYCONTEXT and MYALIAS refer to the same context data structure). The fourth action appends the string "This is another test" to the event store of the context. The fifth action writes the lines

    This is a test
    This is another test

to the standard input of the /bin/mail root@localhost command. The sixth action deletes the context data structure from memory and drops its names MYCONTEXT and MYALIAS.

Since contexts are accessible from all rules and event correlation operations, they can be used for data sharing and joining several rules into one event correlation scheme. In order to check for the presence of contexts from rules, context expressions can be employed. Context expressions are boolean expressions that are defined with the context* rule fields. Context expressions can be used for restricting the matches produced by patterns, since if the expression evaluates FALSE, the rule will not match an input event.

The context expression accepts context names, Perl miniprograms, Perl functions, and pattern match cache lookups as operands. These operands can be combined with the following operators:

    !  - logical NOT,
    && - short-circuit logical AND,
    || - short-circuit logical OR.

In addition, parentheses can be used for grouping purposes.

If the operand does not contain any special operators (such as -> or :>, see below), it is treated as a context name. Context name operands may contain match variables, but may not contain whitespace. If the context name refers to an existing context, the operand evaluates TRUE, otherwise it evaluates FALSE. For example, consider the following rule sequence:

    type=Single
    ptype=RegExp
    pattern=Test: (\d+)
    desc=test
    action=create CONT_$1

    type=Single
    ptype=RegExp
    pattern=Test2: (\d+) (\d+)
    context=CONT_$1 && CONT_$2
    desc=test
    action=write - Both $1 and $2 have been seen in the past

If the following input lines appear in this order

    Test: 19
    Test: 261
    Test2: 19 787
    Test: 787
    Test2: 787 261

the first input line matches the first rule which creates the context CONT_19, and similarly, the second input line triggers the creation of the context CONT_261. The third input line "Test2: 19 787" matches the regular expression Test2: (\d+) (\d+) but does not match the second rule, since the boolean expression CONT_19 && CONT_787 evaluates FALSE (context CONT_19 exists, but context CONT_787 doesn't). The fourth input line matches the first rule which creates the context CONT_787.
The fifth input line "Test2: 787 261" matches the second rule, since the boolean expression CONT_787 && CONT_261 evaluates TRUE (both context CONT_787 and context CONT_261 exist), and therefore the string "Both 787 and 261 have been seen in the past" is written to standard output.

If the context expression operand contains the arrow operator (->), the text following the arrow must be a valid Perl function definition that is compiled at SEC startup with the Perl eval() function. The eval() must return a code reference (see also PERL INTEGRATION section for more information). If any text precedes the arrow, it is treated as a list of parameters for the function. Parameters must be separated by whitespace and may contain match variables. In order to evaluate the context expression operand, the Perl function is called in the Perl scalar context. If the return value of the function is true in the Perl boolean context, the operand evaluates TRUE, otherwise it evaluates FALSE. For example, the following rule matches an SSH login failure event if the login attempt comes from a privileged port of the client host:

    type=Single
    ptype=RegExp
    pattern=sshd\[\d+\]: Failed .+ for (\S+) from [\d.]+ port (\d+) ssh2
    context=$2 -> ( sub { $_[0] < 1024 } )
    desc=SSH login failure for $1 priv port $2
    action=write - SSH login failure for user $1 from a privileged port $2

When the following message from SSH daemon appears

    Dec 16 16:24:59 myserver sshd[13685]: Failed password for risto from 10.12.2.5 port 41063 ssh2

the regular expression of the rule matches this message, and the value of the $2 match variable (41063) is passed to the Perl function

    sub { $_[0] < 1024 }

This function returns true if its input parameter is less than 1024 and false otherwise, and therefore the above message will not match the rule. However, the following message

    Dec 16 16:25:17 myserver sshd[13689]: Failed password for risto from 10.12.2.5 port 1023 ssh2

matches the rule, and the string "SSH login failure for user risto from a privileged port 1023" is written to standard output.

As another example, the following context expression evaluates TRUE if the /var/log/messages file does not exist or was last modified more than 1 hour ago (note that the Perl function takes no parameters):

    context= -> ( sub { my(@stat) = stat("/var/log/messages"); \
                  return (!scalar(@stat) || time() - $stat[9] > 3600); } )

If the context expression operand contains the :> operator, the text that follows :> must be a valid Perl function definition that is compiled at SEC startup with the Perl eval() function. The eval() must return a code reference (see also PERL INTEGRATION section for more information). If any text precedes the :> operator, it is treated as a list of parameters for the function. Parameters must be separated by whitespace and may contain match variables. It is assumed that each parameter is a name of an entry in the pattern match cache. If an entry with the given name does not exist, Perl undefined value is passed to the function. If an entry with the given name exists, a reference to the entry is passed to the Perl function. Internally, each pattern match cache entry is implemented as a Perl hash which contains all match variables for the given entry. In the hash, each key-value pair represents some variable name and value, e.g., if cached match variable $+{ip} is holding 10.1.1.1, the hash contains the value 10.1.1.1 with the key ip. In order to evaluate the context expression operand, the Perl function is called in the Perl scalar context.
If the return value of the function is true in the Perl boolean context, the operand evaluates TRUE, otherwise it evaluates FALSE. For example, consider the following rule sequence:

    type=Single
    ptype=RegExp
    pattern=sshd\[\d+\]: (?<status>Accepted|Failed) .+ \
            for (?<invuser>invalid user )?(?<user>\S+) from (?<ip>[\d.]+) \
            port (?<port>\d+) ssh2
    varmap=SSH
    continue=TakeNext
    desc=parse SSH login events and pass them to following rules
    action=none

    type=Single
    ptype=Cached
    pattern=SSH
    context=SSH :> ( sub { $_[0]->{"status"} eq "Failed" && \
                           $_[0]->{"port"} < 1024 && \
                           defined($_[0]->{"invuser"}) } )
    desc=Probe of invalid user $+{user} from privileged port of $+{ip}
    action=pipe '%t: %s' /bin/mail -s 'SSH alert' root@localhost

The first rule matches and parses SSH login messages, and stores parsing results to the pattern match cache under the name SSH. The pattern of the second rule (defined with ptype=Cached and pattern=SSH) matches any input event for which the entry SSH has been previously created in the pattern match cache (in other words, the event has been recognized and parsed as an SSH login message). For each matching event, the second rule passes the reference to the SSH cache entry to the Perl function

    sub { $_[0]->{"status"} eq "Failed" && \
          $_[0]->{"port"} < 1024 && \
          defined($_[0]->{"invuser"}) }

The function checks the values of $+{status}, $+{port}, and $+{invuser} match variables under the SSH entry, and returns true if $+{status} equals to the string "Failed" (i.e., login attempt failed), the value of $+{port} is less than 1024, and $+{invuser} holds a defined value (i.e., user account does not exist). If the function (and thus context expression) evaluates TRUE, the rule sends a warning e-mail to root@localhost that a non-existing user account was probed from a privileged port of a client host.

If the context expression operand begins with the varset keyword, the following string is treated as a name of an entry in the pattern match cache. The operand evaluates TRUE if the given entry exists, and FALSE otherwise. For example, the following context expression definition evaluates TRUE if the pattern match cache entry SSH exists and under this entry, the value of the match variable $+{user} equals to the string "risto":

    context=varset SSH && SSH :> ( sub { $_[0]->{"user"} eq "risto" } )

If the context expression operand begins with the equal sign (=), the following text must be a Perl miniprogram which is a valid parameter for the Perl eval() function. The miniprogram may contain match variables. In order to evaluate the Perl miniprogram operand, it will be compiled and executed by calling the Perl eval() function in the Perl scalar context (see also PERL INTEGRATION section). If the return value from eval() is true in the Perl boolean context, the operand evaluates TRUE, otherwise it evaluates FALSE. Please note that unlike Perl functions of -> and :> operators which are compiled once at SEC startup, Perl miniprograms are compiled before each execution, and their evaluation is thus considerably more expensive. For example, the following context expression evaluates TRUE when neither the context C1 nor the context C2 exists and the value of the $1 variable equals to the string "myhost.mydomain":

    context=!(C1 || C2) && =("$1" eq "myhost.mydomain")

Since && is a short-circuiting operator, the Perl code "$1" eq "myhost.mydomain" is *not* evaluated if either C1 or C2 exists.
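Because the miniprogram in the previous example is recompiled every time the operand is evaluated, the same check can often be moved into a -> operator function which is compiled only once at SEC startup. The following line is a hedged, roughly equivalent sketch; since the parameter list of the -> operator is split on whitespace, this form assumes that the value of $1 never contains whitespace:

    context=!(C1 || C2) && $1 -> ( sub { $_[0] eq "myhost.mydomain" } )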
Note that since Perl functions and miniprograms may contain strings that clash with context expression operators (e.g., '!'), it is recommended to enclose them in parentheses, e.g., context=$1 $2 -> ( sub { $_[0] != $_[1] } ) context= =({my($temp) = 0; !$temp;}) Also, if function parameter lists contain such strings, they should be enclosed in parentheses in the similar way: context=($1! $2) -> ( sub { $_[0] eq $_[1] } ) If the whole context expression is enclosed in square brackets [], e.g., [MYCONTEXT1 && !MYCONTEXT2], SEC evaluates the expression *before* pattern matching (normally, the pattern is matched with input line(s) first, so that match variables would be initialized and substituted before the expression is evaluated). However, if the expression does not contain match variables and many input events are known to match the pattern but not the expression, the []-operator could save substantial amount of CPU time.
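For instance, in the following hedged sketch (the context name MAINTENANCE is hypothetical), the context expression contains no match variables, so enclosing it in square brackets lets SEC skip the relatively expensive regular expression match altogether whenever the MAINTENANCE context exists:

    type=Single
    ptype=RegExp
    pattern=sshd\[\d+\]: Failed .+ for (\S+) from [\d.]+ port \d+ ssh2
    context=[!MAINTENANCE]
    desc=SSH login failure for user $1
    action=write - SSH login failure for user $1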
ACTIONS, ACTION LISTS AND ACTION LIST VARIABLES
Action lists are defined with the action* rule fields. An action list consists of action definitions that are separated by semicolons. Each action definition begins with a keyword specifying the action type. Depending on the action type, parameters may follow, and non-constant parameters may contain match variables. For instance, if the $1 and $2 match variables have the values "test1" and "the second test", respectively, the action

    create MYCONT_$1 60

creates the context MYCONT_test1 with the lifetime of 60 seconds, while the action

    write - The names of tests: $1, $2

writes the string "The names of tests: test1, the second test" to standard output.

Apart from a few exceptions explicitly noted, match variables are substituted at the earliest opportunity in action lists. For example, consider the following rule definition:

    type=SingleWithThreshold
    ptype=RegExp
    pattern=sshd\[\d+\]: Failed .+ for (\S+) from [\d.]+ port \d+ ssh2
    desc=Three SSH login failures within 1m
    action=pipe 'Three SSH login failures, first user is $1' \
           /bin/mail -s 'SSH login alert' root@localhost
    window=60
    thresh=3

When this rule matches an SSH login failure event which starts an event correlation operation, the operation substitutes the $1 match variable in the action list definition with the user name from the matching event, and user names from further events processed by this event correlation operation are not considered for $1. For example, if the following events are observed

    Dec 16 16:24:52 myserver sshd[13671]: Failed password for root from 10.12.2.5 port 29736 ssh2
    Dec 16 16:24:59 myserver sshd[13685]: Failed password for risto from 10.12.2.5 port 41063 ssh2
    Dec 16 16:25:01 myserver sshd[13689]: Failed password for oracle from 10.12.2.5 port 11204 ssh2

then all events are processed by the same operation, and the message "Three SSH login failures, first user is root" is mailed to root@localhost.

In order to use semicolons inside a non-constant parameter, the parameter must be enclosed in parentheses (the outermost set of parentheses will be removed by SEC during configuration file parsing). For example, the following action list consists of delete and shellcmd actions:

    action=delete MYCONTEXT; shellcmd (rm /tmp/sec1.tmp; rm /tmp/sec2.tmp)

The delete action deletes the context MYCONTEXT, while the shellcmd action executes the command line rm /tmp/sec1.tmp; rm /tmp/sec2.tmp. Since the command line contains a semicolon, it has been enclosed in parentheses, since otherwise the semicolon would be mistakenly considered a separator between two actions.

Apart from match variables, SEC supports action list variables in action lists which facilitate data sharing between actions and Perl integration. Each action list variable has a name which must begin with a letter and consist of letters, digits and underscores. Names of built-in variables usually start with a dot character (.), so that they can be distinguished from user defined variables. In order to refer to an action list variable, its name must be prefixed by a percent sign (%). Unlike match variables, action list variables can only be used in action lists and they are substituted with their values immediately before the action list execution. Also, action list variables continue to exist after the current action list has been executed and can be employed in action lists of other rules.
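For instance, the following two-rule sketch (the events and the variable name %backuphost are hypothetical, given here for illustration only) uses the assign action, which is also used in the timestamp example below, to share a value between rules: the first rule remembers the host name from the most recent "backup started" event, and the second rule reuses that value when a failure is reported:

    type=Single
    ptype=RegExp
    pattern=backup started on (\S+)
    desc=remember the host of the last started backup
    action=assign %backuphost $1

    type=Single
    ptype=SubStr
    pattern=backup failed
    desc=report a failed backup
    action=write - backup failed on host %backuphost

Because action list variables have a global scope, %backuphost keeps its value between rule matches until another value is assigned to it.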
The following action list variables are predefined by SEC: %s operation description string (the value of the desc field after match variables have been substituted with their values). Note that for the action2 field of Pair and PairWithWindow rules, the %s variable is set by evaluating the desc2 field of the rule. %t the time in human-readable format, as returned by the Perl localtime() function in the Perl scalar context (e.g., Fri Feb 19 17:54:18 2016). %u the time in seconds since Epoch, as returned by the time(2) system call. %.sec number of seconds after the minute, in the range 00-59 (the value consists of two digits and is zero padded on the left). %.min number of minutes after the hour, in the range 00-59 (the value consists of two digits and is zero padded on the left). %.hour number of hours past midnight, in the range 00-23 (the value consists of two digits and is zero padded on the left). %.hmsstr the time in HH:MM:SS format (hours, minutes and seconds separated by colons, e.g., 09:32:04 or 18:06:02). %.mday day of the month, in the range 01-31 (the value consists of two digits and is zero padded on the left). %.mdaystr day of the month as a string (the value consists of two characters and is space padded on the left, e.g., " 1", " 4", " 9", or "25"). %.mon month, in the range 01-12 (the value consists of two digits and is zero padded on the left). %.monstr abbreviated name of the month according to the current locale, as returned by the %b specification of the strftime(3) library call (e.g., Jan, May, or Sep). %.year year (e.g., 1998 or 2016). %.wday day of the week, in the range 0-6 (0 denotes Sunday). %.wdaystr abbreviated name of the day of the week according to the current locale, as returned by the %a specification of the strftime(3) library call (e.g., Mon, Wed, or Sat). %.tzname name of the timezone according to the current locale, as returned by the %Z specification of the strftime(3) library call (e.g., UTC or EET). %.tzoff timezone offset from UTC, as returned by the %z specification of the strftime(3) library call (e.g., -0500 or +0200). %.tzoff2 timezone offset from UTC in +hh:mm/-hh:mm format (e.g., -05:00 or +02:00), provided that the %z specification of the strftime(3) library call returns the value in +hhmm/-hhmm format (if the value does not follow this format, %.tzoff2 is set to an empty string). %.nl newline character. %.cr carriage return character. %.tab tabulation character. %.sp space character. %.chr0, ..., %.chr31 ASCII 0..31 control characters (e.g., %.chr7 is bell and %.chr12 is form feed character). For example, the following action list assigns the current time in human-readable format and the string "This is a test event" to the %text action list variable, and mails the value of %text to root@localhost: action=assign %text %t: This is a test event; \ pipe '%text' /bin/mail root@localhost If the action list is executed at Nov 19 10:58:51 2015, the assign action sets the %text action list variable to the string "Thu Nov 19 10:58:51 2015: This is a test event", while the pipe action mails this string to root@localhost. Note that unlike match variables, action list variables have a global scope, and accessing the value of the %text variable in action lists of other rules will thus yield the string "Thu Nov 19 10:58:51 2015: This is a test event" (until another value is assigned to %text). In order to disambiguate the variable from the following text, the variable name must be enclosed in braces. 
For example, the following action action=write - %{.year}-%{.mon}-%{.mday}T%{.hmsstr}%{.tzoff2} writes a timestamp in ISO 8601 format to standard output, e.g., 2016-02-24T07:34:01+02:00 (replacing %{.mday} with %.mday in the above action would mistakenly create a reference to %.mdayT variable). When action list variables are substituted with their values, each sequence "%%" is interpreted as a literal percent sign (%) which allows for masking the variables. For example, the string "s%%t" becomes "s%t" after substitution, not "s%<timestamp>". However, note that if %-prefixed match variables are supported for the action2 field of the Pair or PairWithWindow rule, the sequence "%%%" must be used in action2 for masking a variable, since the string goes through *two* variable substitution rounds (first for %-prefixed match variables and then for action list variables, e.g., the string "s%%%t" first becomes "s%%t" and finally "s%t"). Whenever a rule field goes through several substitution rounds, the $ or % characters are masked inside values substituted during earlier rounds, in order to avoid unwanted side effects during later rounds. If the action list variable has not been set, it is substituted with an empty string (i.e., a zero-width string). Thus the string "Value of A is: %a" becomes "Value of A is: " after substitution if the variable %a is unset. (Note that prior to SEC-2.6, unset variables were *not* substituted.) Finally, the values are substituted as strings, therefore values of other types (e.g., references) lose their original meaning, unless explicitly noted otherwise (e.g., if a Perl function reference is stored to an action list variable, the function can later be invoked through this variable with the call action). SEC supports the following actions (optional parameters are enclosed in square brackets): none No action. logonly [<string>] Message <string> is logged to destinations given with the --log and --syslog options. The level of the log message is set to 4 (see the --debug option for more information on log message levels). Default value for <string> is %s. For example, consider the following action list definition: action=logonly This is a test The above logonly action logs the message "This is a test" with level 4. write <filename> [<string>] String <string> with a terminating newline is written to the file <filename> (<filename> may not contain whitespace). File may be a regular file, named pipe, or standard output (denoted by -). If the file is a regular file, <string> is appended to the end of the file. If the file does not exist, it is created as a regular file before writing. Note that the file will not be closed after the action completes, and the following write actions will access an already open file. However, several signals cause the file to be closed and reopened, and for rotating files created with write action, the SIGUSR2 signal can be used (see SIGNALS section for more information). Default value for <string> is %s. For example, consider the following action list definition: action=write /var/log/test.log %t $0 The above write action prepends human-readable timestamp and separating space character to the value of the $0 match variable, and the resulting string is appended to file /var/log/test.log with terminating newline. writen <filename> [<string>] Similar to the write action, except that the string <string> is written without a terminating newline. Note that write and writen actions share the same filehandle for accessing the file. 
For example, consider the following action list definition: action=writen - ab; writen - c; writen - %.nl The above action list writes the string "abc<NEWLINE>" to standard output, and is thus identical to write - abc (and also to writen - abc%.nl). closef <filename> Close the file <filename> that has been previously opened by the write or writen action (<filename> may not contain whitespace). owritecl <filename> [<string>] Similar to the write action, except that the file <filename> is opened and closed at each write. Also, the string <string> is written without a terminating newline. If the file has already been opened by a previous write action, owritecl does not use existing filehandle, but opens and closes the file separately. For example, consider the following action list definition: action=owritecl /var/log/test-%{.year}%{.mon}%{.mday} $0%{.nl} The above owritecl action appends the value of the $0 match variable with terminating newline to file /var/log/test-YYYYMMDD, where YYYYMMDD reflects the current date (e.g., if the current date is April 1 2018, the file is /var/log/test-20180401). Since the file is closed after each write, the old file will not be left open when date changes. udgram <filename> [<string>] String <string> is written to the UNIX datagram socket <filename> (<filename> may not contain whitespace). Note that the socket will not be closed after the action completes, and the following udgram actions will access an already open socket. However, several signals cause the socket to be closed and reopened (see SIGNALS section for more information). Default value for <string> is %s. For example, consider the following action list definition: action=udgram /dev/log <30>%.monstr %.mdaystr %.hmsstr sec: This is a test The above udgram action sends a syslog message to local syslog daemon via /dev/log socket, where message priority is 30 (corresponds to the "daemon" facility and "info" level), syslog tag is "sec" and message text is "This is a test". Note that message substring "%.monstr %.mdaystr %.hmsstr" evaluates to timestamp in BSD syslog format (e.g., Mar 31 15:36:07). closeudgr <filename> Close the UNIX datagram socket <filename> that has been previously opened by the udgram action (<filename> may not contain whitespace). ustream <filename> [<string>] String <string> is written to the UNIX stream socket <filename> (<filename> may not contain whitespace). Note that the socket will not be closed after the action completes, and the following ustream actions will access an already open socket. However, several signals cause the socket to be closed and reopened (see SIGNALS section for more information). Default value for <string> is %s. closeustr <filename> Close the UNIX stream socket <filename> that has been previously opened by the ustream action (<filename> may not contain whitespace). udpsock <host>:<port> [<string>] String <string> is sent to the UDP port <port> of the host <host>. Note that the UDP socket which is used for communication will not be closed after the action completes, and the following udpsock actions for the same remote peer will use an already existing socket. However, several signals cause the socket to be closed and recreated (see SIGNALS section for more information). Default value for <string> is %s. 
For example, consider the following action list definition: action=udpsock mysrv:514 <13>%.monstr %.mdaystr %.hmsstr myhost test: $0 The above udpsock action sends a BSD syslog message to port 514/udp of remote syslog server mysrv, where message priority is 13 (corresponds to the "user" facility and "notice" level), name of the local host is "myhost", syslog tag is "test" and the message text is the value of the $0 match variable. closeudp <host>:<port> Close the UDP socket for peer <host>:<port> that has been previously opened by the udpsock action. tcpsock <host>:<port> [<string>] String <string> is sent to the TCP port <port> of the host <host>. The timeout value given with the --socket-timeout option determines for how many seconds SEC will attempt to establish a connection to the remote peer. If the connection establishment does not succeed immediately, the tcpsock action buffers <string> in memory for later sending to the remote peer. Note that the relevant TCP socket will not be closed after <string> has been transmitted, and the following tcpsock actions for the same peer will use an already existing socket. However, several signals cause the socket to be closed and recreated (see SIGNALS section for more information). Default value for <string> is %s. For example, consider the following action list definition: action=tcpsock grsrv:2003 ssh.login.failures %{num} %{u}%{.nl} The above tcpsock action sends the value of the action list variable %{num} to port 2003/tcp of the Graphite server grsrv, so that the value is recorded under metric path ssh.login.failures. Note that the %{u} action list variable evaluates to the current time in seconds since Epoch and is used for setting the timestamp for the recorded value. closetcp <host>:<port> Close the TCP socket for peer <host>:<port> that has been previously opened by the tcpsock action. shellcmd <cmdline> Fork a process for executing command line <cmdline>. If <cmdline> contains shell metacharacters, <cmdline> is parsed by shell. If the --quoting option was specified and <cmdline> contains %s variables, the value of %s is quoted with single quotes before substituting it into <cmdline>; if the value of %s contains single quotes, they are masked with backslashes (e.g., abc is converted to 'abc' and aa'bb is converted to 'aa'\''bb'). For additional information, see CHILD PROCESSES section. For example, consider the following action list definition: action=shellcmd (cat /tmp/report | mail root; rm -f /tmp/report); \ logonly Report sent to user root The shellcmd action of this action list executes the command line cat /tmp/report | mail root; rm -f /tmp/report and the logonly action logs the message "Report sent to user root". Since the command line contains a semicolon which is used for separating shellcmd and logonly actions, the command line is enclosed in parentheses. spawn <cmdline> Similar to the shellcmd action, except that each line from the standard output of <cmdline> becomes a synthetic event and will be treated like a line from input file (see the event action for more information). If the --intcontexts command line option is given, internal context _INTERNAL_EVENT is set up before each synthetic event is processed (see INTERNAL EVENTS AND CONTEXTS section for more information). For example, consider the following action list definition: action=spawn (cat /tmp/events; rm -f /tmp/events) The above spawn action will generate synthetic events from all lines in file /tmp/events and remove the file.
Since the command line contains a semicolon which is used for separating actions, the command line is enclosed in parentheses. cspawn <name> <cmdline> Similar to the spawn action, except that if the --intcontexts command line option is given, internal context <name> is set up for each synthetic event. cmdexec <cmdline> Fork a process for executing command line <cmdline>. Unlike shellcmd action, <cmdline> is not parsed by shell, but split into arguments by using whitespace as a separator, and passed to execvp(3) for execution. Note that splitting into arguments is done when cmdexec action is loaded from the configuration file and parsed, not at runtime (e.g., if <cmdline> is /usr/local/bin/mytool $1 $2, the values of $1 and $2 variables are regarded as single arguments even if the values contain whitespace). For additional information, see CHILD PROCESSES section. For example, consider the following action list definition: action=cmdexec rm /tmp/report* The above cmdexec action will remove the file /tmp/report* without treating * as a file pattern character that matches any string. spawnexec <cmdline> Similar to the cmdexec action, except that each line from the standard output of <cmdline> becomes a synthetic event and will be treated like a line from input file (see the event action for more information). If the --intcontexts command line option is given, internal context _INTERNAL_EVENT is set up before each synthetic event is processed (see INTERNAL EVENTS AND CONTEXTS section for more information). cspawnexec <name> <cmdline> Similar to the spawnexec action, except that if the --intcontexts command line option is given, internal context <name> is set up for each synthetic event. pipe '[<string>]' [<cmdline>] Fork a process for executing command line <cmdline>. If <cmdline> contains shell metacharacters, <cmdline> is parsed by shell. The string <string> with a terminating newline is written to the standard input of <cmdline> (single quotes are used for disambiguating <string> from <cmdline>). If <string> contains semicolons, <string> must be enclosed in parentheses (e.g., pipe '($1;$2)' /bin/cat). Default value for <string> is %s. If <cmdline> is omitted, <string> is written to standard output. For additional information, see CHILD PROCESSES section. For example, consider the following action list definition: action=pipe 'Offending activities from host $1' /bin/mail root@localhost The above pipe action writes the line "Offending activities from host <hostname>" to the standard input of the /bin/mail root@localhost command which sends this line to root@localhost via e-mail (<hostname> is the value of the $1 match variable). pipeexec '[<string>]' [<cmdline>] Similar to the pipe action, except <cmdline> is not parsed by shell, but split into arguments by using whitespace as a separator, and passed to execvp(3) for execution. Note that splitting into arguments is done when pipeexec action is loaded from the configuration file and parsed, not at runtime (e.g., if <cmdline> is /usr/local/bin/mytool $1 $2, the values of $1 and $2 variables are regarded as single arguments even if the values contain whitespace). 
For example, consider the following action list definition: action=pipeexec 'Offending activities from host $1' \ /bin/mail -s SEC%{.sp}alert $2 The above pipeexec action writes the line "Offending activities from host <hostname>" to the standard input of the /bin/mail -s <subject> <user> command which sends this line to <user> via e-mail with subject <subject> (<hostname> is the value of the $1 match variable, while <user> is the value of the $2 match variable). Note that since <subject> is defined as SEC%{.sp}alert and does not contain whitespace, it is treated as a single argument for the -s flag of the /bin/mail command. However, since <subject> contains the %.sp action list variable, the string "SEC alert" will be used for the e-mail subject at runtime. Also, if the value of the $2 match variable contains shell metacharacters, they will not be interpreted by the shell. create [<name> [<time> [<action list>] ] ] Create a context with the name <name>, lifetime of <time> seconds, and empty event store. The <name> parameter may not contain whitespace and defaults to %s. The <time> parameter must evaluate to an unsigned integer at runtime. Specifying 0 for <time> or omitting the value means infinite lifetime. If <action list> is given, it will be executed when the context expires. If <action list> contains several actions, the list must be enclosed in parentheses. In <action list>, the internal context name _THIS may be used for referring to the current context (see INTERNAL EVENTS AND CONTEXTS section for a detailed discussion). If an already existing context is recreated with create, its remaining lifetime is set to <time> seconds, its action list is reinitialized, and its event store is emptied. For example, consider the following action list definition: action=write /var/log/test.log $0; create TIMER 3600 \ ( logonly Closing /var/log/test.log; closef /var/log/test.log ) The write action from the above action list appends the value of the $0 match variable to file /var/log/test.log, while the create action creates the context TIMER which will exist for 3600 seconds. Since this context is recreated at each write, the context can expire only if the action list has not been executed for more than 3600 seconds (i.e., the action list has last updated the file more than 1 hour ago). If that is the case, the action list logonly Closing /var/log/test.log; closef /var/log/test.log is executed which logs the message "Closing /var/log/test.log" with the logonly action and closes /var/log/test.log with the closef action. When the execution of this action list is complete, the TIMER context is deleted. delete [<name>] Delete the context <name>. The <name> parameter may not contain whitespace and defaults to %s. obsolete [<name>] Similar to the delete action, except that the action list of the context <name> (if present) is executed before deletion. set <name> <time> [<action list>] Change settings for the context <name>. The creation time of the context is set to the current time, and the lifetime of the context is set to <time> seconds. If the <action list> parameter is given, the context action list is set to <action list>, otherwise the context action list is not changed. The <name> parameter may not contain whitespace and defaults to %s. The <time> parameter must evaluate to an unsigned integer or hyphen (-) at runtime. Specifying 0 for <time> means infinite lifetime. If <time> equals to -, the creation time and lifetime of the context are not changed. 
If <action list> contains several actions, the list must be enclosed in parentheses. In <action list>, the internal context name _THIS may be used for referring to the current context (see INTERNAL EVENTS AND CONTEXTS section for a detailed discussion). For example, consider the following action list definition: action=set C_$1 30 ( logonly Context C_$1 has expired ) The above set action sets the context C_<suffix> to expire after 30 seconds with a log message about expiration (<suffix> is the value of the $1 match variable). alias <name> [<alias>] Create an alias name <alias> for the context <name>. After creation, both <alias> and <name> will point to the same context data structure, and can thus be used interchangeably for referring to the context. The <name> and <alias> parameters may not contain whitespace, and <alias> defaults to %s. If the context <name> does not exist, the alias name is not created. If the delete action is called for one of the context names, the context data structure is destroyed, and all context names (which are now pointers to unallocated memory) cease to exist. Also note that when the context expires, its action list is executed only once, no matter how many names the context has. unalias [<alias>] Drop an existing context name <alias>, so that it can no longer be used for referring to the given context. The <alias> parameter may not contain whitespace and defaults to %s. If the name <alias> is the last reference to the context, the unalias action is identical to delete. add <name> [<string>] String <string> is appended to the end of the event store of the context <name>. The <name> parameter may not contain whitespace, and the <string> parameter defaults to %s. If the context <name> does not exist, the context is created with an infinite lifetime, empty action list and empty event store (as with create <name>) before adding the string to event store. If <string> is a multi-line string (i.e., it contains newlines), it is split into lines, and each line is appended to the event store separately. For example, consider the following action list definition: action=add EVENTS This is a test; add EVENTS This is a test2 After the execution of this action list, the last two strings in the event store of the EVENTS context are "This is a test" and "This is a test2" (in that order). prepend <name> [<string>] Similar to the add action, except that the string <string> is prepended to the beginning of the event store of context <name>. For example, consider the following action list definition: action=prepend EVENTS This is a test; prepend EVENTS This is a test2 After the execution of this action list, the first two strings in the event store of the EVENTS context are "This is a test2" and "This is a test" (in that order). fill <name> [<string>] Similar to the add action, except that the event store of the context <name> is emptied before <string> is added. report <name> [<cmdline>] Fork a process for executing command line <cmdline>. If <cmdline> contains shell metacharacters, <cmdline> is parsed by shell. Also, write strings from the event store of the context <name> to the standard input of <cmdline>. Strings are written in the order they appear in the event store, with a terminating newline appended to each string. If the context <name> does not exist or its event store is empty, <cmdline> is not executed. The <name> parameter may not contain whitespace, and if <cmdline> is omitted, strings are written to standard output. 
For additional information, see CHILD PROCESSES section. For example, consider the following action list definition: action=create PID_$1 60 ( report PID_$1 /bin/mail root@localhost ); \ add PID_$1 Beginning of the report The above action list creates the context PID_<suffix> with the lifetime of 60 seconds and sets the first string in the context event store to "Beginning of the report" (<suffix> is the value of the $1 match variable). When the context expires, all strings from the event store will be mailed to root@localhost. reportexec <name> [<cmdline>] Similar to the report action, except <cmdline> is not parsed by shell, but split into arguments by using whitespace as a separator, and passed to execvp(3) for execution. Note that splitting into arguments is done when reportexec action is loaded from the configuration file and parsed, not at runtime (e.g., if <cmdline> is /usr/local/bin/mytool $1 $2, the values of $1 and $2 variables are regarded as single arguments even if the values contain whitespace). copy <name> %<var> Strings s1,...,sn from the event store of the context <name> are joined into a multi-line string "s1<NEWLINE>...<NEWLINE>sn", and this string is assigned to the action list variable %<var>. If the context <name> does not exist, the value of %<var> does not change. empty <name> [%<var>] Similar to the copy action, except that the event store of the context <name> will be emptied after the assignment. If %<var> is omitted, the content of the event store is dropped without an assignment. pop <name> %<var> Remove the last string from the event store of context <name>, and assign it to the action list variable %<var>. If the event store is empty, %<var> is set to empty string. If the context <name> does not exist, the value of %<var> does not change. shift <name> %<var> Remove the first string from the event store of context <name>, and assign it to the action list variable %<var>. If the event store is empty, %<var> is set to empty string. If the context <name> does not exist, the value of %<var> does not change. exists %<var> <name> If the context <name> exists, set the action list variable %<var> to 1, otherwise set %<var> to 0. getsize %<var> <name> Find the number of strings in the event store of context <name>, and assign this number to the action list variable %<var>. If the context <name> does not exist, %<var> is set to Perl undefined value. For example, consider the following action list definition: action=fill EVENTS Event1; add EVENTS Event2; add EVENTS Event3; \ pop EVENTS %temp1; shift EVENTS %temp2; getsize %size EVENTS This action list sets the %temp1 action list variable to Event3, %temp2 action list variable to Event1, and %size action list variable to 1. getaliases %<var> <name> Find all alias names for context <name>, join the names into a multi-line string "alias1<NEWLINE>...<NEWLINE>aliasn", and assign this string to the action list variable %<var>. If the context <name> does not exist, the value of %<var> does not change. getltime %<var> <name> Find the lifetime of context <name>, and assign this number to the action list variable %<var>. If the context <name> does not exist, the value of %<var> does not change. For example, consider the following action list definition: action=create TEST 10 ( getltime %time TEST; \ logonly Context TEST with %time second lifetime has expired ) The above create action configures the context TEST to log its lifetime when it expires. setltime <name> [<time>] Set the lifetime of context <name> to <time>. 
Specifying 0 for <time> or omitting the value means infinite lifetime. Note that unlike the set action, setltime does not adjust the context creation time. For example, if context TEST has been created at 12:01:00 with the lifetime of 60 seconds, then after invoking setltime TEST 30 at 12:01:20 the context would exist until 12:01:30, while invoking setltime TEST 10 would immediately expire the context. getctime %<var> <name> Find the creation time of context <name>, and assign this timestamp to the action list variable %<var>. The value assigned to %<var> is measured in seconds since Epoch (as reported by the time(2) system call). If the context <name> does not exist, the value of %<var> does not change. setctime <time> <name> Set the creation time of context <name> to <time>. The <time> parameter must evaluate to seconds since Epoch (as reported by the time(2) system call), and must reflect a time moment between the previous creation time and the current time (both endpoints included). For example, if context TEST has been created at 12:43:00 with the lifetime of 60 seconds, then after invoking setctime %u TEST at 12:43:25 the context would exist until 12:44:25 (the %u action list variable evaluates to current time in seconds since Epoch). event [<time>] [<string>] After <time> seconds, create a synthetic event <string>. If <string> is a multi-line string (i.e., it contains newlines), it is split into lines, and from each line a separate synthetic event is created. SEC will treat each synthetic event like a line from an input file -- the event will be matched against rules and it might trigger further actions. If the --intcontexts command line option is given, internal context _INTERNAL_EVENT is set up for synthetic event(s) (see INTERNAL EVENTS AND CONTEXTS section for more information). The <time> parameter is an integer constant. Specifying 0 for <time> or omitting the value means "now". Default value for <string> is %s. For example, consider the following action list definition: action=copy EVENTS %events; event %events The above action list will create a synthetic event from each string in the event store of the EVENTS context. tevent <time> [<string>] Similar to the event action, except that the <time> parameter may contain variables and must evaluate to an unsigned integer at runtime. cevent <name> <time> [<string>] Similar to the tevent action, except that if the --intcontexts command line option is given, internal context <name> is set up for synthetic event(s). reset [<offset>] [<string>] Terminate event correlation operation(s) with the operation description string <string>. Note that the reset action works only for operations started from the same configuration file. The <offset> parameter is used to refer to a specific rule in the configuration file. If <offset> is given, the operation started by the given rule is terminated (if it exists). If <offset> is an unsigned integer N, it refers to the N-th rule in the configuration file. If <offset> is 0, it refers to the current rule. If <offset> begins with the plus (+) or minus (-) sign, it specifies an offset from the current rule (e.g., -1 denotes the previous and +1 the next rule). Note that since Options rules are only processed when configuration files are loaded and they are not applied at runtime, Options rules are excluded when calculating <offset>.
If <offset> is not given, SEC checks for each rule from the current configuration file if an operation with <string> has been started by this rule, and the operation is terminated if it exists. Default value for <string> is %s. For additional information, see EVENT CORRELATION OPERATIONS section. For example, consider the following action list definition: action=reset -1 Ten login failures observed from $1; reset 0 If the above action list is executed by an event correlation operation, the first reset action will terminate another event correlation operation which has been started by the previous rule and has the operation description string "Ten login failures observed from <host>" (<host> is the value of the $1 match variable). The second reset action will terminate the calling operation itself. getwpos %<var> <offset> [<string>] Find the beginning of the event correlation window for an event correlation operation, and set the action list variable %<var> to this timestamp. The value assigned to %<var> is measured in seconds since Epoch (as reported by the time(2) system call). As with the reset action, the event correlation operation is identified by the operation description string <string> and the rule offset <offset>. If the operation does not exist, the value of %<var> does not change. Default value for <string> is %s. For additional information, see EVENT CORRELATION OPERATIONS section. For example, consider the following action list definition: action=getwpos %pos -1 Ten login failures observed from $1 The above getwpos action will find the beginning of the event correlation window for an event correlation operation which has been started by the previous rule and has the operation description string "Ten login failures observed from <host>" (<host> is the value of the $1 match variable). If the event correlation window begins at April 6 08:03:53 2018 UTC, the value 1523001833 will be assigned to the %pos action list variable. setwpos <time> <offset> [<string>] Set the beginning of the event correlation window to <time> for an event correlation operation (if it exists). The <time> parameter must evaluate to seconds since Epoch (as reported by the time(2) system call), and must reflect a time moment between the previous window position and the current time (both endpoints included). As with the reset action, the event correlation operation is identified by the operation description string <string> and the rule offset <offset>. Default value for <string> is %s. For additional information, see EVENT CORRELATION OPERATIONS section. assign %<var> [<string>] Assign string <string> to the action list variable %<var>. Default value for <string> is %s. assignsq %<var> [<string>] Similar to the assign action, except that <string> is quoted with single quotes before assigning it to %<var>. If <string> contains single quotes, they are masked with backslashes (e.g., if the match variable $1 holds the value abc'123'xyz, the action assignsq %myvar $1 assigns the value 'abc'\''123'\''xyz' to the action list variable %myvar). This action is useful for disabling shell interpretation for the values of action list variables that appear in command lines executed by SEC. Default value for <string> is %s. free %<var> Unset the action list variable %<var>. eval %<var> <code> The parameter <code> is a Perl miniprogram that is compiled and executed by calling the Perl eval() function in the Perl list context. If the miniprogram returns a single value, it is assigned to the action list variable %<var>. 
If the miniprogram returns several values s1,...,sn, they are joined into a multi-line string "s1<NEWLINE>...<NEWLINE>sn", and this string is assigned to %<var>. If no value is returned, %<var> is set to Perl undefined value. If eval() fails, the value of %<var> does not change. Since most Perl programs contain semicolons which are also employed by SEC as action separators, it is recommended to enclose the <code> parameter in parentheses, in order to mask the semicolons in <code>. For additional information, see PERL INTEGRATION section. For example, consider the following action list definition: action=assign %div Division error; eval %div ( $1 / $2 ) The assign action sets the %div action list variable to the string "Division error", while the eval action substitutes the values of $1 and $2 match variables into the string "$1 / $2". Resulting string is treated as Perl code which is first compiled and then executed. For instance, if the values of $1 and $2 are 12 and 4, respectively, the following Perl code is compiled: 12 / 4. Since executing this code yields 3, the eval action assigns this value to the %div action list variable. Also, if $2 has no value or its value is 0, resulting code leads to compilation or execution error, and %div retains its previous value "Division error". call %<var> %<ref> [<paramlist>] Call the precompiled Perl function referenced by the action list variable %<ref>, and assign the result to the action list variable %<var>. The %<ref> parameter must be a code reference that has been previously created with the eval action. The <paramlist> parameter (if given) is a string which specifies parameters for the function. The parameters must be separated by whitespace in the <paramlist> string. If the function returns a single value, it is assigned to %<var>. If the function returns several values s1,...,sn, they are joined into a multi-line string "s1<NEWLINE>...<NEWLINE>sn", and this string is assigned to %<var>. If no value is returned, %<var> is set to Perl undefined value. If the function encounters a fatal runtime error or %<ref> is not a code reference, the value of %<var> does not change. For additional information, see PERL INTEGRATION section. For example, consider the following action list definition: action=eval %func ( sub { return $_[0] + $_[1] } ); \ call %sum %func $1 $2 Since the Perl code provided to eval action is a definition of an anonymous function, its compilation yields a code reference which gets assigned to the %func action list variable (the function returns the sum of its two input parameters). The call action will invoke previously compiled function, using the values of $1 and $2 match variables as function parameters, and assigning function return value to the %sum action list variable. Therefore, if the values of $1 and $2 are 2 and 3, respectively, %sum is set to 5. lcall %<var> [<paramlist>] -> <code> lcall %<var> [<paramlist>] :> <code> Call the precompiled Perl function <code> and assign the result to the action list variable %<var>. The <code> parameter must be a valid Perl anonymous function definition that is compiled at SEC startup with the Perl eval() function, and eval() must return a code reference. The <paramlist> parameter (if given) is a string which specifies parameters for the function. The parameters must be separated by whitespace in the <paramlist> string. If <paramlist> is followed by -> operator, parameters are passed to function as Perl scalar values. 
If <paramlist> is followed by :> operator, it is assumed that each parameter is a name of an entry in the pattern match cache. If an entry with the given name does not exist, Perl undefined value is passed to the function. If an entry with the given name exists, a reference to the entry is passed to the function. Internally, each pattern match cache entry is implemented as a Perl hash which contains all match variables for the given entry. If the function returns a single value, it is assigned to %<var>. If the function returns several values s1,...,sn, they are joined into a multi-line string "s1<NEWLINE>...<NEWLINE>sn", and this string is assigned to %<var>. If no value is returned, %<var> is set to Perl undefined value. If the function encounters a fatal runtime error, the value of %<var> does not change. Since most Perl functions contain semicolons which are also employed by SEC as action separators, it is recommended to enclose the <code> parameter in parentheses, in order to mask the semicolons in <code>. For additional information, see PERL INTEGRATION section. For example, consider the following action list definition: action=lcall %len $1 -> ( sub { return length($_[0]) } ) The above lcall action will take the value of the $1 match variable and find its length in characters, assigning the length to the %len action list variable. Note that the function for finding the length is compiled when SEC loads its configuration, and all invocations of lcall will execute already compiled code. As another example, consider the following action list definition: action=lcall %o SSH :> ( sub { $_[0]->{"failure"} = 1 } ) The above lcall action will assign 1 to the $+{failure} match variable that has been cached under the SSH entry in the pattern match cache (variable will be created if it did not exist previously). rewrite <lnum> [<string>] Replace last <lnum> lines in the input buffer with string <string>. If the --nojointbuf option was specified and the action is triggered by a matching event, the action modifies the buffer which holds this event. If the --nojointbuf option was specified and the action is triggered by the system clock (e.g., the action is executed from the Calendar rule), the action modifies the buffer which holds the last already processed event. With the --jointbuf option, the content of the joint input buffer is rewritten. The <lnum> parameter must evaluate to an unsigned integer at runtime. If <lnum> evaluates to 0, <lnum> is reset to the number of lines in <string>. If the value of <lnum> is greater than the buffer size N, <lnum> is reset to N. If <string> contains less than <lnum> lines, <string> will be padded with leading empty lines. If <string> contains more than <lnum> lines, only leading <lnum> lines from <string> are written into the buffer. Default value for <string> is %s. For additional information, see INPUT PROCESSING AND TIMING section. addinput <filename> [<offset> [<name>] ] File <filename> is added to the list of input files and opened, so that processing starts from file offset <offset>. The <offset> parameter must evaluate to unsigned integer or - (EOF) at runtime. If <offset> is not specified, it defaults to - (i.e., processing starts from the end of file). If opening the file fails (e.g., the file does not exist), it will stay in the list of input files (e.g., with the --reopen-timeout command line option, SEC will attempt to reopen the file). 
The <name> parameter defines the internal context which should be used for <filename> if the --intcontexts command line option is given (if <name> is omitted but --intcontexts command line option is present, default internal context will be used). See INTERNAL EVENTS AND CONTEXTS section for more information. For example, consider the following action list definition: action=addinput /var/log/test-%{.year}%{.mon}%{.mday} 0 TESTFILE The above addinput action adds the file /var/log/test-YYYYMMDD to the list of input files, where YYYYMMDD reflects the current date. The addinput action will also attempt to open the file, and if open succeeds, file will be processed from the beginning. Also, the internal context TESTFILE will be used for all events read from the file. dropinput <filename> File <filename> is dropped from the list of input files and closed (if currently open). Note that dropinput action can only be used for input files which have been previously set up with addinput action. sigemul <signal> Emulates the arrival of signal <signal> and triggers its handler. The <signal> parameter must evaluate to one of the following strings: HUP, ABRT, USR1, USR2, INT, or TERM. For example, the action sigemul USR1 triggers the generation of SEC dump file. See the SIGNALS section for detailed information on signals that are handled by SEC. varset %<var> <entry> If the pattern match cache entry <entry> exists, set the action list variable %<var> to 1, otherwise set %<var> to 0. For example, if pattern match cache contains the entry with the name SSH but not the entry with the name NTP, the action varset %ssh SSH will set the %ssh action list variable to 1, while the action varset %ntp NTP will set the %ntp action list variable to 0. if %<var> ( <action list> ) [ else ( <action list2> ) ] If the action list variable %<var> evaluates true in the Perl boolean context (i.e., it holds a defined value which is neither 0 nor empty string), execute the action list <action list>. If the second action list <action list2> is given with the optional else-statement, it is executed if %<var> either does not exist or evaluates false (i.e., %<var> holds 0, empty string or Perl undefined value). For example, consider the following action list definition: action=exists %present REPORT; if %present \ ( report REPORT /bin/mail root@localhost; delete REPORT ) \ else ( logonly Nothing to report ) If the REPORT context exists, its event store is mailed to root@localhost and the context is deleted, otherwise the message "Nothing to report" is logged. while %<var> ( <action list> ) Execute the action list <action list> repeatedly as long as the action list variable %<var> evaluates true in the Perl boolean context (i.e., it holds a defined value which is neither 0 nor empty string). For example, consider the following action list definition: action=create REVERSE; getsize %n TEST; \ while %n ( pop TEST %e; add REVERSE %e; getsize %n TEST ); \ copy REVERSE %events; fill TEST %events This action list reverses the order of strings in the event store of the context TEST, using the context REVERSE as a temporary storage. During each iteration of the while-loop, the last string in the event store of TEST is removed with the pop action and appended to the event store of REVERSE with the add action. The loop terminates when all strings have been removed from the event store of TEST (i.e., the getsize action reports 0 for event store size). 
Finally, the event store of REVERSE is assigned to the %events action list variable with the copy action, and the fill action is used for overwriting the event store of TEST with the value of %events. break If used inside a while-loop, terminates its execution; otherwise terminates the execution of the entire action list. continue If used inside a while-loop, starts the next iteration of the loop; otherwise terminates the execution of the entire action list. Examples: Follow the /var/log/trapd.log file and feed to SEC input all lines that are appended to the file: action=spawn /bin/tail -f /var/log/trapd.log Mail the timestamp and the value of the $0 variable to the local root: action=pipe '%t: $0' /bin/mail -s "alert message" root@localhost Add the value of the $0 variable to the event store of the context ftp_<the value of $1>, and set the context to expire after 30 minutes. When the context expires, its event store will be mailed to the local root: action=add ftp_$1 $0; \ set ftp_$1 1800 (report ftp_$1 /bin/mail root@localhost) Create a subroutine for weeding out comment lines from the input list, and use this subroutine for removing comment lines from the event store of the context C1: action=eval %funcptr ( sub { my(@buf) = split(/\n/, $_[0]); \ my(@ret) = grep(!/^#/, @buf); return @ret; } ); \ copy C1 %in; call %out %funcptr %in; fill C1 %out The following action list achieves the same goal as the previous action list with while and if actions: action=getsize %size C1; while %size ( shift C1 %event; \ lcall %nocomment %event -> ( sub { $_[0] !~ /^#/ } ); \ if %nocomment ( add C1 %event ); \ lcall %size %size -> ( sub { $_[0]-1; } ) )
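As a further hypothetical sketch (the context name C1, the search pattern ERROR and the variable names are arbitrary placeholders), the following action list scans the event store of the context C1 with the while action and uses the break action to stop at the first string matching ERROR, leaving that string in the %found variable. Note that the scanned strings are consumed from the event store by the shift action:

action=free %found; getsize %n C1; \
       while %n ( shift C1 %e; \
       lcall %match %e -> ( sub { $_[0] =~ /ERROR/ } ); \
       if %match ( assign %found %e; break ); \
       lcall %n %n -> ( sub { $_[0]-1; } ) )

After the loop has finished, %found either holds the first matching string or remains unset, which can be checked with the if action.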
PARSING ISSUES
As already noted, SEC context expressions and action lists may contain parentheses which are used for grouping and masking purposes. When SEC parses its configuration, it checks whether parentheses in context expressions and action lists are balanced (i.e., whether each parenthesis has a counterpart), since unbalanced parentheses introduce ambiguity. This can cause SEC to reject some legitimate constructs, e.g., action=eval %o (print ")";) is considered an invalid action list (however, note that action=eval %o (print "()";) would be accepted by SEC, since its parentheses are balanced). In order to avoid such parsing errors, each parenthesis without a counterpart must be masked with a backslash (the backslash will be removed by SEC during configuration file parsing). For example, the above action could be written as action=eval %o (print "\)";)
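As a further hypothetical illustration (the alert text and mail command line are arbitrary), the same masking is needed when an unmatched closing parenthesis appears inside the string parameter of the pipe action, since the balance check is applied to the action list text as a whole:

action=pipe 'Service $1 is down, run step 2\) from the recovery checklist' \
       /bin/mail -s 'Service alert' root@localhost

After the backslash has been removed during configuration file parsing, the mail body reads "Service <service> is down, run step 2) from the recovery checklist", where <service> is the value of the $1 match variable.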
RULE TYPES
This section provides a detailed discussion of SEC rule types. SINGLE RULE The Single rule supports the following fields. Note that match variables may be used in context, desc, and action fields. type fixed to Single (value is case insensitive, so single or sIngLe can be used instead). continue (optional) TakeNext, DontCont, EndMatch or GoTo <label> (apart from <label>, values are case insensitive). ptype pattern type (value is case insensitive). pattern pattern. varmap (optional) variable map. context (optional) context expression. desc operation description string. action action list. rem (optional, may appear more than once) remarks and comments. The Single rule immediately executes an action list when an event has matched the rule. An event matches the rule if the pattern matches the event and the context expression (if given) evaluates TRUE. Note that the Single rule does not start event correlation operations, and the desc field is merely used for setting the %s action list variable. Examples: type=single continue=takenext ptype=regexp pattern=ftpd\[(\d+)\]: \S+ \(ristov2.*FTP session opened desc=ftp session opened for ristov2 pid $1 action=create ftp_$1 type=single continue=takenext ptype=regexp pattern=ftpd\[(\d+)\]: context=ftp_$1 desc=ftp session event for ristov2 pid $1 action=add ftp_$1 $0; set ftp_$1 1800 \ (report ftp_$1 /bin/mail root@localhost) type=single ptype=regexp pattern=ftpd\[(\d+)\]: \S+ \(ristov2.*FTP session closed desc=ftp session closed for ristov2 pid $1 action=report ftp_$1 /bin/mail root@localhost; \ delete ftp_$1 This ruleset is created for monitoring the ftpd log file. The first rule creates the context ftp_<pid> when someone connects from host ristov2 over FTP and establishes a new ftp session (the session is identified by the PID of the process which has been created for handling this session). The second rule adds all further log file lines for the session <pid> to the event store of the context ftp_<pid> (before adding a line, the rule checks if the context exists). After adding a line, the rule extends context's lifetime for 30 minutes and sets the action list that will be executed when the context expires. The third rule mails collected log file lines to root@localhost when the session <pid> is closed. Collected lines will also be mailed when the session <pid> has been inactive for 30 minutes (no log file lines observed for that session). Note that the log file line that has matched the first rule is also matched against the second rule (since the first rule has the continue field set to TakeNext). Since the second rule always matches this line, it will become the first line in the event store of ftp_<pid>. The second rule has also its continue field set to TakeNext, since otherwise no log file lines would reach the third rule. SINGLEWITHSCRIPT RULE The SingleWithScript rule supports the following fields. Note that match variables may be used in context, script, desc, action, and action2 fields. type fixed to SingleWithScript (value is case insensitive). continue (optional) TakeNext, DontCont, EndMatch or GoTo <label> (apart from <label>, values are case insensitive). ptype pattern type (value is case insensitive). pattern pattern. varmap (optional) variable map. context (optional) context expression. script command line of external program. shell (optional) Yes or No (values are case insensitive, default is Yes). desc operation description string. action action list. action2 (optional) action list. 
rem (optional, may appear more than once) remarks and comments. The SingleWithScript rule forks a process for executing an external program when an event has matched the rule. The command line of the external program is defined by the script field. If the shell field is set to Yes (this is the default), the command line of the external program will be parsed by shell if the command line contains shell metacharacters. If the shell field is set to No, command line is not parsed by shell, but split into arguments by using whitespace as a separator, and passed to execvp(3) for execution. Note that splitting into arguments is done when command line is loaded from the configuration file and parsed, not at runtime (e.g., if command line is /usr/local/bin/mytool $1 $2, the values of $1 and $2 variables are regarded as single arguments even if the values contain whitespace). The names of all currently existing contexts are written to the standard input of the program. After the program has been forked, the rule matching continues immediately, and the program status will be checked periodically until the program exits. If the program returns 0 exit status, the action list defined by the action field is executed; otherwise the action list defined by the action2 field is executed (if given). Note that the SingleWithScript rule does not start event correlation operations, and the desc field is merely used for setting the %s action list variable. Examples: type=SingleWithScript ptype=RegExp pattern=interface ([\d.]+) down script=/bin/ping -c 3 -q $1 desc=Check if $1 responds to ping action=logonly Interface $1 reported down, but is pingable action2=pipe '%t: Interface $1 is down' /bin/mail root@localhost When "interface <ipaddress> down" line appears in input, the rule checks if <ipaddress> responds to ping. If <ipaddress> is pingable, the message "Interface <ipaddress> reported down, but is pingable" is logged; otherwise an e-mail warning containing a human-readable timestamp is sent to root@localhost. SINGLEWITHSUPPRESS RULE The SingleWithSuppress rule supports the following fields. Note that match variables may be used in context, desc, and action fields. type fixed to SingleWithSuppress (value is case insensitive). continue (optional) TakeNext, DontCont, EndMatch or GoTo <label> (apart from <label>, values are case insensitive). ptype pattern type (value is case insensitive). pattern pattern. varmap (optional) variable map. context (optional) context expression. desc operation description string. action action list. window event correlation window size (value is an integer constant). rem (optional, may appear more than once) remarks and comments. The SingleWithSuppress rule runs event correlation operations for filtering repeated instances of the same event during T seconds. The value of T is defined by the window field. When an event has matched the rule, SEC evaluates the operation description string given with the desc field. If the operation for the given string and rule does not exist, SEC will create it with the lifetime of T seconds, and the operation immediately executes an action list. If the operation exists, it consumes the matching event without any action. 
Examples: type=SingleWithSuppress ptype=RegExp pattern=(\S+): [fF]ile system full desc=File system $1 full action=pipe '%t: %s' /bin/mail root@localhost window=900 This rule runs event correlation operations for processing "file system full" syslog messages, e.g., Dec 16 14:26:09 test ufs: [ID 845546 kern.notice] NOTICE: alloc: /var: file system full When the first message for a file system is observed, an operation is created which sends an e-mail warning about this file system to root@localhost. The operation will then run for 900 seconds and silently consume further messages for the *same* file system. However, if a message for a different file system is observed, another operation will be started which sends a warning to root@localhost again (since the desc field contains the $1 match variable which evaluates to the file system name). PAIR RULE The Pair rule supports the following fields. Note that match variables may be used in context, desc, action, pattern2, context2, desc2, and action2 fields. type fixed to Pair (value is case insensitive). continue (optional) TakeNext, DontCont, EndMatch or GoTo <label> (apart from <label>, values are case insensitive). Specifies the point-of-continue after a match by pattern and context. ptype pattern type for pattern (value is case insensitive). pattern pattern. varmap (optional) variable map for pattern. context (optional) context expression, evaluated together with pattern. desc operation description string. action action list. continue2 (optional) TakeNext, DontCont, EndMatch or GoTo <label> (apart from <label>, values are case insensitive). Specifies the point-of-continue after a match by pattern2 and context2. ptype2 pattern type for pattern2 (value is case insensitive). pattern2 pattern. varmap2 (optional) variable map for pattern2. context2 (optional) context expression, evaluated together with pattern2. desc2 format string that sets the %s variable for action2. action2 action list. window (optional) event correlation window size (value is an integer constant). rem (optional, may appear more than once) remarks and comments. The Pair rule runs event correlation operations for processing event pairs during T seconds. The value of T is defined by the window field. Default value is 0 which means infinity. When an event has matched the conditions defined by the pattern and context field, SEC evaluates the operation description string given with the desc field. If the operation for the given string and rule exists, it consumes the matching event without any action. If the operation does not exist, SEC will create it with the lifetime of T seconds, and the operation immediately executes an action list defined by the action field. SEC will also copy the match conditions given with the pattern2 and context2 field into the operation, and substitute match variables with their values in copied conditions. If the event does not match conditions defined by the pattern and context field, SEC will check the match conditions of all operations started by the given rule. Each matching operation executes the action list given with the action2 field and finishes. If match variables are set when the operation matches an event, they are made available as $-prefixed match variables in context2, desc2, and action2 fields of the rule definition. For example, if pattern2 field is a regular expression, then $1 in the desc2 field is set by pattern2. 
In order to access match variables set by pattern, %-prefixed match variables have to be used in context2, desc2, and action2 fields. For example, if pattern and pattern2 are regular expressions, then %1 in the desc2 field refers to the value set by the first capture group in pattern (i.e., it has the same value as $1 in the desc field). Examples: type=Pair ptype=RegExp pattern=kernel: nfs: server (\S+) not responding, still trying desc=Server $1 is not responding action=pipe '%t: %s' /bin/mail root@localhost ptype2=SubStr pattern2=kernel: nfs: server $1 OK desc2=Server $1 is responding again action2=logonly window=3600 This rule runs event correlation operations for processing NFS "server not responding" and "server OK" syslog messages, e.g., Dec 18 22:39:48 test kernel: nfs: server box1 not responding, still trying Dec 18 22:42:27 test kernel: nfs: server box1 OK When the "server not responding" message for an NFS server is observed, an operation is created for this server which sends an e-mail warning about the server to root@localhost. The operation will then run for 3600 seconds and silently consume further "server not responding" messages for the same server. If this operation observes "server OK" message for the *same* server, it will log the message "Server <servername> is responding again" and finish. For example, if SEC observes the following event at 22:39:48 Dec 18 22:39:48 test kernel: nfs: server box1 not responding, still trying an event correlation operation is created for server box1 which issues an e-mail warning about this server immediately. After that, the operation will run for 3600 seconds (until 23:39:48), waiting for an event which would contain the substring "kernel: nfs: server box1 OK" (because the pattern2 field contains the $1 match variable which evaluates to the server name). If any further error messages appear for server box1 during the 3600 second lifetime of the operation, e.g., Dec 18 22:40:28 test kernel: nfs: server box1 not responding, still trying Dec 18 22:41:09 test kernel: nfs: server box1 not responding, still trying these messages will be silently consumed by the operation. If before its expiration the operation observes an event which contains the substring "kernel: nfs: server box1 OK", e.g., Dec 18 22:42:27 test kernel: nfs: server box1 OK the operation will log the message "Server box1 is responding again" and terminate immediately. If no such message appears during the 3600 second lifetime of the operation, the operation will expire without taking any action. Please note that if the window field would be either removed from the rule definition or set to 0, the operation would never silently expire, but would terminate only after observing an event which contains the substring "kernel: nfs: server box1 OK". If the above rule is modified in the following way type=Pair ptype=RegExp pattern=^([[:alnum:]: ]+) \S+ kernel: nfs: server (\S+) not responding, still trying desc=Server $2 is not responding action=logonly ptype2=RegExp pattern2=^([[:alnum:]: ]+) \S+ kernel: nfs: server $2 OK desc2=Server %2 was not accessible from %1 to $1 action2=pipe '%s' /bin/mail root@localhost window=86400 this rule will run event correlation operations which report NFS server downtime to root@localhost via e-mail, provided that downtime does not exceed 24 hours (86400 seconds). 
For example, if SEC observes the following event Dec 18 23:01:17 test kernel: nfs: server box.test not responding, still trying then the rule matches this event, sets $1 match variable to "Dec 18 23:01:17" and $2 to "box.test", and creates an event correlation operation for server box.test. This operation will start its work by logging the message "Server box.test is not responding", and will then run for 86400 seconds, waiting for an event which would match the regular expression ^([[:alnum:]: ]+) \S+ kernel: nfs: server box\.test OK Note that this expression was created from the regular expression template in the pattern2 field by substituting the match variable $2 with its value. However, since the string "box.test" contains the dot (.) character which is a regular expression metacharacter, the dot is masked with the backslash in the regular expression. Suppose SEC will then observe the event Dec 18 23:09:54 test kernel: nfs: server box.test OK This event matches the above regular expression which is used by the operation running for server box.test. Also, the regular expression match sets the $1 variable to "Dec 18 23:09:54" and unsets the $2 variable. In order to refer to their original values when the operation was created, %1 and %2 match variables have to be used in the desc2 field (%1 equals to "Dec 18 23:01:17" and %2 equals to "box.test"). Therefore, the operation will send the e-mail message "Server box.test was not accessible from Dec 18 23:01:17 to Dec 18 23:09:54" to root@localhost, and will terminate immediately. PAIRWITHWINDOW RULE The PairWithWindow rule supports the following fields. Note that match variables may be used in context, desc, action, pattern2, context2, desc2, and action2 fields. type fixed to PairWithWindow (value is case insensitive). continue (optional) TakeNext, DontCont, EndMatch or GoTo <label> (apart from <label>, values are case insensitive). Specifies the point-of-continue after a match by pattern and context. ptype pattern type for pattern (value is case insensitive). pattern pattern. varmap (optional) variable map for pattern. context (optional) context expression, evaluated together with pattern. desc operation description string. action action list. continue2 (optional) TakeNext, DontCont, EndMatch or GoTo <label> (apart from <label>, values are case insensitive). Specifies the point-of-continue after a match by pattern2 and context2. ptype2 pattern type for pattern2 (value is case insensitive). pattern2 pattern. varmap2 (optional) variable map for pattern2. context2 (optional) context expression, evaluated together with pattern2. desc2 format string that sets the %s variable for action2. action2 action list. window event correlation window size (value is an integer constant). rem (optional, may appear more than once) remarks and comments. The PairWithWindow rule runs event correlation operations for processing event pairs during T seconds. The value of T is defined by the window field. When an event has matched the conditions defined by the pattern and context field, SEC evaluates the operation description string given with the desc field. If the operation for the given string and rule exists, it consumes the matching event without any action. If the operation does not exist, SEC will create it with the lifetime of T seconds. SEC will also copy the match conditions given with the pattern2 and context2 field into the operation, and substitute match variables with their values in copied conditions. 
If the event does not match conditions defined by the pattern and context field, SEC will check the match conditions of all operations started by the given rule. Each matching operation executes the action list given with the action2 field and finishes. If the operation has not observed a matching event by the end of its lifetime, it executes the action list given with the action field before finishing. If match variables are set when the operation matches an event, they are made available as $-prefixed match variables in context2, desc2, and action2 fields of the rule definition. For example, if pattern2 field is a regular expression, then $1 in the desc2 field is set by pattern2. In order to access match variables set by pattern, %-prefixed match variables have to be used in context2, desc2, and action2 fields. For example, if pattern and pattern2 are regular expressions, then %1 in the desc2 field refers to the value set by the first capture group in pattern (i.e., it has the same value as $1 in the desc field). Examples: type=PairWithWindow ptype=RegExp pattern=sshd\[\d+\]: Failed .+ for (\S+) from ([\d.]+) port \d+ ssh2 desc=User $1 has been unable to log in from $2 over SSH during 1 minute action=pipe '%t: %s' /bin/mail root@localhost ptype2=RegExp pattern2=sshd\[\d+\]: Accepted .+ for $1 from $2 port \d+ ssh2 desc2=SSH login successful for %1 from %2 after initial failure action2=logonly window=60 This rule runs event correlation operations for processing SSH login events, e.g., Dec 27 19:00:24 test sshd[10526]: Failed password for risto from 10.1.2.7 port 52622 ssh2 Dec 27 19:00:27 test sshd[10526]: Accepted password for risto from 10.1.2.7 port 52622 ssh2 When an SSH login failure is observed for a user name and a source IP address, an operation is created for this user name and IP address combination which will expect a successful login for the *same* user name and *same* IP address during 60 seconds. If the user will not log in from the same IP address during 60 seconds, the operation will send an e-mail warning to root@localhost before finishing, otherwise it will log the message "SSH login successful for <username> from <ipaddress> after initial failure" and finish. Suppose the following events are generated by an SSH daemon, and each event timestamp reflects the time SEC observes the event: Dec 30 13:02:01 test sshd[30517]: Failed password for risto from 10.1.2.7 port 42172 ssh2 Dec 30 13:02:30 test sshd[30810]: Failed password for root from 192.168.1.104 port 46125 ssh2 Dec 30 13:02:37 test sshd[30517]: Failed password for risto from 10.1.2.7 port 42172 ssh2 Dec 30 13:02:59 test sshd[30810]: Failed password for root from 192.168.1.104 port 46125 ssh2 Dec 30 13:03:04 test sshd[30810]: Accepted password for root from 192.168.1.104 port 46125 ssh2 When the first event is observed at 13:02:01, an operation is started for user risto and IP address 10.1.2.7 which will expect a successful login for risto from 10.1.2.7. The operation will run for 60 seconds, waiting for an event which would match the regular expression sshd\[\d+\]: Accepted .+ for risto from 10\.1\.2\.7 port \d+ ssh2 Note that this expression was created from the regular expression template in the pattern2 field by substituting match variables $1 and $2 with their values. However, since the value of $2 contains the dot (.) characters which are regular expression metacharacters, each dot is masked with the backslash in the regular expression. 
When the second event is observed at 13:02:30, another operation is started for user root and IP address 192.168.1.104 which will expect root to log in successfully from 192.168.1.104. This operation will run for 60 seconds, waiting for an event matching the regular expression sshd\[\d+\]: Accepted .+ for root from 192\.168\.1\.104 port \d+ ssh2 The third event at 13:02:37 represents a second login failure for user risto and IP address 10.1.2.7, and is silently consumed by the first operation. Likewise, the fourth event at 13:02:59 is silently consumed by the second operation. The first operation will run until 13:03:01 and then expire without seeing a successful login for risto from 10.1.2.7. Before terminating, the operation will send an e-mail warning to root@localhost that user risto has not managed to log in from 10.1.2.7 during one minute. At 13:03:04, the second operation will observe an event which matches its regular expression sshd\[\d+\]: Accepted .+ for root from 192\.168\.1\.104 port \d+ ssh2 After seeing this event, the operation will log the message "SSH login successful for root from 192.168.1.104 after initial failure" and terminate immediately. Please note that the match by the regular expression sshd\[\d+\]: Accepted .+ for root from 192\.168\.1\.104 port \d+ ssh2 does not set the $1 and $2 match variables, since the substituted expression contains no capture groups. Therefore, the %1 and %2 match variables have to be used in the desc2 field, in order to refer to the original values of $1 (root) and $2 (192.168.1.104) when the operation was created. SINGLEWITHTHRESHOLD RULE The SingleWithThreshold rule supports the following fields. Note that match variables may be used in context, desc, action, and action2 fields. type fixed to SingleWithThreshold (value is case insensitive). continue (optional) TakeNext, DontCont, EndMatch or GoTo <label> (apart from <label>, values are case insensitive). ptype pattern type (value is case insensitive). pattern pattern. varmap (optional) variable map. context (optional) context expression. desc operation description string. action action list. action2 (optional) action list. window event correlation window size (value is an integer constant). thresh counting threshold (value is an integer constant). rem (optional, may appear more than once) remarks and comments. The SingleWithThreshold rule runs event correlation operations for counting repeated instances of the same event during T seconds, and taking an action if N events are observed. The values of T and N are defined by the window and thresh field, respectively. When an event has matched the rule, SEC evaluates the operation description string given with the desc field. If the operation for the given string and rule does not exist, SEC will create it with the lifetime of T seconds. The operation will memorize the occurrence time of the event (current time as returned by the time(2) system call), and compare the number of memorized occurrence times with the threshold N. If the operation has observed N events, it executes the action list defined by the action field, and consumes all further matching events without any action. If the rule has an optional action list defined with the action2 field, the operation will execute it before finishing, provided that the action list given with action has been previously executed by the operation. 
Note that a sliding window is employed for event counting -- if the operation has observed less than N events by the end of its lifetime, it drops occurrence times which are older than T seconds, and extends its lifetime for T seconds from the earliest remaining occurrence time. If there are no remaining occurrence times, the operation finishes without executing an action list. Examples: type=SingleWithThreshold ptype=RegExp pattern=sshd\[\d+\]: Failed .+ for (\S+) from [\d.]+ port \d+ ssh2 desc=Three SSH login failures within 1m for user $1 action=pipe '%t: %s' /bin/mail root@localhost window=60 thresh=3 This rule runs event correlation operations for counting the number of SSH login failure events. Each operation counts events for one user name, and if the operation has observed three login failures within 60 seconds, it sends an e-mail warning to root@localhost. Suppose the following events are generated by an SSH daemon, and each event timestamp reflects the time SEC observes the event: Dec 28 01:42:21 test sshd[28132]: Failed password for risto from 10.1.2.7 port 42172 ssh2 Dec 28 01:43:10 test sshd[28132]: Failed password for risto from 10.1.2.7 port 42172 ssh2 Dec 28 01:43:29 test sshd[28132]: Failed password for risto from 10.1.2.7 port 42172 ssh2 Dec 28 01:44:00 test sshd[28149]: Failed password for risto2 from 10.1.2.7 port 42176 ssh2 Dec 28 01:44:03 test sshd[28211]: Failed password for risto from 10.1.2.7 port 42192 ssh2 Dec 28 01:44:07 test sshd[28211]: Failed password for risto from 10.1.2.7 port 42192 ssh2 When the first event is observed at 01:42:21, a counting operation is started for user risto, with its event correlation window ending at 01:43:21. Since by 01:43:21 two SSH login failures for user risto have occurred, the threshold condition remains unsatisfied for the operation. Therefore, the beginning of its event correlation window will be moved to 01:43:10 (the occurrence time of the second event), leaving the first event outside the window. At 01:44:00, another counting operation is started for user risto2. The threshold condition for the first operation will become satisfied at 01:44:03 (since the operation has seen three login failure events for user risto within 60 seconds), and thus an e-mail warning will be issued. Finally, the event occurring at 01:44:07 will be consumed silently by the first operation (the operation will run until 01:44:10). Since there will be no further login failure events for user risto2, the second operation will exist until 01:45:00 without taking any action. SINGLEWITH2THRESHOLDS RULE The SingleWith2Thresholds rule supports the following fields. Note that match variables may be used in context, desc, action, desc2, and action2 fields. type fixed to SingleWith2Thresholds (value is case insensitive). continue (optional) TakeNext, DontCont, EndMatch or GoTo <label> (apart from <label>, values are case insensitive). ptype pattern type (value is case insensitive). pattern pattern. varmap (optional) variable map. context (optional) context expression. desc operation description string. action action list. window event correlation window size (value is an integer constant). thresh counting threshold. desc2 format string that sets the %s variable for action2. action2 action list. window2 event correlation window size (value is an integer constant). thresh2 counting threshold. rem (optional, may appear more than once) remarks and comments. 
The SingleWith2Thresholds rule runs event correlation operations which execute one action list if N1 events have been observed within a window of T1 seconds, and another action list if at most N2 events are then observed within a following window of T2 seconds. The values of T1, N1, T2, and N2 are defined by the window, thresh, window2, and thresh2 field, respectively. When an event has matched the rule, SEC evaluates the operation description string given with the desc field. If the operation for the given string and rule does not exist, SEC will create it with the lifetime of T1 seconds. The operation will memorize the occurrence time of the event (current time as returned by the time(2) system call), and compare the number of memorized occurrence times with the threshold N1. If the operation has observed N1 events, it executes the action list defined by the action field, and starts another counting round for T2 seconds. If no more than N2 events have been observed by the end of the window, the operation executes the action list defined by the action2 field and finishes. Note that both windows are sliding -- the first window slides like the window of the SingleWithThreshold operation, while the beginning of the second window is moved to the second earliest memorized event occurrence time when the threshold N2 is violated. Examples: type=SingleWith2Thresholds ptype=RegExp pattern=(\S+): %SYS-3-CPUHOG desc=Router $1 CPU overload action=pipe '%t: %s' /bin/mail root@localhost window=300 thresh=2 desc2=Router $1 CPU load has been normal for 1h action2=logonly window2=3600 thresh2=0 When a SYS-3-CPUHOG syslog message is received from a router, the rule starts a counting operation for this router which sends an e-mail warning to root@localhost if another such message is received from the same router within 300 seconds. After sending the warning, the operation will continue to run until no SYS-3-CPUHOG syslog messages have been received from the router for 3600 seconds. When this condition becomes satisfied, the operation will log the message "Router <routername> CPU load has been normal for 1h" and finish. Suppose the following events are generated by a router, and each event timestamp reflects the time SEC observes the event: Dec 30 12:23:25 router1.mydomain Router1: %SYS-3-CPUHOG: cpu is hogged Dec 30 12:25:38 router1.mydomain Router1: %SYS-3-CPUHOG: cpu is hogged Dec 30 12:28:53 router1.mydomain Router1: %SYS-3-CPUHOG: cpu is hogged When the first event is observed at 12:23:25, a counting operation is started for router Router1. The appearance of the second event at 12:25:38 fulfills the threshold condition given with the thresh and window fields (two events have been observed within 300 seconds). Therefore, the operation will send an e-mail warning about the CPU overload of Router1 to root@localhost. After that, the operation will start another counting round, expecting to see no SYS-3-CPUHOG events (since thresh2=0) for Router1 during the following 3600 seconds (the beginning of the operation's event correlation window will be moved to 12:25:38 for the second counting round). Since the appearance of the third event at 12:28:53 violates the threshold condition given with the thresh2 and window2 fields, the beginning of the event correlation window will be moved to 12:28:53. Since there will be no further SYS-3-CPUHOG messages for Router1, the operation will run until 13:28:53 and then expire, logging the message "Router Router1 CPU load has been normal for 1h" before finishing. 
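The example above uses thresh2=0, which requires complete silence during the second window. To illustrate a nonzero second threshold, consider the following hypothetical variant (the disk error message format in the pattern field is an assumption made up for this sketch): it alerts when ten disk errors have been observed within 600 seconds, and later reports recovery only if at most two further errors are observed during one hour:

type=SingleWith2Thresholds
rem=hypothetical example rule with an assumed disk error message format
ptype=RegExp
pattern=kernel: (\S+): disk I/O error
desc=Disk $1 is generating I/O errors
action=pipe '%t: %s' /bin/mail root@localhost
window=600
thresh=10
desc2=Disk $1 has produced at most two errors during 1h
action2=logonly
window2=3600
thresh2=2

With this variant, up to two residual errors per hour do not prevent the recovery message from being logged, whereas with thresh2=0 every single error would keep moving the second window forward.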
EVENTGROUP RULE The EventGroup rule supports the following fields. Note that match variables may be used in context*, count*, desc, action, init, end, and slide fields. type EventGroup[N] (value is case insensitive, N defaults to 1). continue (optional) TakeNext, DontCont, EndMatch or GoTo <label> (apart from <label>, values are case insensitive). Specifies the point-of-continue after a match by pattern and context. ptype pattern type for pattern (value is case insensitive). pattern pattern. varmap (optional) variable map for pattern. context (optional) context expression, evaluated together with pattern. count (optional) action list for execution after a match by pattern and context. thresh (optional) counting threshold for events matched by pattern and context (value is an integer constant, default is 1). ... continueN (optional) TakeNext, DontCont, EndMatch or GoTo <label> (apart from <label>, values are case insensitive). Specifies the point-of-continue after a match by patternN and contextN. ptypeN pattern type for patternN (value is case insensitive). patternN pattern. varmapN (optional) variable map for patternN. contextN (optional) context expression, evaluated together with patternN. countN (optional) action list for execution after a match by patternN and contextN. threshN (optional) counting threshold for events matched by patternN and contextN (value is an integer constant, default is 1). desc operation description string. action action list. init (optional) action list. end (optional) action list. slide (optional) action list. multact (optional) Yes or No (values are case insensitive, default is No). egptype (optional) SubStr, NSubStr, RegExp, NRegExp, PerlFunc or NPerlFunc (values are case insensitive). Specifies the pattern type for egpattern. egpattern (optional) event group pattern. window event correlation window size (value is an integer constant). rem (optional, may appear more than once) remarks and comments. The EventGroup rule runs event correlation operations for counting repeated instances of N different events e1,...,eN during T seconds, and taking an action if threshold conditions c1,...,cN for *all* events are satisfied (i.e., for each event eK there are at least cK event instances in the window). Note that the event correlation window of the EventGroup operation is sliding like the window of the SingleWithThreshold operation. Event e1 is described with the pattern and context field, event e2 is described with the pattern2 and context2 field, etc. The values for N and T are defined by the type and window field, respectively. The value for c1 is given with the thresh field, the value for c2 is given with the thresh2 field, etc. Values for N and c1,...,cN default to 1. In order to match an event with the rule, pattern and context fields are evaluated first. If they don't match the event, then pattern2 and context2 are evaluated, etc. If all N conditions are tried without a success, the event doesn't match the rule. When an event has matched the rule, SEC evaluates the operation description string given with the desc field. If the operation for the given string and rule does not exist, SEC will create it with the lifetime of T seconds. The operation will memorize the occurrence time of the event (current time as returned by the time(2) system call), and compare the number of memorized occurrence times for each eK with the threshold cK (i.e., the number of observed instances of eK is compared with the threshold cK). 
If all threshold conditions are satisfied, the operation executes the action list defined by the action field, and consumes all further matching events without re-executing the action list if the multact field is set to No (this is the default). However, if multact is set to Yes, the operation will re-evaluate the threshold conditions on every further matching event, re-executing the action list given with the action field if all conditions are satisfied, and sliding the event correlation window forward when the window is about to expire (if no events remain in the window, the operation will finish). For example, consider the following rule: type=EventGroup2 ptype=SubStr pattern=EVENT_A thresh=2 ptype2=SubStr pattern2=EVENT_B thresh2=2 desc=Sequence of two or more As and Bs observed within 60 seconds action=write - %s window=60 Also, suppose the following events occur, and each event timestamp reflects the time SEC observes the event: Mar 10 12:03:01 EVENT_A Mar 10 12:03:04 EVENT_B Mar 10 12:03:10 EVENT_A Mar 10 12:03:11 EVENT_A Mar 10 12:03:27 EVENT_B Mar 10 12:03:46 EVENT_A Mar 10 12:03:59 EVENT_A When these events are observed by the above EventGroup2 rule, the rule starts an event correlation operation at 12:03:01. Note that although the first threshold condition thresh=2 is satisfied when the third event appears at 12:03:10, the second threshold condition thresh2=2 is not met, and therefore the operation will not execute the action list given with the action field. When the fifth event appears at 12:03:27, all threshold conditions are finally satisfied, and the operation will write the string "Sequence of two or more As and Bs observed within 60 seconds" to standard output with the write action. Finally, the events occurring at 12:03:46 and 12:03:59 will be consumed silently by the operation (the operation will run until 12:04:01). If multact=yes statement is added to the above EventGroup2 rule, the operation would execute the write action not only at 12:03:27, but also at 12:03:46 and 12:03:59, since all threshold conditions are still satisfied when the last two events appear (i.e., the last two events are no longer silently consumed). Also, with multact=yes the operation will employ sliding window based event processing even after the write action has been executed at 12:03:27 (therefore, the operation will run until 12:04:59). If the rule definition has an optional event group pattern and its type defined with the egpattern and egptype fields, the event group pattern is used for matching the event group string. The event group string consists of positive integers Xi that are separated by a single space character: "X1 X2 ... XM". M is the number of events a given event correlation operation has observed within its event correlation window. Also, if the i-th event that the event correlation operation has observed is an instance of event eK, then Xi = K. Event group string is built and matched with event group pattern after all threshold conditions (given with thresh* fields) have been found satisfied. In other words, the event group pattern defines an additional condition to numeric threshold conditions. Note that the event group pattern and its type are similar to regular patterns and pattern types that are given with pattern* and ptype* fields, except the event group pattern is not setting any match variables. 
If the egptype field is set to RegExp or NRegExp, the egpattern field defines a regular expression, while in the case of SubStr and NSubStr egpattern provides a string pattern. If the egptype field is set to PerlFunc or NPerlFunc, event group string is the only parameter for the Perl function given with the egpattern field, and the function is called in the Perl scalar context. With egptype=PerlFunc, event group pattern matches if the return value of the function evaluates true in the Perl boolean context, while in the case of false the pattern does not match the event group string. With egptype=NPerlFunc, the pattern matching works in the opposite way. For example, consider the following rule: type=EventGroup2 ptype=SubStr pattern=EVENT_A thresh=2 ptype2=SubStr pattern2=EVENT_B thresh2=2 desc=Sequence of two or more As and Bs with 'A B' at the end action=write - %s egptype=RegExp egpattern=1 2$ window=60 Also, suppose the following events occur, and each event timestamp reflects the time SEC observes the event: Mar 10 12:05:31 EVENT_B Mar 10 12:05:32 EVENT_B Mar 10 12:05:38 EVENT_A Mar 10 12:05:39 EVENT_A Mar 10 12:05:42 EVENT_B When these events are observed by the above EventGroup2 rule, the rule starts an event correlation operation at 12:05:31. When the fourth event appears at 12:05:39, all threshold conditions (thresh=2 and thresh2=2) become satisfied, and therefore the following event group string is built from the first four events: 2 2 1 1 However, since this string does not match the regular expression 1 2$ that has been given with the egpattern field, the operation will not execute the action list given with the action field. When the fifth event appears at 12:05:42, all threshold conditions are again satisfied, and all observed events produce the following event group string: 2 2 1 1 2 Since this event group string matches the regular expression given with the egpattern field, the operation will write the string "Sequence of two or more As and Bs with 'A B' at the end" to standard output with the write action. If the rule definition has an optional action list defined with the countK field for event eK, the operation executes it every time an instance of eK is observed (even if multact is set to No and the operation has already executed the action list given with action). If the action list contains match variables, they are substituted before *each* execution with values from matching the current instance of eK. If the rule definition has an optional action list defined with the init field, the operation executes it immediately after the operation has been created. If the rule definition has an optional action list defined with the end field, the operation executes it immediately before the operation finishes. Note that this action list is *not* executed when the operation is terminated with the reset action. If the rule definition has an optional action list defined with the slide field, the operation executes it immediately after the event correlation window has slidden forward. However, note that moving the window with the setwpos action will *not* trigger the execution. Examples: The following example rule cross-correlates iptables events, Apache web server access log messages with 4xx response codes, and SSH login failure events: type=EventGroup3 ptype=RegExp pattern=sshd\[\d+\]: Failed .+ for (?:invalid user )?\S+ from ([\d.]+) port \d+ ssh2 thresh=2 ptype2=RegExp pattern2=^([\d.]+) \S+ \S+ \[.+?\] ".+? 
HTTP\/[\d.]+" 4\d+ thresh2=3 ptype3=RegExp pattern3=kernel: iptables:.* SRC=([\d.]+) thresh3=5 desc=Repeated probing from host $1 action=pipe '%t: %s' /bin/mail root@localhost window=120 The rule starts an event correlation operation for an IP address if SSH login failure event, iptables event, or Apache 4xx event is observed for that IP address. The operation sends an e-mail warning to root@localhost if within 120 seconds three threshold conditions are satisfied for the IP address it tracks -- (1) at least two SSH login failure events have occurred for this client IP, (2) at least three Apache 4xx events have occurred for this client IP, (3) at least five iptables events have been observed for this source IP. Suppose the following events occur, and each event timestamp reflects the time SEC observes the event: 192.168.1.104 - - [05/Jan/2014:01:11:22 +0200] "GET /test.html HTTP/1.1" 404 286 "-" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:26.0) Gecko/20100101 Firefox/26.0" Jan 5 01:12:52 localhost kernel: iptables: IN=eth0 OUT= MAC=08:00:27:8e:a1:3a:00:1d:e0:7e:89:b1:08:00 SRC=192.168.1.104 DST=192.168.1.107 LEN=60 TOS=0x10 PREC=0x00 TTL=64 ID=48422 DF PROTO=TCP SPT=46351 DPT=21 WINDOW=29200 RES=0x00 SYN URGP=0 Jan 5 01:12:53 localhost kernel: iptables: IN=eth0 OUT= MAC=08:00:27:8e:a1:3a:00:1d:e0:7e:89:b1:08:00 SRC=192.168.1.104 DST=192.168.1.107 LEN=60 TOS=0x10 PREC=0x00 TTL=64 ID=48423 DF PROTO=TCP SPT=46351 DPT=21 WINDOW=29200 RES=0x00 SYN URGP=0 Jan 5 01:13:01 localhost kernel: iptables: IN=eth0 OUT= MAC=08:00:27:8e:a1:3a:00:1d:e0:7e:89:b1:08:00 SRC=192.168.1.104 DST=192.168.1.107 LEN=60 TOS=0x10 PREC=0x00 TTL=64 ID=20048 DF PROTO=TCP SPT=44963 DPT=23 WINDOW=29200 RES=0x00 SYN URGP=0 Jan 5 01:13:02 localhost kernel: iptables: IN=eth0 OUT= MAC=08:00:27:8e:a1:3a:00:1d:e0:7e:89:b1:08:00 SRC=192.168.1.104 DST=192.168.1.107 LEN=60 TOS=0x10 PREC=0x00 TTL=64 ID=20049 DF PROTO=TCP SPT=44963 DPT=23 WINDOW=29200 RES=0x00 SYN URGP=0 Jan 5 01:13:08 localhost kernel: iptables: IN=eth0 OUT= MAC=08:00:27:8e:a1:3a:00:1d:e0:7e:89:b1:08:00 SRC=192.168.1.104 DST=192.168.1.107 LEN=60 TOS=0x10 PREC=0x00 TTL=64 ID=36362 DF PROTO=TCP SPT=56918 DPT=25 WINDOW=29200 RES=0x00 SYN URGP=0 Jan 5 01:13:09 localhost kernel: iptables: IN=eth0 OUT= MAC=08:00:27:8e:a1:3a:00:1d:e0:7e:89:b1:08:00 SRC=192.168.1.104 DST=192.168.1.107 LEN=60 TOS=0x10 PREC=0x00 TTL=64 ID=36363 DF PROTO=TCP SPT=56918 DPT=25 WINDOW=29200 RES=0x00 SYN URGP=0 192.168.1.104 - - [05/Jan/2014:01:13:51 +0200] "GET /test.html HTTP/1.1" 404 286 "-" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:26.0) Gecko/20100101 Firefox/26.0" 192.168.1.104 - - [05/Jan/2014:01:13:54 +0200] "GET /test.html HTTP/1.1" 404 286 "-" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:26.0) Gecko/20100101 Firefox/26.0" 192.168.1.104 - - [05/Jan/2014:01:14:00 +0200] "GET /login.html HTTP/1.1" 404 287 "-" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:26.0) Gecko/20100101 Firefox/26.0" 192.168.1.104 - - [05/Jan/2014:01:14:03 +0200] "GET /login.html HTTP/1.1" 404 287 "-" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:26.0) Gecko/20100101 Firefox/26.0" 192.168.1.104 - - [05/Jan/2014:01:14:03 +0200] "GET /login.html HTTP/1.1" 404 287 "-" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:26.0) Gecko/20100101 Firefox/26.0" Jan 5 01:14:11 localhost sshd[1810]: Failed password for root from 192.168.1.104 port 46125 ssh2 Jan 5 01:14:12 localhost sshd[1810]: Failed password for root from 192.168.1.104 port 46125 ssh2 Jan 5 01:14:18 localhost sshd[1822]: Failed password for root from 192.168.1.104 port 46126 ssh2 Jan 5 
01:14:19 localhost sshd[1822]: Failed password for root from 192.168.1.104 port 46126 ssh2 192.168.1.104 - - [05/Jan/2014:01:14:34 +0200] "GET /test.html HTTP/1.1" 404 286 "-" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:26.0) Gecko/20100101 Firefox/26.0" The Apache 4xx event at 01:11:22 starts an event correlation operation for 192.168.1.104 which has the event correlation window of 120 seconds, thus ending at 01:13:22. Between 01:12:52 and 01:13:09, six iptables events appear for 192.168.1.104, and the appearance of the fifth event at 01:13:08 fulfills the third threshold condition (within 120 seconds, at least five iptables events have been observed). Since by 01:13:22 (the end of the event correlation window) no additional events have occurred, the first and second threshold condition remain unsatisfied. Therefore, the beginning of the event correlation window will be moved to 01:12:52 (the occurrence time of the earliest event which is at most 120 seconds old). As a result, the end of the window will move from 01:13:22 to 01:14:52. The only event which is left outside the window is the Apache 4xx event at 01:11:22, and thus the threshold condition for iptables events remains satisfied. Between 01:13:51 and 01:14:03, five Apache 4xx events occur, and the appearance of the third event at 01:14:00 fulfills the second threshold condition (within 120 seconds, at least three Apache 4xx events have been observed). These events are followed by four SSH login failure events which occur between 01:14:11 and 01:14:19. The appearance of the second event at 01:14:12 fulfills the first threshold condition (within 120 seconds, at least two SSH login failure events have been observed). Since at this particular moment (01:14:12) the other two conditions are also fulfilled, the operation sends an e-mail warning about 192.168.1.104 to root@localhost. After that, the operation silently consumes all further matching events for 192.168.1.104 until 01:14:52, and then terminates. Please note that if the above rule definition would contain multact=yes statement, the operation would continue sending e-mails at each matching event after 01:14:12, provided that all threshold conditions are satisfied. Therefore, the operation would send three additional e-mails at 01:14:18, 01:14:19, and 01:14:34. Also, the operation would not terminate after its window ends at 01:14:52, but would rather slide the window forward and expect new events. At the occurrence of any iptables, SSH login failure or Apache 4xx event for 192.168.1.104, the operation would produce a warning e-mail if all threshold conditions are fulfilled. The following example rule cross-correlates iptables events and SSH login events: type=EventGroup3 ptype=regexp pattern=sshd\[\d+\]: Failed .+ for (\S+) from ([\d.]+) port \d+ ssh2 varmap= user=1; ip=2 count=alias OPER_$+{ip} LOGIN_FAILED_$+{user}_$+{ip} ptype2=regexp pattern2=sshd\[\d+\]: Accepted .+ for (\S+) from ([\d.]+) port \d+ ssh2 varmap2= user=1; ip=2 context2=LOGIN_FAILED_$+{user}_$+{ip} ptype3=regexp pattern3=kernel: iptables:.* SRC=([\d.]+) varmap3= ip=1 desc=Client $+{ip} accessed a firewalled port and had difficulties with logging in action=pipe '%t: %s' /bin/mail root@localhost init=create OPER_$+{ip} slide=delete OPER_$+{ip}; reset 0 end=delete OPER_$+{ip} window=120 The rule starts an event correlation operation for an IP address if SSH login failure or iptables event was observed for that IP address. 
The operation exists for 120 seconds (since when the event correlation window slides forward, the operation terminates itself with the reset action as specified with the slide field). The operation sends an e-mail warning to root@localhost if within 120 seconds three threshold conditions are satisfied for the IP address it tracks -- (1) at least one iptables event has been observed for this source IP, (2) at least one SSH login failure has been observed for this client IP, (3) at least one successful SSH login has been observed for this client IP and for some user, provided that the operation has previously observed an SSH login failure for the same user and same client IP. Suppose the following events occur, and each event timestamp reflects the time SEC observes the event: Dec 27 19:00:06 test kernel: iptables: IN=eth0 OUT= MAC=00:13:72:8a:83:d2:00:1b:25:07:e2:1b:08:00 SRC=10.1.2.7 DST=10.2.5.5 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1881 DF PROTO=TCP SPT=34342 DPT=23 WINDOW=5840 RES=0x00 SYN URGP=0 Dec 27 19:00:14 test sshd[10520]: Accepted password for root from 10.1.2.7 port 52609 ssh2 Dec 27 19:00:24 test sshd[10526]: Failed password for risto from 10.1.2.7 port 52622 ssh2 Dec 27 19:00:27 test sshd[10526]: Accepted password for risto from 10.1.2.7 port 52622 ssh2 The iptables event at 19:00:06 starts an event correlation operation for 10.1.2.7 which has the event correlation window of 120 seconds. Immediately after the operation has been started, it creates the context OPER_10.1.2.7. The second event at 19:00:14 does not match the rule, since the context LOGIN_FAILED_root_10.1.2.7 does not exist. The third event at 19:00:24 matches the rule, and the operation which is running for 10.1.2.7 sets up the alias name LOGIN_FAILED_risto_10.1.2.7 for the context OPER_10.1.2.7. Finally, the fourth event at 19:00:27 matches the rule, since the context LOGIN_FAILED_risto_10.1.2.7 exists, and the event is therefore processed by the operation (the presence of the context indicates that the operation has previously observed a login failure for user risto from 10.1.2.7). At this particular moment (19:00:27), all three threshold conditions for the operation are fulfilled, and therefore it sends an e-mail warning about 10.1.2.7 to root@localhost. After that, the operation silently consumes all further matching events for 10.1.2.7 until 19:02:06, and then terminates. Immediately before termination, the operation deletes the context OPER_10.1.2.7 which also drops its alias name LOGIN_FAILED_risto_10.1.2.7. SUPPRESS RULE The Suppress rule supports the following fields. Note that match variables may be used in the context field. type fixed to Suppress (value is case insensitive). ptype pattern type (value is case insensitive). pattern pattern. varmap (optional) variable map. context (optional) context expression. desc (optional) string for describing the rule. rem (optional, may appear more than once) remarks and comments. The Suppress rule takes no action when an event has matched the rule, and keeps matching events from being processed by later rules in the configuration file. Note that the Suppress rule does not start event correlation operations, and the optional desc field is merely used for describing the rule. Also, in order to end event processing, so that no further rules from any of the configuration files would be tried, use the Jump rule. 
Examples: type=Suppress ptype=RegExp pattern=sshd\[\d+\]: Failed .+ for \S+ from ([\d.]+) port \d+ ssh2 context=SUPPRESS_IP_$1 type=SingleWithThreshold ptype=RegExp pattern=sshd\[\d+\]: Failed .+ for (\S+) from ([\d.]+) port \d+ ssh2 desc=Three SSH login failures within 1m for user $1 from $2 action=pipe '%t: %s' /bin/mail root@localhost; \ create SUPPRESS_IP_$2 3600 window=60 thresh=3 The first rule filters out SSH login failure events for an already reported source IP address, so that they will not be matched against the second rule during 3600 seconds after sending an e-mail warning. CALENDAR RULE The Calendar rule supports the following fields. type fixed to Calendar (value is case insensitive). time time specification. context (optional) context expression. desc operation description string. action action list. rem (optional, may appear more than once) remarks and comments. The Calendar rule was designed for executing actions at specific times. Unlike all other rules, this rule reacts only to the system clock, ignoring other input. The Calendar rule executes the action list given with the action field if the current time matches all conditions of the time specification given with the time field. The action list is executed only once for any matching minute. The rule employs a time specification which closely resembles the crontab(1) style, but there are some subtle differences. The time specification consists of five or six conditions separated by whitespace. The first condition matches minutes (allowed values are 0-59), the second condition matches hours (allowed values are 0-23), the third condition matches days (allowed values are 1-31), the fourth condition matches months (allowed values are 1-12), and the fifth condition matches weekdays (allowed values are 0-7, with 0 and 7 denoting Sunday). The sixth condition is optional and matches years (allowed values are 0-99 which denote the last two digits of the year). Asterisks (*), ranges of numbers (e.g., 8-11), and lists (e.g., 2,5,7-9) are allowed as conditions. Asterisks and ranges may be augmented with step values (e.g., 47-55/2 means 47,49,51,53,55). Note that unlike crontab(1) time specification, the day and weekday conditions are *not* joined with logical OR, but rather with logical AND. Therefore, 0 1 25-31 10 7 means 1AM on the last Sunday of October. On the other hand, with crontab(1) the same specification means 1AM on each of the last seven days of October, as well as on every Sunday in October. Also, unlike some versions of cron(8), SEC is not restricted to take action only during the first second of the current minute. For example, if SEC is started at the 22nd second of a minute, a wildcard time specification still produces a match for this minute. As another example, if the time specification matches the current minute but the context expression evaluates FALSE during the first half of the minute, the Calendar rule will execute the action list in the middle of this minute when the expression value becomes TRUE. Note that the Calendar rule does not start event correlation operations, and the desc field is merely used for setting the %s action list variable. Examples: type=Calendar time=0 2 25-31 3,12 6 desc=Check if backup is done on last Saturday of Q1 and Q4 action=event WAITING_FOR_BACKUP type=Calendar time=0 2 24-30 6,9 6 desc=Check if backup is done on last Saturday of Q2 and Q3 action=event WAITING_FOR_BACKUP type=PairWithWindow ptype=SubStr pattern=WAITING_FOR_BACKUP desc=Quarterly backup not completed on time! 
action=pipe '%t: %s' /bin/mail root@localhost ptype2=SubStr pattern2=BACKUP READY desc2=Quarterly backup successfully completed action2=none window=1800 The first two rules create a synthetic event WAITING_FOR_BACKUP at 2AM on the last Saturday of March, June, September and December. The third rule matches this event and starts an event correlation operation which waits for the BACKUP READY event for 1800 seconds. If this event has not arrived by 2:30AM, the operation sends an e-mail warning to root@localhost. JUMP RULE The Jump rule supports the following fields. Note that match variables may be used in the context field. They may also be used in the cfset field, provided that the constset field is set to No. type fixed to Jump (value is case insensitive). continue (optional) TakeNext, DontCont, EndMatch or GoTo <label> (apart from <label>, values are case insensitive). ptype pattern type (value is case insensitive). pattern pattern. varmap (optional) variable map. context (optional) context expression. cfset (optional) configuration file set names that are separated by whitespace. constset (optional) Yes or No (values are case insensitive, default is Yes). desc (optional) string for describing the rule. rem (optional, may appear more than once) remarks and comments. The Jump rule submits matching events to specific ruleset(s) for further processing. If the event matches the rule, SEC continues the search for matching rules in configuration file set(s) given with the cfset field. Rules from every file are tried in the order of their appearance in the file. Configuration file sets can be created with Options rules using the joincfset field, with each set containing at least one configuration file. If more than one set name is given with cfset, sets are processed from left to right; a matching rule in one set doesn't prevent SEC from processing the following sets. If the constset field is set to Yes, set names are assumed to be constants and will not be searched for match variables at runtime. If the cfset field is not present and the continue field is set to GoTo, the Jump rule can be used for skipping rules inside the current configuration file. If both cfset and continue are not present (or continue is set to DontCont), the Jump rule is identical to the Suppress rule. Finally, if cfset is not present and continue is set to EndMatch, processing of the matching event ends (i.e., no further rules from any of the configuration files will be tried). Note that the Jump rule does not start event correlation operations, and the optional desc field is merely used for describing the rule. Examples: type=Jump ptype=RegExp pattern=sshd\[\d+\]: cfset=sshd-rules auth-rules When an sshd syslog message appears in input, rules from configuration files of the set sshd-rules are first used for matching the message, and then rules from the configuration file set auth-rules are tried. OPTIONS RULE The Options rule supports the following fields. type fixed to Options (value is case insensitive). joincfset (optional) configuration file set names that are separated by whitespace. procallin (optional) Yes or No (values are case insensitive, default is Yes). rem (optional, may appear more than once) remarks and comments. The Options rule sets processing options for the ruleset in the current configuration file. If more than one Options rule is present in the configuration file, the last instance overrides all previous ones. Note that the Options rule is only processed when SEC (re)starts and reads in the configuration file. 
Since this rule is not applied at runtime, it can never match events, react to the system clock, or start event correlation operations. The joincfset field lists the names of one or more configuration file sets, and the current configuration file will be added to each set. If a set doesn't exist, it will be created and the current configuration file becomes its first member. If the procallin field is set to No, the rules from the configuration file will be used for matching input from Jump rules only. Examples: The following rule adds the current configuration file to the set sshd-rules which is used for matching input from Jump rules only: type=Options joincfset=sshd-rules procallin=no The following rule adds the current configuration file to sets linux and solaris which are used for matching all input: type=Options joincfset=linux solaris
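To illustrate how the procallin field interacts with the Jump rule described earlier, consider the following minimal two-file sketch (the file names /etc/sec/main.conf and /etc/sec/sshd.conf are hypothetical, and both files would be given to SEC with --conf options):

# /etc/sec/sshd.conf -- member of the set sshd-rules, used for input from Jump rules only
type=Options
joincfset=sshd-rules
procallin=no

type=Single
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (\S+) from [\d.]+ port \d+ ssh2
desc=SSH login failure for user $1
action=logonly

# /etc/sec/main.conf -- routes sshd syslog messages to the set sshd-rules
type=Jump
ptype=RegExp
pattern=sshd\[\d+\]:
cfset=sshd-rules

With this setup, the Single rule in /etc/sec/sshd.conf never matches input directly; sshd messages reach it only through the Jump rule in /etc/sec/main.conf, while all other input bypasses /etc/sec/sshd.conf entirely.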
EVENT CORRELATION OPERATIONS
Event correlation operations are dynamic entities created by rules. After creating an operation, the rule also feeds the operation with events that need to be correlated. Since each rule can create and feed many operations which are running simultaneously, each operation needs a unique ID. In order to identify event correlation operations, SEC assigns an ID to every operation that is composed from the configuration file name, the rule number, and the operation description string (defined by the desc field of the rule). If there are N rules in the configuration file (excluding Options rules), the rule numbers belong to the range 0..N-1, and the number of the k-th rule is k-1. Since each Options rule is only processed when SEC reads in the configuration file and is not applied at runtime, the Options rules will not receive rule numbers. Note that since the configuration file name and rule number are part of the operation ID, different rules can have identical desc fields without a danger of a clash between operations. For example, if the configuration file /etc/sec/my.conf contains only one rule type=SingleWithThreshold ptype=RegExp pattern=user (\S+) login failure on (\S+) desc=Repeated login failures for user $1 on $2 action=pipe '%t: %s' /bin/mail root@localhost window=60 thresh=3 then the number of this rule is 0. When this rule matches an input event "user admin login failure on tty1", the desc field yields an operation description string Repeated login failures for user admin on tty1, and the event will be directed for further processing to the operation with the following ID: /etc/sec/my.conf | 0 | Repeated login failures for user admin on tty1 If the operation for this ID does not exist, the rule will create it. The newly created operation has its event counter initialized to 1, and it expects to receive two additional "user admin login failure on tty1" events from the rule within the following 60 seconds. If the operation receives such an event, its event counter is incremented, and if the counter reaches the value of 3, a warning e-mail is sent to root@localhost. By tuning the desc field of the rule, the scope of individual event correlation operations can be changed. For instance, if the following events occur within 10 seconds user admin login failure on tty1 user admin login failure on tty5 user admin login failure on tty2 the above rule starts three event correlation operations. However, if the desc field of the rule is changed to Repeated login failures for user $1, these events are processed by the *same* event correlation operation (the operation sends a warning e-mail to root@localhost when it receives the third event). Since rules from the same configuration file are matched against input in the order they are given, the rule ordering influences the creation and feeding of event correlation operations. Suppose the configuration file /etc/sec/my.conf contains the following rules: type=Suppress ptype=TValue pattern=TRUE context=MYCONTEXT type=SingleWithThreshold ptype=RegExp pattern=user (\S+) login failure on (\S+) desc=Repeated login failures for user $1 on $2 action=pipe '%t: %s' /bin/mail root@localhost window=60 thresh=3 The second rule is able to create and feed event correlation operations as long as the context MYCONTEXT does not exist. However, after MYCONTEXT has been created, no input event will reach the second rule, and the rule is thus unable to create new operations and feed existing ones with events. 
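One practical use of such rule ordering is gating an entire ruleset with a context. As a sketch building on the configuration above (the maintenance window timing is an assumption), a Calendar rule could create MYCONTEXT for two hours every Saturday night, so that the Suppress rule consumes all input and the SingleWithThreshold rule stays idle during maintenance:

type=Calendar
rem=hypothetical maintenance window rule
time=0 1 * * 6
desc=Disable login failure alerting during weekly maintenance
action=create MYCONTEXT 7200

Since a context created with the create action expires after the given lifetime (7200 seconds here), alerting resumes automatically at 3AM.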
Note that Pair and PairWithWindow rules can feed the same event to several operations. Suppose the configuration file /etc/sec/my2.conf contains the following rules: type=Suppress ptype=SubStr pattern=test type=Pair ptype=RegExp pattern=database (\S+) down desc=Database $1 is down action=pipe '%t: %s' /bin/mail root@localhost ptype2=RegExp pattern2=database $1 up|all databases up desc2=Database %1 is up action2=pipe '%t: %s' /bin/mail root@localhost window=86400 Since the following input events don't contain the substring "test" database mydb1 down database mydb2 down database mydb3 down they are matched by the second rule of type Pair which creates three event correlation operations. Each operation is running for one particular database name, and the operations have the following IDs: /etc/sec/my2.conf | 1 | Database mydb1 is down /etc/sec/my2.conf | 1 | Database mydb2 is down /etc/sec/my2.conf | 1 | Database mydb3 is down Each newly created operation sends an e-mail notification to root@localhost about the "database down" condition, and will then wait for 86400 seconds (24 hours) for either of the following messages: (a) "database up" message for the given database, (b) "all databases up" message. The operation with the ID /etc/sec/my2.conf | 1 | Database mydb1 is down uses the following regular expression for matching expected messages: database mydb1 up|all databases up The operation with the ID /etc/sec/my2.conf | 1 | Database mydb2 is down employs the following regular expression for matching expected messages: database mydb2 up|all databases up Finally, the operation with the ID /etc/sec/my2.conf | 1 | Database mydb3 is down uses the following regular expression: database mydb3 up|all databases up If the following input events appear after 10 minutes database test up admin logged in database mydb3 up all databases up the first event "database test up" matches the first rule (Suppress) which does not pass the event further to the second rule (Pair). However, all following events reach the Pair rule. Since the messages don't match the pattern field of the rule, the rule feeds them to all currently existing operations it has created, so that the operations can match these events with their regular expressions. Because regular expressions of all three operations don't match the event "admin logged in", the operations will continue to run. In the case of the "database mydb3 up" event, the regular expression of the operation /etc/sec/my2.conf | 1 | Database mydb3 is down produces a match. Therefore, the operation will send the e-mail notification "Database mydb3 is up" to root@localhost and terminate. However, the following event "all databases up" matches the regular expressions of two remaining operations. As a result, the operations will send e-mail notifications "Database mydb1 is up" and "Database mydb2 is up" to root@localhost and terminate. Each operation has an event correlation window which defines its scope in time. The size of the window is defined by the window* field, and the beginning of the window can be obtained with the getwpos action. SingleWithThreshold, SingleWith2Thresholds and EventGroup operations can slide its window forward during event processing, while for all operations the window can also be moved explicitly with the setwpos action. Also, with the reset action event correlation operations can be terminated. Note that getwpos, setwpos, and reset actions only work for operations started by the rules from the same configuration file. 
For example, consider the configuration file /etc/sec/sshd.rules that contains the following rules: type=SingleWithThreshold ptype=RegExp pattern=sshd\[\d+\]: Failed .+ for (\S+) from [\d.]+ port \d+ ssh2 desc=Three SSH login failures within 1m for user $1 action=pipe '%t: %s' /bin/mail root@localhost window=60 thresh=3 type=Single ptype=RegExp pattern=sshd\[\d+\]: Accepted .+ for (\S+) from [\d.]+ port \d+ ssh2 desc=SSH login successful for user $1 action=reset -1 Three SSH login failures within 1m for user $1 Suppose the following events are generated by an SSH daemon, and each event timestamp reflects the time SEC observes the event: Dec 29 15:00:03 test sshd[14129]: Failed password for risto from 10.1.2.7 port 31312 ssh2 Dec 29 15:00:08 test sshd[14129]: Failed password for risto from 10.1.2.7 port 31312 ssh2 Dec 29 15:00:17 test sshd[14129]: Accepted password for risto from 10.1.2.7 port 31312 ssh2 Dec 29 15:00:52 test sshd[14142]: Failed password for risto from 10.1.1.2 port 17721 ssh2 The first event at 15:00:03 starts an event correlation operation with the ID /etc/sec/sshd.rules | 0 | Three SSH login failures within 1m for user risto However, when the third event occurs at 15:00:17, the second rule matches it and terminates the operation with the action reset -1 Three SSH login failures within 1m for user risto The -1 parameter of reset restricts the action to operations started by the previous rule (i.e., the first rule that has a number 0), while the Three SSH login failures within 1m for user risto parameter refers to the operation description string. Together with the current configuration file name (/etc/sec/sshd.rules), the parameters yield the operation ID /etc/sec/sshd.rules | 0 | Three SSH login failures within 1m for user risto (If the operation with the given ID would not exist, reset would perform no operation.) As a consequence, the fourth event at 15:00:52 starts another operation with the same ID as the terminated operation had. Without the second rule, the operation that was started at 15:00:03 would not be terminated, and the appearance of the fourth event would trigger a warning e-mail from that operation.
INPUT PROCESSING AND TIMING
SEC processes input data iteratively by reading one line at each iteration, writing this line into a relevant input buffer, and matching the content of the updated buffer with rules from configuration files. If during the matching process an action list is executed which creates new input events (e.g., through the event action), they are *not* written to buffer(s) immediately, but rather consumed at following iterations. Note that when both synthetic events and regular input are available for processing, synthetic events are always consumed first. When all synthetic events have been consumed iteratively, SEC will start processing new data from input files. With the --jointbuf option, SEC employs a joint input buffer for all input sources which holds N last input lines (the value of N can be set with the --bufsize option). Updating the input buffer means that the new line becomes the first element of the buffer, while the last element (the oldest line) is removed from the end of the buffer. With the --nojointbuf option, SEC maintains a buffer of N lines for each input file, and if the input line comes from file F, the buffer of F is updated as described previously. There is also a separate buffer for synthetic and internal events. Suppose SEC is started with the following command line

  /usr/bin/sec --conf=/etc/sec/test-multiline.conf --jointbuf \
      --input=/var/log/prog1.log --input=/var/log/prog2.log

and the configuration file /etc/sec/test-multiline.conf has the following content:

  type=Single
  rem=this rule matches two consecutive lines where the first \
      line contains "test1" and the second line "test2", and \
      writes the matching lines to standard output
  ptype=RegExp2
  pattern=test1.*\n.*test2
  desc=two consecutive test lines
  action=write - $0

When the following lines appear in input files /var/log/prog1.log and /var/log/prog2.log

  Dec 31 12:33:12 test prog1: test1 (file /var/log/prog1.log)
  Dec 31 12:34:09 test prog2: test1 (file /var/log/prog2.log)
  Dec 31 12:39:35 test prog1: test2 (file /var/log/prog1.log)
  Dec 31 12:41:53 test prog2: test2 (file /var/log/prog2.log)

they are stored in a common input buffer. Therefore, the rule fires after the third event has appeared, and writes the following lines to standard output:

  Dec 31 12:34:09 test prog2: test1 (file /var/log/prog2.log)
  Dec 31 12:39:35 test prog1: test2 (file /var/log/prog1.log)

However, if SEC is started with the --nojointbuf option, separate input buffers are set up for /var/log/prog1.log and /var/log/prog2.log. Therefore, the rule fires after the third event has occurred, and writes the following lines to standard output:

  Dec 31 12:33:12 test prog1: test1 (file /var/log/prog1.log)
  Dec 31 12:39:35 test prog1: test2 (file /var/log/prog1.log)

The rule also fires after the fourth event has occurred, producing the following output:

  Dec 31 12:34:09 test prog2: test1 (file /var/log/prog2.log)
  Dec 31 12:41:53 test prog2: test2 (file /var/log/prog2.log)

The content of input buffers can be modified with the rewrite action, and modifications become visible immediately during the ongoing event processing iteration.
Suppose SEC is started with the following command line

  /usr/bin/sec --conf=/etc/sec/test-rewrite.conf \
      --input=- --nojointbuf

and the configuration file /etc/sec/test-rewrite.conf has the following content:

  type=Single
  rem=this rule matches two consecutive lines where the first \
      line contains "test1" and the second line "test2", and \
      joins these lines in the input buffer
  ptype=RegExp2
  pattern=^(.*test1.*)\n(.*test2.*)$
  continue=TakeNext
  desc=join two test lines
  action=rewrite 2 Joined $1 and $2

  type=Single
  rem=this rule matches a line which begins with "Joined", \
      and writes this line to standard output
  ptype=RegExp
  pattern=^Joined
  desc=output joined lines
  action=write - $0

When the following two lines appear in standard input

  This is a test1
  This is a test2

they are matched by the first rule which uses the rewrite action for replacing those two lines in the input buffer with new content. The last line in the input buffer ("This is a test2") is replaced with "Joined This is a test1 and This is a test2", while the previous line in the input buffer ("This is a test1") is replaced with an empty string. Since the rule contains the continue=TakeNext statement, the matching process will continue from the following rule. This rule matches the last line in the input buffer if it begins with "Joined", and writes the line to standard output, producing

  Joined This is a test1 and This is a test2

After each event processing iteration, the pattern match cache is cleared. In other words, if a match is cached with the rule varmap* field, it is available during the ongoing iteration only. Note that results from a successful pattern matching are also cached when the subsequent context expression evaluation yields FALSE. This allows for reusing results from partial rule matches. For example, the following rule creates the cache entry "ssh_failed_login" for any SSH failed login event, even if the context ALERTING_ON does not exist:

  type=Single
  ptype=RegExp
  pattern=sshd\[\d+\]: Failed .+ for (\S+) from ([\d.]+) port \d+ ssh2
  varmap=ssh_failed_login; user=1; ip=2
  context=ALERTING_ON
  desc=SSH login failure for user $1 from $2
  action=pipe '%s' /bin/mail -s 'SSH login alert' root@localhost

However, provided the context expression does not contain match variables, enclosing the expression in square brackets (e.g., [ALERTING_ON]) forces its evaluation before the pattern matching, and will thus prevent the matching and the creation of the cache entry if the evaluation yields FALSE. Rules from the same configuration file are matched against the buffer content in the order they are given in that file. When multiple configuration files have been specified, rule sequences from all files are matched against the buffer content (unless specified otherwise with Options rules). The matching order is determined by the order of configuration files in the SEC command line. For example, if the Perl glob() function returns filenames in ascending ASCII order, and configuration files /home/risto/A.conf, /home/risto/B.conf2, and /home/risto/C.conf are specified with

  --conf=/home/risto/*.conf --conf=/home/risto/*.conf2

in the SEC command line, then SEC first matches the input against the rule sequence from A.conf, then from C.conf, and finally from B.conf2. Also, note that even if A.conf contains a Suppress rule for a particular event, the event is still processed by rulesets in C.conf and B.conf2. However, note that glob() might return file names in a different order if locale settings change.
If you want to enforce a fixed order for configuration file application in a portable way, it is recommended to create a unique set for each file with the Options rule, and employ the Jump rule for defining the processing order for sets, e.g.: # This rule appears in A.conf type=Options joincfset=FileA procallin=no # This rule appears in B.conf2 type=Options joincfset=FileB procallin=no # This rule appears in C.conf type=Options joincfset=FileC procallin=no # This rule appears in main.conf type=Jump ptype=TValue pattern=TRUE cfset=FileA FileC FileB After the relevant input buffer has been updated and its content has been matched by the rules, SEC handles caught signals and checks the status of child processes. When the timeout specified with the --cleantime option has expired, SEC also checks the status of contexts and event correlation operations. Therefore, relatively small values should be specified with the --cleantime option, in order to retain the accuracy of the event correlation process. If the --cleantime option is set to 0, SEC checks event correlation operations and contexts after processing every input line, but this consumes more CPU time. If the --poll-timeout option value exceeds the value given with --cleantime, the --poll-timeout option value takes precedence (i.e., sleeps after unsuccessful polls will not be shortened). Finally, note that apart from the sleeps after unsuccessful polls, SEC measures all time intervals and occurrence times in seconds, and always uses the time(2) system call for obtaining the current time. Also, for input event occurrence time SEC always uses the time it observed the event, *not* the timestamp extracted from the event.
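For example, the following command line (with illustrative values, and assuming that --poll-timeout accepts fractional seconds) makes SEC check its contexts and event correlation operations roughly once a second, while sleeping for 0.1 seconds after unsuccessful polls of input files:

  /usr/bin/sec --conf=/etc/sec/my.conf --input=/var/log/messages \
      --cleantime=1 --poll-timeout=0.1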
INTERNAL EVENTS AND CONTEXTS
In the action list of a context, the context can also be referred to with the internal context name _THIS. The name _THIS is created and deleted dynamically by SEC and it points to the context only during its action list execution. This feature is useful when the context has had several names during its lifetime (created with the alias action), and it is hard to determine which names exist when the context expires. For example, if the context is created with create A 60 (report A /bin/mail root) which is immediately followed by alias A B and unalias A, the report action will fail since the name A no longer refers to the context. However, replacing the first action with create A 60 (report _THIS /bin/mail root) will produce the correct result. If the --intevents command line option is given, SEC will generate internal events when it is started up, when it receives certain signals, and when it terminates normally. Inside SEC, an internal event is treated as if it were a line read from a SEC input file. Specific rules can be written to match internal events, in order to take some action (e.g., start an external event correlation module with spawn when SEC starts up). The following internal events are supported:

SEC_STARTUP - generated when SEC is started (this event will always be the first event that SEC sees)

SEC_PRE_RESTART - generated before processing of the SIGHUP signal (this event will be the last event that SEC sees before clearing all internal data structures and reloading its configuration)

SEC_RESTART - generated after processing of the SIGHUP signal (this event will be the first event that SEC sees after clearing all internal data structures and reloading its configuration)

SEC_PRE_SOFTRESTART - generated before processing of the SIGABRT signal (this event will be the last event that SEC sees before reloading its configuration)

SEC_SOFTRESTART - generated after processing of the SIGABRT signal (this event will be the first event that SEC sees after reloading its configuration)

SEC_PRE_LOGROTATE - generated before processing of the SIGUSR2 signal (this event will be the last event that SEC sees before reopening its log file and closing its outputs)

SEC_LOGROTATE - generated after processing of the SIGUSR2 signal (this event will be the first event that SEC sees after reopening its log file and closing its outputs)

SEC_SHUTDOWN - generated when SEC receives the SIGTERM signal, or when SEC reaches all EOFs of input files after being started with the --notail option. With the --childterm option, SEC sleeps for 3 seconds after generating the SEC_SHUTDOWN event, and then sends SIGTERM to its child processes (if a child process was triggered by SEC_SHUTDOWN, this delay leaves the process enough time for setting a signal handler for SIGTERM).

Before generating an internal event, SEC sets up a context named SEC_INTERNAL_EVENT, in order to disambiguate internal events from regular input. The SEC_INTERNAL_EVENT context is deleted immediately after the internal event has been matched against all rules. If the --intcontexts command line option is given, or there is an --input option with a context specified, SEC creates an internal context each time it reads a line from an input file or consumes a synthetic event. The internal context is deleted immediately after the line has been matched against all rules. For all input files that have the context name explicitly set with --input=<file_pattern>=<context>, the name of the internal context is <context>.
If the line was read from the input file <filename> for which there is no context name set, the name of the internal context is _FILE_EVENT_<filename>. For synthetic events, the name of the internal context defaults to _INTERNAL_EVENT, but cspawn and cevent actions can be used for generating synthetic events with custom internal context names. This allows for writing rules that match data from one particular input source only. For example, the rule type=Suppress ptype=TValue pattern=TRUE context=[!_FILE_EVENT_/dev/logpipe] passes only the lines that were read from /dev/logpipe, and also synthetic events that were generated with the _FILE_EVENT_/dev/logpipe internal context (e.g., with the action cevent _FILE_EVENT_/dev/logpipe 0 This is a test event). As another example, if SEC has been started with the command line /usr/bin/sec --intevents --intcontexts --conf=/etc/sec/my.conf \ --input=/var/log/messages=MESSAGES \ --input=/var/log/secure=SECURE \ --input=/var/log/cron=CRON and the rule file /etc/sec/my.conf contains the following rules type=Single ptype=RegExp pattern=^(?:SEC_STARTUP|SEC_RESTART)$ context=[SEC_INTERNAL_EVENT] desc=listen on 10514/tcp for incoming events action=cspawn MESSAGES /usr/bin/nc -l -k 10514 type=Single ptype=RegExp pattern=. context=[MESSAGES] desc=echo everything from 10514/tcp and /var/log/messages action=write - $0 then SEC will receive input lines from the log files /var/log/messages, /var/log/secure, and /var/log/cron, and will also run /usr/bin/nc for receiving input lines from the port 10514/tcp. All input lines from /var/log/messages and 10514/tcp are matched by the second rule and written to standard output.
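In the same setup, a rule can be restricted to events arriving from /var/log/secure by referring to the SECURE context assigned on the command line above. The following sketch simply reuses the SSH failed login pattern from earlier examples:

  type=Single
  ptype=RegExp
  pattern=sshd\[\d+\]: Failed .+ for (\S+) from ([\d.]+) port \d+ ssh2
  context=[ SECURE ]
  desc=SSH login failure for user $1 from $2
  action=write - $0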
CHILD PROCESSES
The SingleWithScript rule and shellcmd, spawn, cspawn, cmdexec, spawnexec, cspawnexec, pipe, pipeexec, report, and reportexec actions fork a child process for executing an external program. For the SingleWithScript rule with the shell=yes setting and for shellcmd, spawn, cspawn, pipe, and report actions, the following rule applies -- if the program command line contains shell metacharacters, the command line is first parsed by the shell which then starts the program. For the SingleWithScript rule with the shell=no setting and for cmdexec, spawnexec, cspawnexec, pipeexec, and reportexec actions, the program command line is not parsed by the shell, even if shell metacharacters are present in the command line. Disabling shell parsing for command lines can be useful for avoiding unwanted side effects. For example, consider the following badly written rule for sending an e-mail to a local user if 10 SSH login failures have been observed for this user from the same IP address during 300 seconds:

  type=SingleWithThreshold
  ptype=RegExp
  pattern=sshd\[\d+\]: Failed .+ for (.+) from ([\d.]+) port \d+ ssh2
  desc=Failed SSH logins for user $1 from $2
  action=pipe 'Failed SSH logins from $2' /bin/mail -s alert $1
  window=300
  thresh=10

Unfortunately, the above rule allows for the execution of arbitrary command lines with the privileges of the SEC process. For example, consider the following malicious command line for providing fake input events for the rule:

  logger -p authpriv.info -t sshd -i 'Failed password for `/usr/bin/touch /tmp/test` from 127.0.0.1 port 12345 ssh2'

When this command line is repeatedly executed, the attacker is able to trigger the execution of the command line /bin/mail -s alert `/usr/bin/touch /tmp/test`. Since this command line is parsed by the shell, the backquotes trigger the execution of the command specified by the attacker: /usr/bin/touch /tmp/test. For fixing this issue, the pipe action can be replaced with pipeexec which disables the shell parsing:

  action=pipeexec 'Failed SSH logins from $2' /bin/mail -s alert $1

As another workaround, the regular expression pattern of the rule can be modified to match user names that do not contain shell metacharacters, for example:

  pattern=sshd\[\d+\]: Failed .+ for ([\w.-]+) from ([\d.]+) port \d+ ssh2

SEC communicates with its child processes through pipes (created with the pipe(2) system call). When the child process is at the read end of the pipe, data have to be written to the pipe in blocking mode which ensures reliable data transmission. In order to avoid being blocked, SEC forks another SEC process for writing data to the pipe reliably. The newly created SEC process will then fork the child process, managing the child process on behalf of the main SEC process (i.e., the main SEC process is the grandparent process for the child). For example, if the SEC process that manages the child receives the SIGTERM signal, the signal will be forwarded to the child process, and when the child process terminates, its exit code will be reported to the main SEC process. After forking an external program, SEC continues immediately, and checks the program status periodically until the program exits. The running time of a child process is not limited in any way. With the --childterm option, SEC sends the SIGTERM signal to all child processes when it terminates. If some special exit procedures need to be accomplished in the child process (or the child wishes to ignore SIGTERM), then the child must install a handler for the SIGTERM signal.
Note that if the program command line is parsed by the shell, the parsing shell will run as a child process of SEC and the parent process of the program. Therefore, the SIGTERM signal will be sent to the shell, *not* the program. In order to avoid this, the shell's builtin exec command can be used (see sh(1) for more information) which replaces the shell with the program without forking a new process, e.g.,

  action=spawn exec /usr/local/bin/myscript.pl 2>/var/log/myscript.log

Note that if an action list includes two actions which fork external programs, the execution order of these programs is not determined by the order of actions in the list, since both programs are running asynchronously. In order to address this issue, the execution order must be specified explicitly (e.g., instead of writing action=shellcmd cmd1; shellcmd cmd2, use the shell && operator and write action=shellcmd cmd1 && cmd2). Sometimes it is desirable to start an external program and provide it with data from several rules. In order to create such a setup, named pipes can be harnessed. For example, if /var/log/pipe is a named pipe, then

  action=shellcmd /usr/bin/logger -f /var/log/pipe -p user.notice

starts the /usr/bin/logger utility which sends all lines read from /var/log/pipe to the local syslog daemon with the "user" facility and "notice" level. In order to feed events to /usr/bin/logger, the write action can be used (e.g., write /var/log/pipe This is my event). Although SEC keeps the named pipe open across different write actions, the pipe will be closed on the reception of SIGHUP, SIGABRT and SIGUSR2 signals. Since many UNIX tools terminate on receiving EOF from standard input, they need restarting after such signals have arrived. For this purpose, the --intevents option and SEC internal events can be used. For example, the following rule starts the /usr/bin/logger utility at SEC startup, and also restarts it after the reception of relevant signals:

  type=Single
  ptype=RegExp
  pattern=^(?:SEC_STARTUP|SEC_RESTART|SEC_SOFTRESTART|SEC_LOGROTATE)$
  context=SEC_INTERNAL_EVENT
  desc=start the logger tool
  action=free %emptystring; owritecl /var/log/pipe %emptystring; \
         shellcmd /usr/bin/logger -f /var/log/pipe -p user.notice

Note that if /var/log/pipe is never opened for writing by a write action, /usr/bin/logger will never see EOF and will thus not terminate. The owritecl action opens and closes /var/log/pipe without writing any bytes, in order to ensure the presence of EOF in such cases. This allows any previous /usr/bin/logger process to terminate before the new process is started.
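To complete the picture, the named pipe itself is usually created beforehand with mkfifo(1), and events can be fed to the waiting /usr/bin/logger process from any rule with the write action. The rule below is only a sketch -- its pattern and message format are made up for illustration:

  type=Single
  ptype=RegExp
  pattern=myapp\[\d+\]: (.+)
  desc=forward myapp messages to syslog through /var/log/pipe
  action=write /var/log/pipe $1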
PERL INTEGRATION
SEC supports patterns, context expressions, and actions which involve calls to the Perl eval() function or the execution of precompiled Perl code. The use of Perl code in SEC patterns and context expressions allows for creating proper match conditions for scenarios which can't be handled by a simple regular expression match. For example, consider the following iptables syslog events:

  May 27 10:00:15 box1 kernel: iptables: IN=eth0 OUT= MAC=08:00:27:be:9e:2f:00:10:db:ff:20:03:08:00 SRC=10.6.4.14 DST=10.1.8.2 LEN=84 TOS=0x00 PREC=0x00 TTL=251 ID=61426 PROTO=ICMP TYPE=8 CODE=0 ID=11670 SEQ=2
  May 27 10:02:22 box1 kernel: iptables: IN=eth0 OUT= MAC=08:00:27:be:9e:2f:00:10:db:ff:20:03:08:00 SRC=10.6.4.14 DST=10.1.8.2 LEN=52 TOS=0x00 PREC=0x00 TTL=60 ID=61441 DF PROTO=TCP SPT=53125 DPT=23 WINDOW=49640 RES=0x00 SYN URGP=0

Depending on the protocol and the nature of the traffic, events can have a wide variety of fields, and parsing out all event data with one regular expression is infeasible. For addressing this issue, a PerlFunc pattern can be used which creates match variables from all fields of the matching event, stores them in one Perl hash, and returns a reference to this hash. Outside the PerlFunc pattern, match variables are initialized from the key-value pairs in the returned hash. Suppose the following Jump rule with a PerlFunc pattern is defined in the main.rules rule file:

  type=Jump
  ptype=PerlFunc
  pattern=sub { my(%var); my($line) = $_[0]; \
          if ($line !~ /kernel: iptables:/g) { return 0; } \
          while ($line =~ /\G\s*([A-Z]+)(?:=(\S*))?/g) { \
            $var{$1} = defined($2)?$2:1; \
          } return \%var; }
  varmap=IPTABLES
  desc=parse iptables event
  cfset=iptables

For example, if the iptables event contains the fields SRC=10.6.4.14, DST=10.1.8.2 and SYN, the above PerlFunc pattern sets up match variable $+{SRC} which holds 10.6.4.14, match variable $+{DST} which holds 10.1.8.2, and match variable $+{SYN} which holds 1. The Jump rule caches all created match variables under the name IPTABLES, and submits the matching event to the iptables ruleset for further processing. Suppose the iptables ruleset is defined in the iptables.rules rule file:

  type=Options
  procallin=no
  joincfset=iptables

  type=SingleWithThreshold
  ptype=Cached
  pattern=IPTABLES
  context=IPTABLES :> ( sub { return $_[0]->{"PROTO"} eq "ICMP"; } )
  desc=ICMP flood type $+{TYPE} code $+{CODE} from host $+{SRC}
  action=logonly
  window=10
  thresh=100

  type=SingleWithThreshold
  ptype=Cached
  pattern=IPTABLES
  context=IPTABLES :> ( sub { return exists($_[0]->{"SYN"}) && \
          exists($_[0]->{"FIN"}) ; } )
  desc=SYN+FIN flood from host $+{SRC}
  action=logonly
  window=10
  thresh=100

The two SingleWithThreshold rules employ Cached patterns for matching iptables events by looking up the IPTABLES entry in the pattern match cache (created by the above Jump rule for each iptables event). In order to narrow down the match to specific iptables events, the rules employ precompiled Perl functions in context expressions. The :> operator is used for speeding up the matching, providing the function with a single parameter which refers to the hash of variable name-value pairs for the IPTABLES cache entry. The first SingleWithThreshold rule logs a warning message if within 10 seconds 100 iptables events have been observed for ICMP packets with the same type, code, and source IP address.
The second SingleWithThreshold rule logs a warning message if within 10 seconds 100 iptables events have been observed for TCP packets coming from the same host, and having both SYN and FIN flag set in each packet. Apart from using action list variables for data sharing between rules, Perl variables created in Perl code can be employed for the same purpose. For example, when SEC has executed the following action action=eval %a ($b = 1) the variable $b and its value become visible in the following context expression context= =(++$b > 10) (with that expression one can implement event counting implicitly). In order to avoid possible clashes with variables inside the SEC code itself, user-defined Perl code is executed in the main::SEC namespace (i.e., inside the special package main::SEC). By using the main:: prefix, SEC data structures can be accessed and modified. For example, the following rules restore and save contexts with names MY_* on SEC startup and shutdown, using Perl Storable module for saving and restoring relevant elements of %main::context_list hash (since the following example does not handle code references with Storable module, it is assumed that context action lists do not contain lcall actions): type=Single ptype=SubStr pattern=SEC_STARTUP context=SEC_INTERNAL_EVENT continue=TakeNext desc=Load the Storable module and terminate if it is not found action=eval %ret (require Storable); \ if %ret ( logonly Storable loaded ) else ( eval %o exit(1) ) type=Single ptype=SubStr pattern=SEC_STARTUP context=SEC_INTERNAL_EVENT desc=Restore contexts MY_* from /var/lib/sec/SEC_CONTEXTS on startup action=lcall %ret -> ( sub { my($ref, $context); \ $ref = Storable::retrieve("/var/lib/sec/SEC_CONTEXTS"); \ foreach $context (keys %{$ref}) { \ if ($context =~ /^MY_/) \ { $main::context_list{$context} = $ref->{$context}; } } } ) type=Single ptype=SubStr pattern=SEC_SHUTDOWN context=SEC_INTERNAL_EVENT desc=Save contexts MY_* into /var/lib/sec/SEC_CONTEXTS on shutdown action=lcall %ret -> ( sub { my($context, %hash); \ foreach $context (keys %main::context_list) { \ if ($context =~ /^MY_/) \ { $hash{$context} = $main::context_list{$context}; } } \ Storable::store(\%hash, "/var/lib/sec/SEC_CONTEXTS"); } ) However, note that modifying data structures within SEC code is recommended only for advanced users who have carefully studied relevant parts of the code. Finally, sometimes larger chunks of Perl code have to be used for event processing and correlation. However, writing many lines of code directly into a rule is cumbersome and may decrease its readability. In such cases it is recommended to separate the code into a custom Perl module which is loaded at SEC startup, and use the code through the module interface (see perlmod(1) for further details): type=Single ptype=SubStr pattern=SEC_STARTUP context=SEC_INTERNAL_EVENT desc=Load the SecStuff module action=eval %ret (require '/usr/local/sec/SecStuff.pm'); \ if %ret ( none ) else ( eval %o exit(1) ) type=Single ptype=PerlFunc pattern=sub { return SecStuff::my_match($_[0]); } desc=event '$0' was matched by my_match() action=write - %s
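The content of such a module is entirely up to the user; the rule above only implies the interface, i.e., a my_match() function that receives the input line and returns a true or false value. A minimal hypothetical sketch of /usr/local/sec/SecStuff.pm could therefore look like this:

  package SecStuff;
  use strict;
  use warnings;

  # Return 1 if the input line is of interest to this module, and 0 otherwise
  # (the regular expression below is merely a placeholder for real logic).
  sub my_match {
    my($line) = @_;
    return ($line =~ /error|failure/i) ? 1 : 0;
  }

  1;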
EXAMPLES
Example 1 - a ruleset for Cisco events This section presents an example rulebase for managing Cisco devices. It is assumed that the managed devices have syslog logging enabled, and that all syslog messages are sent to a central host and written to log file(s) that are monitored by SEC. # Set up contexts NIGHT and WEEKEND for nights # and weekends. The context NIGHT has a lifetime # of 8 hours and the context WEEKEND 2 days type=Calendar time=0 23 * * * desc=NIGHT action=create %s 28800 type=Calendar time=0 0 * * 6 desc=WEEKEND action=create %s 172800 # If a router does not come up within 5 minutes # after it was rebooted, generate event # "<router> REBOOT FAILURE". The next rule matches # this event, checks the router with ping and sends # a notification if there is no response. type=PairWithWindow ptype=RegExp pattern=\s([\w.-]+) \d+: %SYS-5-RELOAD desc=$1 REBOOT FAILURE action=event %s ptype2=RegExp pattern2=\s$1 \d+: %SYS-5-RESTART desc2=%1 successful reboot action2=logonly window=300 type=SingleWithScript ptype=RegExp pattern=^([\w.-]+) REBOOT FAILURE script=/bin/ping -c 3 -q $1 desc=$1 did not come up after reboot action=logonly $1 is pingable after reboot action2=pipe '%t: %s' /bin/mail root@localhost # Send a notification if CPU load of a router is too # high (two CPUHOG messages are received within 5 # minutes); send another notification if the load is # normal again (no CPUHOG messages within last 15 # minutes). Rule is not active at night or weekend. type=SingleWith2Thresholds ptype=RegExp pattern=\s([\w.-]+) \d+: %SYS-3-CPUHOG context=!(NIGHT || WEEKEND) desc=$1 CPU overload action=pipe '%t: %s' /bin/mail root@localhost window=300 thresh=2 desc2=$1 CPU load normal action2=pipe '%t: %s' /bin/mail root@localhost window2=900 thresh2=0 # If a router interface is in down state for less # than 15 seconds, generate event # "<router> INTERFACE <interface> SHORT OUTAGE"; # otherwise generate event # "<router> INTERFACE <interface> DOWN". type=PairWithWindow ptype=RegExp pattern=\s([\w.-]+) \d+: %LINK-3-UPDOWN: Interface ([\w.-]+), changed state to down desc=$1 INTERFACE $2 DOWN action=event %s ptype2=RegExp pattern2=\s$1 \d+: %LINK-3-UPDOWN: Interface $2, changed state to up desc2=%1 INTERFACE %2 SHORT OUTAGE action2=event %s window=15 # If "<router> INTERFACE <interface> DOWN" event is # received, send a notification and wait for # "interface up" event from the same router interface # for the next 24 hours type=Pair ptype=RegExp pattern=^([\w.-]+) INTERFACE ([\w.-]+) DOWN desc=$1 interface $2 is down action=pipe '%t: %s' /bin/mail root@localhost ptype2=RegExp pattern2=\s$1 \d+: %LINK-3-UPDOWN: Interface $2, changed state to up desc2=%1 interface %2 is up action2=pipe '%t: %s' /bin/mail root@localhost window=86400 # If ten "short outage" events have been observed # in the window of 6 hours, send a notification type=SingleWithThreshold ptype=RegExp pattern=^([\w.-]+) INTERFACE ([\w.-]+) SHORT OUTAGE desc=Interface $2 at node $1 is unstable action=pipe '%t: %s' /bin/mail root@localhost window=21600 thresh=10 Example 2 - hierarchically organized rulesets for iptables and sshd events This section presents an example of hierarchically organized rules for processing Linux iptables events from /var/log/messages and SSH login events from /var/log/secure. It is assumed that all rule files reside in the /etc/sec directory and that the rule hierarchy has two levels. 
The file /etc/sec/main.rules contains first-level Jump rules for matching and parsing events from input files and submitting them to proper rulesets for further processing. All other rule files in the /etc/sec directory contain second-level rules which receive their input from first-level Jump rules. Also, the example assumes that SEC is started with the following command line: /usr/bin/sec --conf=/etc/sec/*.rules --intcontexts \ --input=/var/log/messages --input=/var/log/secure # # the content of /etc/sec/main.rules # type=Jump context=[ _FILE_EVENT_/var/log/messages ] ptype=PerlFunc pattern=sub { my(%var); my($line) = $_[0]; \ if ($line !~ /kernel: iptables:/g) { return 0; } \ while ($line =~ /\G\s*([A-Z]+)(?:=(\S*))?/g) { \ $var{$1} = defined($2)?$2:1; \ } return \%var; } varmap=IPTABLES desc=parse iptables events and direct to relevant ruleset cfset=iptables type=Jump context=[ _FILE_EVENT_/var/log/secure ] ptype=RegExp pattern=sshd\[(?<pid>\d+)\]: (?<status>Accepted|Failed) \ (?<authmethod>[\w-]+) for (?<invuser>invalid user )?\ (?<user>[\w-]+) from (?<srcip>[\d.]+) port (?<srcport>\d+) ssh2$ varmap=SSH_LOGIN desc=parse SSH login events and direct to relevant ruleset cfset=ssh-login type=Jump context=[ SSH_EVENT ] ptype=TValue pattern=True desc=direct SSH synthetic events to relevant ruleset cfset=ssh-events # # the content of /etc/sec/iptables.rules # type=Options procallin=no joincfset=iptables type=SingleWithThreshold ptype=Cached pattern=IPTABLES context=IPTABLES :> ( sub { return exists($_[0]->{"SYN"}) && \ exists($_[0]->{"FIN"}) ; } ) \ && !SUPPRESS_IP_$+{SRC} desc=SYN+FIN flood from host $+{SRC} action=pipe '%t: %s' /bin/mail -s 'iptables alert' root@localhost; \ create SUPPRESS_IP_$+{SRC} 3600 window=10 thresh=100 type=SingleWithThreshold ptype=Cached pattern=IPTABLES context=IPTABLES :> ( sub { return exists($_[0]->{"SYN"}) && \ !exists($_[0]->{"ACK"}) ; } ) \ && !SUPPRESS_IP_$+{SRC} desc=SYN flood from host $+{SRC} action=pipe '%t: %s' /bin/mail -s 'iptables alert' root@localhost; \ create SUPPRESS_IP_$+{SRC} 3600 window=10 thresh=100 # # the content of /etc/sec/ssh-login.rules # type=Options procallin=no joincfset=ssh-login type=Single ptype=Cached pattern=SSH_LOGIN context=SSH_LOGIN :> ( sub { return $_[0]->{"status"} eq "Failed" && \ $_[0]->{"srcport"} < 1024 && \ defined($_[0]->{"invuser"}); } ) continue=TakeNext desc=Probe of invalid user $+{user} from privileged port of $+{srcip} action=pipe '%t: %s' /bin/mail -s 'SSH alert' root@localhost type=SingleWithThreshold ptype=Cached pattern=SSH_LOGIN context=SSH_LOGIN :> ( sub { return $_[0]->{"status"} eq "Failed" && \ defined($_[0]->{"invuser"}); } ) desc=Ten login probes for invalid users from $+{srcip} within 60s action=pipe '%t: %s' /bin/mail -s 'SSH alert' root@localhost thresh=10 window=60 type=PairWithWindow ptype=Cached pattern=SSH_LOGIN context=SSH_LOGIN :> ( sub { return $_[0]->{"status"} eq "Failed"; } ) desc=User $+{user} failed to log in from $+{srcip} within 60s action=cevent SSH_EVENT 0 %s ptype2=Cached pattern2=SSH_LOGIN context2=SSH_LOGIN :> \ ( sub { return $_[0]->{"status"} eq "Accepted"; } ) && \ $+{user} %+{user} $+{srcip} %+{srcip} -> \ ( sub { return $_[0] eq $_[1] && $_[2] eq $_[3]; } ) desc2=User $+{user} logged in successfully from $+{srcip} within 60s action2=logonly window=60 # # the content of /etc/sec/ssh-events.rules # type=Options procallin=no joincfset=ssh-events type=SingleWithThreshold ptype=RegExp pattern=User ([\w-]+) failed to log in from [\d.]+ within 60s desc=Ten login failures for 
user $1 within 1h action=pipe '%t: %s' /bin/mail -s 'SSH alert' root@localhost thresh=10 window=3600
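When developing rule files like the ones above, it is convenient to verify that they load without errors before running SEC against real input. Assuming the --testonly option (see SYNOPSIS) makes SEC exit after parsing its configuration, such a check could look as follows:

  /usr/bin/sec --conf=/etc/sec/*.rules --testonly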
ENVIRONMENT
If the SECRC environment variable is set, SEC expects it to contain the name of its resource file. Resource file lines which are empty or which begin with the number sign (#) are ignored (whitespace may precede #). Each remaining line is appended to the argv array of SEC as a *single* element. Also, the lines are appended to argv in the order they appear in the resource file. Therefore, if a SEC command line option takes a value, the option name and the value must be separated either by the equal sign (=) or by a newline (i.e., the value must be given on the following line). Here is a simple resource file example:

  # read events from standard input
  --input=-
  # rules are stored in /etc/sec/test.conf
  --conf
  /etc/sec/test.conf

Note that although SEC rereads its resource file at the reception of the SIGHUP or SIGABRT signal, adding an option that specifies a certain startup procedure (e.g., --pid or --detach) will not produce the desired effect at runtime. Also note that the resource file content is *not* parsed by the shell, therefore shell metacharacters are passed to SEC as-is.
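For example, assuming the resource file above has been saved as /etc/sec/sec.rc (an arbitrary location), SEC could be started as follows, with the lines from the resource file being appended to the argv array as described above:

  SECRC=/etc/sec/sec.rc /usr/bin/sec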
SIGNALS
SIGHUP
  full restart -- SEC will reinterpret its command line and resource file options, reopen its log and input files, close its output files and sockets (these will be reopened on demand), reload its configuration, and drop *all* event correlation state (all event correlation operations will be terminated, all contexts will be deleted, all action list variables will be erased, etc.). With the --childterm option, SEC will also send the SIGTERM signal to its child processes.

SIGABRT
  soft restart -- SEC will reinterpret its command line and resource file options, reopen its log file, and close its output files and sockets (these will be reopened on demand). If the --keepopen option is specified, previously opened input files will remain open across soft restart, otherwise all input files will be reopened. SEC will (re)load configuration from rule files which have been modified (file modification time returned by stat(2) has changed) or created after the previous configuration load. SEC will also terminate event correlation operations started from rule files that have been modified or removed after the previous configuration load. Other operations and previously loaded configuration from unmodified rule files will remain intact. Note that on some systems SIGIOT is used in place of SIGABRT.

SIGUSR1
  detailed information about the current state of SEC (performance and rule matching statistics, running event correlation operations, created contexts, etc.) will be written to the SEC dump file.

SIGUSR2
  SEC will reopen its log file (useful for log file rotation), and also close its output files and sockets which will be reopened on demand.

SIGINT
  SEC will increase its logging level by one; if the current level is 6, the level will be set back to 1. Please note this feature is available only if SEC is running non-interactively (e.g., in daemon mode).

SIGTERM
  SEC will terminate gracefully. With the --childterm option, all SEC child processes will receive SIGTERM.
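For example, assuming SEC was started with the --pid=/var/run/sec.pid option (the path is arbitrary), signals can be delivered with kill(1):

  kill -USR1 "$(cat /var/run/sec.pid)"    # write a state dump
  kill -ABRT "$(cat /var/run/sec.pid)"    # soft restart
  kill -HUP "$(cat /var/run/sec.pid)"     # full restart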
BUGS
With some locale settings, single quotes (') in this man page might be displayed incorrectly. As a workaround, set the LANG environment variable to C when reading this man page (e.g., env LANG=C man sec).
AUTHOR
Risto Vaarandi (ristov at users d0t s0urcef0rge d0t net)
ACKNOWLEDGMENTS
The author is grateful to SEB Estonia for supporting this work. The author also thanks the following people for supplying software patches, documentation fixes, and suggesting new features: Al Sorrell, Brian Mielke, David Lang, James Brown, Jon Frazier, Mark D. Nagel, Peter Eckel, Rick Casey, and William Gertz. Last but not least, the author expresses his profound gratitude to John P. Rouillard for many great ideas and creative discussions that have helped to develop SEC.
SEE ALSO
cron(8), crontab(1), execvp(3), fork(2), mail(1), perl(1), perlmod(1), perlre(1), pipe(2), sh(1), snmptrap(1), stat(2), strftime(3), syslog(3), time(2), umask(2)