
NAME
Message::Passing::Manual::Cookbook - Common recipes
Aggregating logs
Logging from an application.
You can use Log::Dispatch directly, or any logging system that can route its output through Log::Dispatch.
    use Log::Dispatch;
    use Log::Dispatch::Message::Passing;
    use Message::Passing::Filter::Encoder::JSON;
    use Message::Passing::Output::ZeroMQ;

    my $log = Log::Dispatch->new;

    $log->add(Log::Dispatch::Message::Passing->new(
        name      => 'myapp_aggregate_log',
        min_level => 'debug',
        output    => Message::Passing::Filter::Encoder::JSON->new(
            output_to => Message::Passing::Output::ZeroMQ->new(
                connect => 'tcp://192.168.0.1:5558',
            ),
        ),
    ));

    $log->warn($_) for qw/ foo bar baz /;
Aggregating this log
It's as simple as using the command line interface:
    message-pass --input ZeroMQ --input_options '{"socket_bind":"tcp://192.168.0.1:5558"}' \
        --output File --output_options '{"filename":"/tmp/mylog"}'
And you've now got a multi-host log aggregation system for your application!
Doing it manually
You don't have to do any of the above if you don't want to - you can easily reuse the ZeroMQ output
yourself:
    my $log = Message::Passing::Output::ZeroMQ->new(
        connect => 'tcp://192.168.0.1:5558',
        linger  => 1, # make sure messages are sent (flushed) before the thread exits
    );

    $log->consume("A log message");
A note about outputs
ZeroMQ is the recommended output for sending messages from within your application. This is because
ZeroMQ uses a separate (POSIX) thread to send messages, meaning that it transports messages
independently of whatever your Perl code is doing.
This is not the case for other message outputs, and therefore they are unlikely to work well, or at all,
unless your application is already asynchronous and uses an AnyEvent-supported event library.
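If your application is already AnyEvent-based, the outline looks like this - a minimal sketch in which
the File output and log path are only placeholders for whichever non-ZeroMQ output you actually use:

    use AnyEvent;
    use Message::Passing::Filter::Encoder::JSON;
    use Message::Passing::Output::File;

    # Illustrative only: encode messages as JSON and hand them to a File output.
    my $log = Message::Passing::Filter::Encoder::JSON->new(
        output_to => Message::Passing::Output::File->new(
            filename => '/tmp/myapp.log',
        ),
    );

    $log->consume({ message => 'something happened' });

    # Your application's AnyEvent event loop must keep running so that
    # event-driven outputs get a chance to do their work; shown here only
    # to keep this sketch alive.
    AnyEvent->condvar->recv;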
A note about ZeroMQ
By default Message::Passing::ZeroMQ will use PUB/SUB sockets for logging, with a finite 'high water
mark'.
This means that if your application logs significantly more data than you can fit down the network, you
will drop logs.
If your application needs to log more than this, you can either increase the high water mark, or disable
it (so that ZeroMQ will buffer an unlimited number of messages at the sending client - potentially using
an unbounded amount of RAM).
The default setting is for the output to buffer up to 10000 messages on the sending side, which should be
enough to absorb short-term peaks, but is low enough to keep the memory used for buffering reasonably
bounded.
Aggregating syslog
Assuming that you've got a regular syslogd already set up and working, you probably want to keep it -
having some log files on the individual hosts can be very useful. We'd also like to avoid the script
having to run as a privileged user (which would be needed to bind the standard syslog port).
Therefore, we'll run a syslog listener on a high port (5140) and get the regular syslogd to ship
messages to it. The listener then forwards the messages from each host on to the central aggregator
(which is set up as above).
On the collector host
    message-pass --input Syslog --output ZeroMQ --output_options '{"connect":"tcp://192.168.0.1:5558"}'
Configuring your syslogd
This should be easy. Here's an example of what to add to rsyslog.conf to get the syslog messages resent:
    *.* @192.168.0.1:5140
Aggregating everything
If you have hosts with both applications and syslog that you want to aggregate, then you can easily do
both at once. This also means that your apps ship logs to a local buffer process rather than directly
across the network - which is more resilient to short network outages.
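For example, here is a sketch of two per-host relay processes using the command line interface; the local
port 5559 and all addresses below are purely illustrative:

    # Relay syslog: point this host's syslogd at localhost:5140 (instead of
    # the central host) and forward everything to the central aggregator.
    message-pass --input Syslog \
        --output ZeroMQ --output_options '{"connect":"tcp://192.168.0.1:5558"}'

    # Relay application logs: the application's ZeroMQ output connects to
    # tcp://127.0.0.1:5559 locally instead of directly to the aggregator.
    message-pass --input ZeroMQ --input_options '{"socket_bind":"tcp://127.0.0.1:5559"}' \
        --output ZeroMQ --output_options '{"connect":"tcp://192.168.0.1:5558"}'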
AUTHOR, COPYRIGHT & LICENSE
See Message::Passing.