Backup Logs Periodically

h1. Rationale and Context

The application XXX is a legacy application that produces several important, distinct stacks of logs. Each log is a positional file, where information is ordered by column, each column having the same size and being separated by a _tabulation_ character.
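As an illustration only (the real column layout and severity levels of XXX's logs are not specified here, so the names below are hypothetical), extracting the main information from such a tab-separated positional file could be sketched as follows:

{code:python}
# Illustration only: the column names, their order and the severity
# levels are assumptions, not the documented layout of XXX's logs.
from typing import Dict, List

COLUMNS = ["timestamp", "severity", "component", "message"]  # assumed layout


def parse_log_line(line: str) -> Dict[str, str]:
    """Split one positional log line on the tabulation character."""
    values = line.rstrip("\n").split("\t")
    # Columns have a fixed width, so trailing padding is stripped.
    return {name: value.strip() for name, value in zip(COLUMNS, values)}


def main_information(lines: List[str]) -> List[Dict[str, str]]:
    """Keep only the records worth archiving (here: warnings and errors)."""
    records = (parse_log_line(line) for line in lines if line.strip())
    return [r for r in records if r.get("severity") in ("WARN", "ERROR")]
{code}

In the actual solution this loading and filtering step is performed by the Talend job described below; the sketch only shows the kind of processing involved.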

Due to recent hard-drive failures, it was decided these logs should be backed up periodically and in a more robust way.
In addition, a new application upgrade is planned, which consists of using a new monitoring console with this application.
Finally, the Information System Manager wants to set up additional (BPEL) processes, which may have to back up these logs in emergency situations.

Without this last point, running the job outside of Petals would have been, _a priori_, the best solution.
But the need to interact with services makes running the job inside Petals the better solution.
Hence, for long-term storage, and in order to make the application events visible in this new monitoring console, it was decided that the main information from these logs would have to be archived every 3 days in a data center.
This data center will be accessed by the monitoring console to display the events the user queries.
The log storage will also act as a safety measure should other hardware failures occur.



h1. Solution

It was decided that the log processing would be handled with Talend Open Studio, while the BPEL processes will run inside Petals ESB.
These requirements naturally led to a native integration between both products.
Besides, it was proposed that the execution schedule be defined in Petals.

Thus, with this solution, the logs are loaded, filtered and written to a database by a Talend job.
This Talend job is exposed as a service in Petals. This way, it can be called by any Petals service, including a BPEL process.
It can also be scheduled to be executed on a regular basis, every 3 days in this case.

The execution schedule is handled by the Petals-SE-Quartz component.
Every 3 days, at the same time of day, it will send a message to the Talend job, so that the log transfer is performed.
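As an illustration, a standard Quartz cron expression firing every 3 days at 02:00 AM (the time of day is an assumption; the actual value belongs in the component configuration described below) could be:

{code}
0 0 2 */3 * ?
{code}

The fields are, from left to right: seconds, minutes, hours, day-of-month, month and day-of-week. Note that {{*/3}} in the day-of-month field means every 3rd day of the month, so the interval restarts at each month boundary.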

\\
{info}
This use case only describes the way the logs are saved periodically.
{info}


h1. Configuration for the Petals-SE-Quartz component