h1. Rationale and Context
The application XXX is a legacy application that produces large and varied sets of logs.
Each log is a positional file, where information is organized in columns: every column has a fixed size, and columns are separated by a _tabulation_ character.
Due to recent hardware failures, it was decided these logs should be backed up periodically and in a more robust way.
In addition, an application upgrade is planned, which consists of using a new monitoring console with this application.
In the longer term, the Information System Manager wants to set up additional (BPEL) processes, which may have to back up these logs in emergency situations.
Hence, for long-term storage, and in order to make the application events visible in this new monitoring console, it was decided that the main information from these logs would be archived every 3 days in a data center. The monitoring console will access this data center to display the events the user queries. The log storage will also act as a safety measure in case other hardware failures occur.
h1. Solution
It was decided that the log processing would be handled with Talend Open Studio, while the BPEL processes would run inside Petals ESB.
The requirements naturally led to a native integration between both products.
Besides, it was proposed that the execution schedule be defined in Petals.
Thus, with this solution, the logs are loaded, filtered and written to a database by a Talend job.
This Talend job is exposed as a service in Petals, using the *Petals-SE-Talend* component. This way, it can be called by any Petals service, including a BPEL process.
It can also be scheduled to run on a regular basis, every 3 days in this case.
The execution schedule is handled by the *Petals-SE-Quartz* component.
Every 3 days, at the same time of day, it sends a message to the Talend job, so that the log transfer is performed.
\\
{info}
This use case only describes the way the logs are saved periodically.
{info}
h1. Configuration for the Petals-SE-Talend component
h2. Creating the job
The job is made up of three Talend components:
* The *tFileList* component gets all the \*.log files from the directory
{noformat}
System.getProperty( "user.home" ) + "/logs"
{noformat}
* For every found log file (_iterate_ connection), the *tFileInputPositional* component loads its content.
* The *tMysqlOutput* component receives the content of each log file and inserts it into the *logs* table of the *logstorage* database.
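For illustration, the three steps above can be sketched in Python, using SQLite as a stand-in for the MySQL *logstorage* database; the column names and the tab-separated layout are assumptions based on the sample log shown further below, not part of the actual Talend job:

```python
import glob
import os
import sqlite3

# Assumed column layout, inferred from the sample log file:
# date, time, level, code, message.
COLUMNS = ("log_date", "log_time", "level", "code", "message")

def parse_line(line):
    """Split one positional log line on tabulation characters."""
    parts = line.rstrip("\n").split("\t")
    # Pad or truncate to the expected number of columns.
    parts = (parts + [""] * len(COLUMNS))[: len(COLUMNS)]
    return tuple(p.strip() for p in parts)

def backup_logs(log_dir, db_path):
    """Load every *.log file from log_dir into a 'logs' table."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS logs "
        "(log_date TEXT, log_time TEXT, level TEXT, code TEXT, message TEXT)"
    )
    # tFileList equivalent: iterate over all *.log files in the directory.
    for path in glob.glob(os.path.join(log_dir, "*.log")):
        # tFileInputPositional equivalent: parse each line into columns.
        with open(path, encoding="utf-8") as f:
            rows = [parse_line(line) for line in f if line.strip()]
        # tMysqlOutput equivalent: insert every parsed row.
        conn.executemany("INSERT INTO logs VALUES (?, ?, ?, ?, ?)", rows)
    conn.commit()
    return conn
```

This is only a minimal sketch of the data flow; the real job relies on the Talend components and their schema configuration shown in the screenshots below.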
\\
Here is what the job looks like.
!BackupLogs.jpg!
\\
Here are the properties of the tFileList component.
!BackupLogs_tFileList.jpg!
\\
Here are the properties of the tFileInputPositional component.
!BackupLogs_tFileInputDelimited.jpg!
\\
Here are the properties of the tMysqlOutput component.
!BackupLogs_tMySqlOutput.jpg!
\\
Finally, here is the schema shared by the last two components.
!BackupLogs_tMySqlOutput_Schema.jpg!
h2. Exporting the job for Petals
Here are the options selected to export the job for Petals (the default ones).
!BackupLogs_export.jpg!
h1. Configuration for the Petals-SE-Quartz component
Exposing the Talend job as a service inside Petals allows it to be called by any client.
This feature enables the use of the Petals-SE-Quartz component.
Every 3 days, this component is in charge of sending a message to trigger the execution of the BackupLogs job.
\\
The job will be called every three days, let's say at noon. This corresponds to the following CRON expression.
{noformat}
0 0 12 1/3 * ?
{noformat}
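As a reminder, a Quartz CRON expression has six mandatory fields (seconds, minutes, hours, day-of-month, month, day-of-week), plus an optional year. A minimal sketch mapping the expression above to its fields:

```python
# Quartz cron expressions use six mandatory fields, plus an optional year:
# seconds, minutes, hours, day-of-month, month, day-of-week.
FIELDS = ("seconds", "minutes", "hours", "day_of_month", "month", "day_of_week")

def describe_cron(expression):
    """Map each field of a Quartz cron expression to its name."""
    return dict(zip(FIELDS, expression.split()))

schedule = describe_cron("0 0 12 1/3 * ?")
# "1/3" in day-of-month means: starting on the 1st, every 3 days.
# "?" means "no specific value" for day-of-week (Quartz requires one of
# day-of-month / day-of-week to be "?").
```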
\\
An example of the XML message to send can be obtained with SoapUI.
In this case, no parameter is required.
{code:lang=xml}
<tal:executeJob>
<tal:contexts/>
<tal:in-attachments/>
</tal:executeJob>
{code}
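If this payload has to be built programmatically, a minimal Python sketch with the standard library could look like this; the namespace URI bound to the *tal* prefix is an assumption (taken from the service namespace used in the JBI descriptor below), the real one comes from the WSDL generated for the exported Talend job:

```python
import xml.etree.ElementTree as ET

# Assumed namespace URI for the "tal" prefix; check the generated WSDL.
TAL_NS = "http://petals.ow2.org/talend/"
ET.register_namespace("tal", TAL_NS)

def build_execute_job_message():
    """Build the parameter-less executeJob request shown above."""
    root = ET.Element("{%s}executeJob" % TAL_NS)
    ET.SubElement(root, "{%s}contexts" % TAL_NS)
    ET.SubElement(root, "{%s}in-attachments" % TAL_NS)
    return ET.tostring(root, encoding="unicode")
```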
\\
The configuration below was generated with the Petals Studio for the version 1.1 of the Petals-SE-Quartz component.
{code:lang=xml}
<?xml version="1.0" encoding="UTF-8"?>
<!--
JBI descriptor for the Petals' "petals-se-quartz" component (Quartz).
Originally created for the version 1.1 of the component.
-->
<jbi:jbi version="1.0"
xmlns:generatedNs="http://petals.ow2.org/components/ftp/version-3"
xmlns:jbi="http://java.sun.com/xml/ns/jbi"
xmlns:petalsCDK="http://petals.ow2.org/components/extensions/version-5"
xmlns:quartz="http://petals.ow2.org/components/quartz/version-1"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<!-- Import a Service into Petals or Expose a Petals Service => use a BC. -->
<jbi:services binding-component="false">
<!-- Expose a Petals Service => consumes a Service. -->
<jbi:consumes
interface-name="itfNs:BackupLogsServicePortType"
service-name="srvNs:BackupLogsService"
endpoint-name="BackupLogsEndpoint"
xmlns:srvNs="http://petals.ow2.org/talend/"
xmlns:itfNs="http://petals.ow2.org/talend/">
<!-- CDK specific elements -->
<petalsCDK:timeout>30000</petalsCDK:timeout>
<petalsCDK:mep xsi:nil="true" />
<!-- Component specific elements -->
<quartz:cron-expression>0 0 12 1/3 * ?</quartz:cron-expression>
<quartz:content><![CDATA[<tal:executeJob>
<tal:contexts/>
<tal:in-attachments/>
</tal:executeJob>]]></quartz:content>
</jbi:consumes>
</jbi:services>
</jbi:jbi>
{code}
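To double-check such a descriptor, the Quartz-specific elements can be read back with a small script; this is a hypothetical verification helper, not part of the Petals tooling:

```python
import xml.etree.ElementTree as ET

# Namespace of the Quartz-specific elements in the JBI descriptor above.
QUARTZ_NS = "http://petals.ow2.org/components/quartz/version-1"

def read_quartz_config(xml_text):
    """Extract the cron expression and the message payload
    from a petals-se-quartz JBI descriptor."""
    root = ET.fromstring(xml_text)
    cron = root.find(".//{%s}cron-expression" % QUARTZ_NS)
    content = root.find(".//{%s}content" % QUARTZ_NS)
    return cron.text, content.text
```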
h1. Running the use case
To test this use case, the first valuable thing to do is to change the CRON expression, so that the job is executed every minute instead of every 3 days.
*0 \* \* \* \* ?* should work.
\\
Then, add a log file in the directory the *tFileList* component will list.
As an example, you can use the following log sample. It will be inserted into the database (let the job create the table).
{noformat}
07-07-2007	07:45:53	error	CATALO	An error occurred while taking a command.
03-06-2009 16:42:50 info USERS New user registered.
11-04-2009 06:39:51 error 8E1gMN W4mcL2CIpTLl0cOHumMvIJ8gaF9m0cUWFAyyRBGv3
11-04-2007 07:23:48 warning gaINXp mrEyySARH6Yc3tMWFzlFO2bYUpchwekQI43xD83G4
05-05-2009 13:00:47 info CxoGez N5vkw9F4jczRRfj807ZXEyvi86pAhkASAsb5b2a95
01-03-2009 19:33:30 info aM8JZo lKDMq5GF4syj2NTlSkfAA4kMtR5ASXzLaCNlJDrg6
23-06-2009 20:11:07 warning 3mIzm1 yIKmz0xUYW4058hUe2v0dq8to4JAutbG3ONAohVL7
29-05-2009 10:23:11 info wfyrV2 1erGxKQyDSNffb2wFII43OA1L6rMsdeV28mZKZVK8
17-02-2008 09:41:24 info AsZFLL 45eJQpAYKhxGLiA6KYhgRsfM5Ltu4gbIgFBxyh2P9
09-04-2008 01:52:37 warning UllFc9 J8x1CMJbkSYgx0KBLlMNFtqAPixPVYA4VgYZagqP10
14-12-2008 18:17:25 error GUpq0U CNw6lbzMLcZ5ixugAM7Zmna8AclbXAS6JKvVZ6us11
08-01-2008 05:26:04 info qde3jr O41PMrbdetJfFaYKXkGA4EufaTyjFjl1L6KUFb3H12
24-08-2008 01:57:30 error Guuyun GOdUV6h2uQT6BBiIkXBNweQ37xKqiycxzN3cNJz013
15-02-2009 01:35:17 warning RcK0An 3XfRFHZVxmPWyWYYxwUPEIdS0UfVA4Lohv8mH4sf14
10-10-2009 01:37:54 info gKDXsQ sGp7W1lHIcEMmRVplXubIEhiuPyvqeM0uitH8kFu15
27-12-2007 20:24:55 info HpcAdh rEb8afMwiEATnoiDSjNxeY5flbG2o30AIasBOJCR16
26-09-2008 18:02:58 warning mHulRI G0FiOoaHh54ip5V5SyYacQiKgyckCN0Z1CEdw5Yo17
{noformat}
\\
Deploy the *Petals-SE-Quartz* and *Petals-SE-Talend* components in Petals, as well as the two service-units.
Wait for a minute and check the Petals console and the database.
The console should display information about the request processing in the *Petals-SE-Talend* component.
In the database, the *logs* table should have been created and populated.