
I would like to know how the "data_consumption_updated_events" updates work, since for some tables the "_CELONIS_CHANGE_DATE" shows a very old date, while this field should show the current datetime for each table that I want to track.
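For reference, this is roughly the check I'm running (only "_CELONIS_CHANGE_DATE" and the event table name come from what I actually see; treat "TABLE_NAME" as a placeholder if the column is named differently in your schema):

    -- Latest change date recorded per tracked table,
    -- oldest (stale) tables first.
    SELECT
        "TABLE_NAME",
        MAX("_CELONIS_CHANGE_DATE") AS LAST_CHANGE
    FROM "data_consumption_updated_events"
    GROUP BY "TABLE_NAME"
    ORDER BY LAST_CHANGE ASC;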

 

So the data consumption shown in the monitoring for each table is different from the one generated in the custom monitoring pool, and I would like to understand why.

Hi Andres! Enabling the Data Monitoring pool creates a set of data jobs and tables that need to run continuously to update the Data Monitoring apps. I'd first check that the Data Monitoring pool jobs are indeed running and collecting logging data. If the logging appears to be working correctly, then the operational jobs that are failing may somehow not be included in the Data Monitoring transformations. Making sure the data monitoring jobs are actually running is where I would start.


Hi Chris! Actually, they're running well; that was my first starting point. However, it seems that in the custom monitoring the total data consumption is bigger than in the monitoring that comes by default.

 

I'm trying to figure out which tables are being tracked in the custom data consumption that are not in the default monitoring.
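As a sketch of what I'm after (both table names below are hypothetical placeholders, not the real monitoring tables):

    -- Tables that appear in the custom consumption data but not in
    -- the default monitoring. Substitute the actual table names.
    SELECT DISTINCT "TABLE_NAME" FROM "CUSTOM_CONSUMPTION"
    EXCEPT
    SELECT DISTINCT "TABLE_NAME" FROM "DEFAULT_MONITORING";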


There is a Data Pool Parameter for the Data Monitoring data pool that determines how frequently the Data Pipeline Monitoring data jobs run (TIME_SPAN_DATA_PIPELINE). You will want to check the logs in the Data Monitoring pool for the Data Pipeline Monitoring data jobs, specifically for failures and for the last time the schedule ran. Then check the last time the Data Pipeline Monitoring data model was loaded successfully and whether or not any records were loaded.
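For context, pool parameters are referenced inside transformation SQL, so a lookback filter using that parameter would look roughly like this (a sketch only; I'm assuming the parameter holds a number of days, and the actual transformations will differ):

    -- Only consider events inside the configured lookback window.
    SELECT *
    FROM "data_consumption_updated_events"
    WHERE "_CELONIS_CHANGE_DATE" >= CURRENT_DATE - <%=TIME_SPAN_DATA_PIPELINE%>;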


Thanks Chris! I did some research on this parameter and noticed that "data_consumption_updated_events" is a root table from Celonis that doesn't use any of the parameters in the custom monitoring data pool.

 

The TIME_SPAN_DATA_PIPELINE parameter and the "data_consumption_updated_events" table are used to create some transformations, but the data that I want to validate comes directly from "data_consumption_updated_events".
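My working theory for the gap (an assumption on my side, not something Celonis has confirmed) is that the raw event table keeps one row per update, so summing it counts a table's consumption once per update, while the default monitoring presumably shows only the latest figure per table. Roughly, with "TABLE_NAME" and "CONSUMPTION" as placeholder column names:

    -- Summing every event over-counts tables updated repeatedly:
    SELECT SUM("CONSUMPTION") FROM "data_consumption_updated_events";

    -- Keeping only the latest event per table should be closer to
    -- what the default monitoring shows:
    SELECT SUM("CONSUMPTION")
    FROM (
        SELECT "CONSUMPTION",
               ROW_NUMBER() OVER (
                   PARTITION BY "TABLE_NAME"
                   ORDER BY "_CELONIS_CHANGE_DATE" DESC
               ) AS RN
        FROM "data_consumption_updated_events"
    ) LATEST
    WHERE RN = 1;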


Data Consumption Monitoring must be running as well to update those base tables. How does the log look for that data job/data model processing?


It runs hourly.


Hi @andres.naran12,

 

One of our clients also had difficulties with updating the data consumption (both in the front-end monitoring and in the custom monitoring pool), and Celonis told them it was a bug they were working on. It could be that this applies to you too. If you want to be sure, please raise a support ticket to validate this.

 

Best regards,

Jan-peter

