I want to check how we can extract the log file for the loads we have run in data jobs. I have run a full load and I want to compare, between the last load and the latest load, the number of rows extracted per table and the time taken for each table.
I know we have the documentation "https://docs.celonis.com/en/viewing-data-job-execution-history.html", but that only explains how to view the details.
Hi Siva Bahadur,
Is this the way you want to extract the log file of loads ???
Hi Akshay,
Let's say I ran the extraction and I see the details below for each table. Instead of checking them individually, I want to see them as one extracted report, like the function we have for the Audit Log.
Hi Siva bahadur,
So from the log you only need to retrieve data regarding time taken and total records for each table and combine all together as a single log file. So that for the next extraction run you can do the same and compare both. Right?
Correct
Hi Siva Bahadur,
It is possible to extract the logs the way you wanted. Above is a preview of the log data extracted for a single extraction job, which I ran a couple of minutes ago using Python and the ML Workbench. The tricky part lies in the logic of the Python code and in how the API URL for retrieving this data has to be built (API calls are needed in the code). Below is the code for retrieving the data and saving it as a text file in the ML Workbench (you can modify it further according to your preferences).
code:
# packages required
from pycelonis import get_celonis
import os

# initializations
base_url = "Your EMS url"
celonis = get_celonis(url=base_url, key_type='APP_KEY', permissions=False)
data_pool = celonis.pools.find("your extraction data pool name")
source_directory = "data"  # directory for saving the output text files
filename = os.path.join(source_directory, "combined_logs.txt")
result_data = []

# Loop through data jobs, extractions and tables
for data_job in data_pool.data_jobs:
    for extr in data_job.extractions:
        for tb in extr.tables:
            table_name = tb.name
            # You should build the API URL yourself. To see how it is structured,
            # open the logs of one extraction job, right click > Inspect > Network;
            # there you can see the API call and how the URL is designed.
            API_URL = "You should build the API URL here"
            json_response = celonis.api_request(API_URL, method='auto', message=None, timeout='default')
            for log_entry in json_response.get('logMessages', []):
                log_message = log_entry.get('logMessage')
                if log_message is not None and (
                        'Extraction execution job finished' in log_message
                        or f'records for table {table_name}' in log_message):
                    result_data.append(log_message)  # Append log message directly

# Ensure the directory exists before writing
os.makedirs(os.path.dirname(filename), exist_ok=True)

# Write all log messages to the combined file
with open(filename, 'a') as file:
    for log_message in result_data:
        file.write(log_message + '\n')
print(f'Data saved to {filename}')
NB: I am using pycelonis 1.7.2
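For the comparison step, here is one possible sketch of how the combined log files of two runs could be parsed and compared. It assumes the record-count messages contain text like "<count> records for table <NAME>" (the exact wording in your logs may differ, so adjust the regex accordingly); the function names and sample log lines are my own illustration, not part of the Celonis API.

```python
import re

# Assumed log line format: '<count> records for table <NAME>' somewhere
# in the message. Adjust the pattern to the exact wording of your logs.
LOG_PATTERN = re.compile(r'(\d+)\s+records for table\s+(\S+)')

def parse_counts(lines):
    """Return a dict mapping table name -> extracted record count."""
    counts = {}
    for line in lines:
        match = LOG_PATTERN.search(line)
        if match:
            counts[match.group(2)] = int(match.group(1))
    return counts

def compare_runs(previous_lines, latest_lines):
    """Return per-table (previous, latest, delta) row counts for two runs."""
    prev = parse_counts(previous_lines)
    latest = parse_counts(latest_lines)
    report = {}
    for table in sorted(set(prev) | set(latest)):
        p, l = prev.get(table), latest.get(table)
        delta = (l - p) if (p is not None and l is not None) else None
        report[table] = (p, l, delta)
    return report

# Example with made-up log lines from two combined_logs.txt files:
prev_log = ["Extracted 100 records for table VBAK",
            "Extracted 50 records for table VBAP"]
new_log = ["Extracted 120 records for table VBAK",
           "Extracted 50 records for table VBAP"]
print(compare_runs(prev_log, new_log))
```

You could read each run's combined_logs.txt with `open(path).readlines()` and feed the two lists into `compare_runs` to get the per-table difference between the last load and the latest load. Time taken per table could be compared the same way with a second regex matched against the duration messages.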
Thanks,
Akshay
Thank you Akshay,
Let me try it and come back. Anyway, I appreciate your support and effort in finding the solution. Cheers