Dear Celonis development team,
I tried to connect to the IBC and extract data into my Colaboratory Python environment using PyCelonis.
Normally it works correctly with the script below.
from pycelonis import get_celonis
login = {"celonis_url": my_url, "api_token": my_token}
celonis = get_celonis(**login)
analysis = celonis.analyses.find('id of my analysis')
sheet = analysis.published.sheets.find('id of my sheet')
component = sheet.components.find('id of my component')
df = component.get_data_frame()
But sometimes the get_data_frame method returns an exception.

Exception                                 Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 df = component.get_data_frame()
      2 df

2 frames
/usr/local/lib/python3.6/dist-packages/pycelonis/objects_base.py in get_data_file(self, pql_query, file_path, export_type, variables)
    647
    648         if r['exportStatus'] != 'DONE':
--> 649             raise Exception(f"Export failed. Status: {r} \n\n Query: {pql_query}")
    650         else:
    651             file_type = export_type.lower() if export_type != 'EXCEL' else 'xlsx'

Exception: Export failed. Status: {'id': '08da3644-661e-419f-8a6d-b683b03a2a04', 'exportStatus': 'FAILED', 'created': 1571061484375, 'message': 'The provided query does not contain a valid table statement to be exported.', 'exportType': 'PARQUET'}
Query: [TABLE( VBAP.PSTYV || ' - ' || VBAP.PSTYV_TEXT AS #{VBAP.PSTYV}, VBAP.PSTYV || ' - ' || VBAP.PSTYV_TEXT AS #{VBAP.PSTYV}, KNA1.KUNNR AS #{KNA1.KUNNR}, COUNT_TABLE(VBAP) AS '#SO Item' FORMAT ',f', AVG(CALC_THROUGHPUT(ALL_OCCURRENCE['Process Start'] TO ALL_OCCURRENCE['Process End'], REMAP_TIMESTAMPS("_CEL_O2C_ACTIVITIES".EVENTTIME, DAYS))) AS 'Throughput Time (Days)' FORMAT ',f', SUM(VBAP.NETWR_CONVERTED) AS 'Net Value (JPY)' FORMAT ',f', AVG(PU_AVG (VBAP, CASE WHEN ISNULL("_CEL_O2C_ACTIVITIES".USER_TYPE) = 1 THEN NULL WHEN ("_CEL_O2C_ACTIVITIES".USER_TYPE = 'B') THEN 1.0 ELSE 0.0 END)) AS 'Automation Rate' FORMAT '.2%' ) ORDER BY SUM(VBAP.NETWR_CONVERTED) DESC NOLIMIT;]
I changed the column name from #{VBAP.PSTYV} to simply 'Item category' and then it worked, so there seems to be a defect in parsing the column name. Could you check the program?
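For reference, the same data could probably be pulled with a direct data model query using a plain alias. This is only a minimal sketch, assuming the PQL / PQLColumn helpers and the analysis.datamodel attribute of pycelonis 1.x; the alias names here are illustrative, not the ones from my analysis:

from pycelonis.pql import PQL, PQLColumn

# Build a query with plain aliases instead of the name-mapped #{VBAP.PSTYV}
query = PQL()
query += PQLColumn(query="VBAP.PSTYV || ' - ' || VBAP.PSTYV_TEXT", name="Item category")
query += PQLColumn(query="KNA1.KUNNR", name="Customer")
query += PQLColumn(query="COUNT_TABLE(VBAP)", name="SO Item Count")

# Run the query directly against the data model behind the analysis
datamodel = analysis.datamodel
df = datamodel.get_data_frame(query)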
Best regards,
Kazuhiko
Hi Kazuhiko,
Which version of pycelonis are you using? You can check this with pip list or by running:
from pycelonis import __version__
print(__version__)

Best Regards,
Simon Riezebos
Dear Simon,
My Pycelonis version is 1.1.6.
Best regards,
Kazuhiko
Hi Kazuhiko,
When I try to reproduce this, the #{} is always removed from the column names. In the future it will be possible to choose whether you want to use the raw names or the names with name mapping, which should make your current error impossible.
For now I guess changing the name is the best option.
Best regards,
Simon
Dear Simon,
Thanks for checking. Please let me double-check: is your pycelonis version later than 1.1.6?
Best regards,
Kazuhiko
#{} is not parsed in the current version, 1.1.9.
I misunderstood Simon's comment. I will change the column name in my analysis.
Hi Kazuhiko,
It is parsed in most cases, but in some cases the column data is not fully consistent. When #{} is used and the datamodel has name mapping enabled, it usually works. We are working on a solution that covers 100% of the cases.
Best regards,
Simon
Hi @s.riezebos / @kaztakata ,
Can you please provide your suggestions on the error "UnicodeDecodeError: 'utf-8' codec can't decode byte 0x9e in position 11: invalid start byte", which I am currently facing when trying to execute datamodel.get_data_frame(query)? Is there an option like encoding='utf-8' to overcome this error?
Thank you,
Amruth Muddana
