
Object-centric process mining - Changes to SQL validation for transformations (2024-10-28)

We've made some changes to how we parse the SQL for object-centric transformation scripts when you publish them. We now validate more when you publish, instead of when you run the transformations, which makes errors easier to identify and fix or mitigate. These changes apply to both your custom transformation scripts and our supplied transformation scripts for Celonis catalog processes. A phased rollout of the changes to teams starts now.

As a result of these changes, when you publish your object-centric data model, you might see new validation errors that you weren't getting before. Here's what you might see, and how to fix or handle it:

  • We'll now always add parentheses to expressions when we output the transformations. This might mean that an expression now fails to evaluate, or evaluates to a different result. To fix this error, follow best practice in your custom scripts and include parentheses in SQL expressions wherever the order of evaluation could be ambiguous.
  • We'll now validate that each column data type supplied from your source system matches, or can be assigned to, the required data type for the attribute in our underlying database. The data types we use are Boolean, long, float, timestamp, and string. You might see a new issue if a column in your source system data has a data type that our Celonis catalog transformations don't expect. To fix these errors, use the suggestions in Troubleshooting data extraction and pre-processing to account for the unexpected column data types. To handle these errors instead, activate the Skip missing data option for the data connection, as described in Skipping missing data for objects and events, and we'll cast the data types to match the expected data types. Note that it's best to fix the errors, as this option might introduce other unexpected issues. The transformation might also fail to run even with the Skip missing data option enabled if a data type can't be cast to the one required. If that's the case, you'll need to fix the data type.
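The parenthesization point above comes down to SQL operator precedence: AND binds more tightly than OR, so an unparenthesized expression can silently mean something different from what was intended. A minimal sketch of the pitfall, using an invented table and SQLite via Python purely for illustration (the actual Celonis SQL engine and your schemas will differ):

```python
import sqlite3

# Hypothetical example table: order status, amount, and region.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (status TEXT, amount INTEGER, region TEXT)")
con.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("open", 50, "EU"), ("closed", 500, "EU"), ("open", 500, "US")],
)

# Ambiguous: AND binds tighter than OR, so this reads as
# status = 'open' OR (status = 'closed' AND amount > 100).
ambiguous = con.execute(
    "SELECT COUNT(*) FROM orders "
    "WHERE status = 'open' OR status = 'closed' AND amount > 100"
).fetchone()[0]

# Explicit: parentheses state the intended grouping.
explicit = con.execute(
    "SELECT COUNT(*) FROM orders "
    "WHERE (status = 'open' OR status = 'closed') AND amount > 100"
).fetchone()[0]

print(ambiguous, explicit)  # 3 2 — the two forms match different rows
```

Because the two forms return different row counts, an engine that re-wraps your expression in parentheses can change its result; writing the parentheses yourself makes the intent unambiguous.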
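The cast behavior described for Skip missing data can be sketched as follows: a source value either assigns cleanly to the required attribute type or the cast fails, and in the failing case the source data itself has to be fixed. This is an illustrative model in Python, not the actual validation code; the five type names come from the list above:

```python
from datetime import datetime

# Illustrative casters for the five attribute types named above:
# Boolean, long, float, timestamp, and string.
CASTERS = {
    "boolean": lambda v: {"true": True, "false": False}[str(v).lower()],
    "long": int,
    "float": float,
    "timestamp": datetime.fromisoformat,
    "string": str,
}

def can_cast(value, required_type):
    """Return True if the source value can be cast to the required type."""
    try:
        CASTERS[required_type](value)
        return True
    except (ValueError, KeyError):
        return False

print(can_cast("42", "long"))        # True  — a numeric string casts to long
print(can_cast("order-42", "long"))  # False — no cast exists; fix the source data
```

In the False case, enabling Skip missing data can't help, which matches the note above: when no valid cast exists, the data type has to be corrected at the source.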


Enhanced variables and updates to existing Knowledge Model variables (2024-10-10)

You can now create and use enhanced variables in your Studio content. Enhanced variables let you centrally create and manage information that is referenced and reused across components and assets in Studio. They act as placeholders for information, either based on dynamically inserted context (such as company names, countries, and sales orders) or on manual input by the app user (such as entering the cost of an item).

There are two types of enhanced variables:

  • Enhanced View variables: These are specific to individual Views and can't be reused across Views in the same package. To learn how to create and manage enhanced View variables, see: Creating and managing enhanced View variables.
  • Enhanced Knowledge Model variables: These can be used wherever the Knowledge Model is being used, and as such can be reused across Views, Packages, and Spaces. To learn how to create and manage Knowledge Model variables, see: Creating and managing enhanced Knowledge Model variables.

In addition, you can now view and manage your variable state while editing your View. The variable state represents the current value of the variable for the user. Initially, this is the default value, but it may change when the user interacts with the application.

Existing Knowledge Model variables

What were previously referred to as 'Knowledge Model variables' are now known as 'Legacy Knowledge Model variables'. These legacy Knowledge Model variables can only be used when creating legacy Studio Views. You can continue to create and manage your legacy Knowledge Model variables directly in the Knowledge Model. For more information about legacy Knowledge Model variables, see: Variables (legacy views only).