
Product Updates

See what’s new in our product in the updates below.

Object-centric process mining - Changes to SQL validation for transformations (2024-10-28)

We've made some changes to how we parse the SQL for object-centric transformation scripts when you publish them. We'll now validate more when you publish, instead of when you run the transformations, which makes errors easier to identify and fix or mitigate. These changes apply to both your custom transformation scripts and our supplied transformation scripts for Celonis catalog processes. A phased rollout of the changes to teams starts now.

As a result of these changes, when you publish your object-centric data model, you might see new validation errors that you weren't getting before. Here's what you might see, and how to fix or handle it:

We'll now always add parentheses to expressions when we output the transformations. This might mean that an expression now fails to evaluate, or evaluates to a different result. To fix this error, follow best practice in your custom scripts and include parentheses in SQL expressions where the order of evaluation can be ambiguous.

We'll now validate that each column data type supplied from your source system matches, or can be assigned to, the required data type for the attribute in our underlying database. The data types we use are Boolean, long, float, timestamp, and string. You might see a new issue if a column in your source system data has a data type that our Celonis catalog transformations don't expect. To fix these errors, use the suggestions in Troubleshooting data extraction and pre-processing to account for the unexpected column data types. To handle these errors instead, activate the Skip missing data option for the data connection, as described in Skipping missing data for objects and events. We'll then cast the data types to match the expected data types. Note that it's best to fix the errors, as this option might introduce other unexpected issues. The transformation might still fail to run with the Skip missing data option enabled if a data type can't be cast to the one required. If that's the case, you'll need to fix the data type.
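To illustrate both kinds of new validation error, here is a minimal sketch in Python using SQLite as a stand-in SQL engine (not the Celonis platform itself). The first part shows how adding parentheses can change what an expression evaluates to when operator precedence is ambiguous to a reader; the second part is a hypothetical assignability check in the spirit of the new column-type validation. The assignability table is illustrative only, not Celonis's actual casting rules.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# 1. Operator precedence: in SQL, AND binds tighter than OR, so the
#    parentheses the validator adds can change an expression's result.
implicit = con.execute("SELECT 1 = 1 OR 1 = 0 AND 0").fetchone()[0]
explicit = con.execute("SELECT (1 = 1 OR 1 = 0) AND 0").fetchone()[0]
print(implicit, explicit)  # same tokens, different results: 1 vs 0

# 2. Type assignability: a hypothetical check that a source column's type
#    matches, or can be assigned to, the required attribute type.
#    (Illustrative table only; the real rules live in the platform.)
ASSIGNABLE_TO = {
    "long":      {"long"},
    "float":     {"float", "long"},  # a long can safely widen to a float
    "boolean":   {"boolean"},
    "timestamp": {"timestamp"},
    "string":    {"string"},
}

def can_assign(source_type: str, required_type: str) -> bool:
    """Return True if source_type matches or widens to required_type."""
    return source_type in ASSIGNABLE_TO.get(required_type, set())

print(can_assign("long", "float"))    # safe widening
print(can_assign("string", "float"))  # would need an explicit cast
```

Running a check like this against your source columns before publishing is one way to anticipate the new validation errors rather than discovering them at publish time.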

Enhanced variables and updates to existing Knowledge Model variables (2024-10-10)

You can now create and use enhanced variables in your Studio content. Enhanced variables allow you to centrally create and manage information that is referenced and reused across components and assets in Studio. They act as placeholders for information, either based on dynamically inserted context (such as company names, countries, and sales orders) or on manual input by the app user (such as entering the cost of an item).

There are two types of enhanced variables:

Enhanced View variables: These are specific to individual Views and can't be reused across Views in the same package. To learn how to create and manage enhanced View variables, see Creating and managing enhanced View variables.

Enhanced Knowledge Model variables: These can be used wherever the Knowledge Model is used, and as such can be reused across Views, Packages, and Spaces. To learn how to create and manage Knowledge Model variables, see Creating and managing enhanced Knowledge Model variables.

In addition, you can now view and manage your variable state while editing your View. The variable state represents the current value of the variable for the user. Initially, this is the default value, but it may change when the user interacts with the application.

Existing Knowledge Model variables

What were previously referred to as 'Knowledge Model variables' are now known as 'Legacy Knowledge Model variables'. These legacy Knowledge Model variables can only be used when creating legacy Studio Views. You can continue to create and manage your legacy Knowledge Model variables directly in the Knowledge Model. For more information about legacy Knowledge Model variables, see Variables (legacy views only).

Use standalone object types and groups in perspectives (2024-10-07)

You no longer have to connect all the object types in a perspective together. Previously, we didn't allow objects in a saved perspective if there wasn't a path of relationships between them and all the other objects, with the exceptions of the CurrencyConversion and QuantityConversion helper objects, and the master data object MaterialMasterPlant. Now, you can have a perspective that includes standalone objects, and distinct groups of objects that are connected to each other but not to other groups. When you save it, we'll give you a warning message to let you know there are object types that are not interconnected, but you can still save and use the perspective. For the instructions to create custom perspectives, see Creating custom perspectives and event logs.

Allowing standalone objects and distinct groups means you can save a partly finished perspective to work on later. It also means you can include standalone helper objects that are not in the Celonis catalog, such as a factory calendar table, a workday or weekday calendar, and alternative quantity or currency conversion tables. You can set any object as the lead object in event logs, including the default event log, and you can include standalone or grouped objects in an extension to a Celonis catalog perspective.

A main reason that we previously disallowed standalone objects and distinct groups was that with a single data pool for objects and events, you could only restrict data access by setting data permissions on a perspective. A disconnected object in the data model was a risk because it would not be subject to the same rules as the connected objects. This is still the case, but now if you require strict control of end users' access to data, you can use multiple data pools for objects and events. Give users access to a data pool where only the permitted data is shared with the object-centric data model.

If you prefer to use a single data pool, and you are setting data permissions for a perspective that contains any standalone objects or distinct groups of objects, check your data permissions carefully. For more on this, see Data permissions for object-centric process mining.