You can now configure your pie and donut charts to display more than five slices, providing greater flexibility in data visualization. However, while this feature allows for additional slices, we continue to recommend limiting charts to five slices or fewer for optimal readability. To learn more about configuring pie and donut charts in Studio, see: Charts.
You can now schedule the retrying of your data jobs when configuring the connection to your source systems. Data job tasks within a schedule that fail are retried a number of times based on the policy you define. You can specify a maximum number of attempts for a run and a minimum interval between attempts. For more information about scheduling your data jobs, see: Scheduling the execution of data jobs.
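The retry policy described above can be sketched as a simple loop. This is a hypothetical illustration with made-up names (`run_with_retries`, `max_attempts`, `min_interval_seconds`); the platform applies its retry policy server-side.

```python
import time

def run_with_retries(task, max_attempts=3, min_interval_seconds=60):
    """Retry a failing task, mirroring a schedule retry policy with a
    maximum number of attempts and a minimum interval between attempts.
    (Hypothetical sketch, not the platform's implementation.)"""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == max_attempts:
                raise  # all attempts exhausted: the run is marked as failed
            time.sleep(min_interval_seconds)  # wait before the next attempt
```

For example, a task that fails transiently twice would succeed on the third attempt under a policy of three attempts.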
You can now add recipients to your data job alerts, with the Celonis Platform sending email notifications to them whenever a data job runs and meets your configured conditions. These conditions include when a data job fails, runs successfully, is skipped, or takes an extended period of time. For more information about data job alerts, see Enabling data job alerts.
We've made some changes to how we parse the SQL for object-centric transformation scripts when you publish them. We'll now validate more things when you publish, instead of when you run the transformations, which makes errors easier to identify and fix or mitigate. These changes apply to both your custom transformation scripts and our supplied transformation scripts for Celonis catalog processes. A phased rollout of the changes to teams starts from now.

As a result of these changes, when you publish your object-centric data model, you might see new validation errors that you weren't getting before. Here's what you might see, and how to fix or handle it:

We'll now always add parentheses to expressions when we output the transformations. This might mean that an expression now fails to evaluate, or evaluates to a different result. To fix this error, in your custom scripts, follow best practice and include parentheses in SQL expressions where the order of evaluation can be ambiguous.

We'll now validate that each column data type supplied from your source system matches, or can be assigned to, the required data type for the attribute in our underlying database. The data types we use are Boolean, long, float, timestamp, and string. You might see a new issue if a column in your source system data has a data type that our Celonis catalog transformations don't expect. To fix these errors, use the suggestions in Troubleshooting data extraction and pre-processing to account for the unexpected column data types. To handle these errors instead, activate the Skip missing data option for the data connection, as described in Skipping missing data for objects and events, and we'll cast the data types to match the expected data types. Note that it's best to fix the errors, as this option might introduce other unexpected issues. The transformation might still fail to run even with the Skip missing data option enabled if a data type can't be cast to the one required. If that's the case, you'll need to fix the data type.
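As an illustration of the kind of assignability check described above, here is an assumption-laden sketch. The mapping of which source types can be assigned to which required types is invented for illustration; the platform's actual rules are not documented here.

```python
# Hypothetical assignability table: maps a source column type to the set of
# required attribute types it could be assigned to. Invented for illustration.
ASSIGNABLE = {
    "boolean": {"boolean", "string"},
    "long": {"long", "float", "string"},   # e.g. a long widens safely to float
    "float": {"float", "string"},
    "timestamp": {"timestamp", "string"},
    "string": {"string"},
}

def validate_column(source_type: str, required_type: str) -> bool:
    """Return True if a source column type matches, or can be assigned to,
    the attribute type required by the underlying database."""
    return required_type in ASSIGNABLE.get(source_type, set())
```

Under this sketch, a `long` source column is valid for a `float` attribute, but a `float` source column would raise a validation error for a `long` attribute.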
What's changing? Starting from 27th October 2024, the default timezone of Action Flows scenarios changes from the London timezone (which is UTC+0 only in winter) to Reykjavik, which operates on UTC+0 year-round and is unaffected by Daylight Saving Time.

Why are we making this change? This change makes Action Flows scheduling consistent with other services in the Celonis Platform that also run on UTC+0.

What does it mean for me? On 27th October 2024, the current London time zone will also shift to UTC+0, so you should not expect additional changes. On 30th March 2025, during the next daylight saving time change, there will be no time shifts for your Action Flows, as they will now be consistently scheduled on UTC+0. From now on, audit logs, execution history, and incomplete execution history for Action Flows will be based on UTC+0.
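The difference between the two timezones can be checked with Python's standard `zoneinfo` module (assuming the IANA timezone database is available, e.g. via the `tzdata` package on platforms that lack a system copy):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

summer = datetime(2025, 7, 1, 12, 0)
winter = datetime(2025, 1, 1, 12, 0)

# Europe/London shifts to UTC+1 in summer (BST); Atlantic/Reykjavik stays on UTC+0.
london_summer = summer.replace(tzinfo=ZoneInfo("Europe/London")).utcoffset()
london_winter = winter.replace(tzinfo=ZoneInfo("Europe/London")).utcoffset()
reykjavik_summer = summer.replace(tzinfo=ZoneInfo("Atlantic/Reykjavik")).utcoffset()
reykjavik_winter = winter.replace(tzinfo=ZoneInfo("Atlantic/Reykjavik")).utcoffset()

print(london_summer, london_winter)        # 1:00:00 0:00:00
print(reykjavik_summer, reykjavik_winter)  # 0:00:00 0:00:00
```

This is why a scenario pinned to Reykjavik keeps the same UTC schedule across the March and October clock changes.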
Use AI-powered Process Copilots to interact with your data

Process Copilots are a new AI-enhanced Studio asset that allows you to interact conversationally with your Celonis data. Each Process Copilot is configured with one of your Knowledge Models to help you analyze your data, using predefined prompts or your own questions to generate a response in a variety of formats. Process Copilots are available in Public Preview. If you're interested in getting access, please contact your account rep. Each Process Copilot will only have access to the data you choose from the selected Knowledge Model. You can create multiple Process Copilots to work with different Knowledge Models or to focus on different use cases within your data. Once configured, Process Copilots can be used by any Apps user to answer questions about your data, build custom graphs and tables, or look for improvement opportunities within your data. Users can interact with a Process Copilot through quickstart questions, template prompts, dropdown lists of suggested interactions, or a free text field where they can ask their own questions. Admin users can also create their own KPIs or commonly asked questions that will be pinned to the start screen as a launching point for each new session. For more information, refer to Process Copilots.
Intelligent visibility into your deduction evaluation (Limited Availability)

Our new Deductions Leakage app for Accounts Receivable enables your team to reduce revenue leakage from invalid deductions. The app compares your recoveries and historical write-off decisions against industry benchmarks, and provides monitoring and root cause analysis across three dashboards: The Recovery Monitor Dashboard, which classifies your closed deduction cases to show which resulted in recoveries, write-offs, or credits, and enables root cause analysis of write-offs and invalid deductions across different dimensions. The Open Deductions Dashboard, which provides actionable insights into your open deduction cases, intelligently flagging uncoded, at-risk, and small-value cases to support your operations. The Small Value Deductions Dashboard, which shows you the data-driven cost of deduction evaluation and gives you transparency on labor productivity losses and unnecessary write-offs. The app works on the Celonis catalog Accounts Receivable process for object-centric process mining, with a few custom additions. The app is in Limited Availability - if you are implementing object-centric process mining or plan to, and want to try the app out, talk to your Celonis point of contact. For the app documentation, see Deductions Leakage app - object-centric.
After installing an App from the Celonis Marketplace, such as the Universal Starter Kit, you can now delete the dependency between the package you installed the app in and the Celonis Marketplace itself. This dependency has no active impact on your App or content, so your App still functions as intended. To view and delete your package dependencies in Studio, click Package Settings - Dependencies. For more information about creating content using Studio, see: Studio.
Our prebuilt extraction packages for object-centric process mining are now available to download from the Marketplace. Find them in the new category "Object-centric extractors". We have extractors for SAP ECC, Oracle EBS, and (on request) Oracle Fusion, which is in beta status. You can download the extractions for separate Celonis catalog processes that you’ve enabled, or a joint extractor for all of the processes in our object-centric data model. For the instructions to get started with object-centric process mining using our prebuilt extractions, transformations, and object-centric data model, see Quickstart: Extract and transform your data into objects and events.
You can now manually export your Studio or App Views as a PDF, giving you shareable versions of the data outside of the Celonis Platform. This feature includes the ability to select multiple tabs, include page numbers, and choose the scale and orientation of your PDF. To export your View as a PDF while in view mode (in Studio or Apps), click Share - Export PDF. For more information, see: Exporting Views.
The Customer Consignment Stock app is now generally available in object-centric and case-centric versions. The app automatically surfaces overdue and excess consignment materials that have been sitting in your customers' warehouses for too long, or in quantities that don’t match historic consumption levels. You can use the Action View to proactively manage consignment orders, stock levels, and billing inefficiencies. The app takes into account each individual material movement, flags any quantity that exceeds its maximum threshold or is at risk of expiration, and enables you to take targeted action to reduce stock levels. For the object-centric app documentation, see Customer Consignment Stock app - object-centric, and for the case-centric app documentation, see Customer Consignment Stock app - case-centric.
What's changing? On 27th October 2024, we will be changing the default timezone of Action Flows scenarios from the London timezone (which is UTC+0 only in winter) to Reykjavik, which operates on UTC+0 year-round and is unaffected by Daylight Saving Time.

Why are we making this change? This change will make Action Flows scheduling consistent with other services in the Celonis Platform that also run on UTC+0.

What does it mean for me? On 27th October 2024, the current London time zone will also shift to UTC+0, so you should not expect additional changes. On 30th March 2025, during the next daylight saving time change, there will be no time shifts for your Action Flows, as they will now be consistently scheduled on UTC+0. If you have business-critical processes that require adherence to a specific time zone behavior before 27th October 2024, we ask you to review and make any necessary configuration changes on your end to avoid any unexpected results, and we want to offer you our support in making them. The Celonis Product team is offering support to you and your Value Engineers to ensure a smooth transition. Reach out if you need further clarification or assistance with this update.
You can now create and use enhanced variables in your Studio content. Enhanced variables allow you to centrally create and manage information that is referenced and reused across components and assets in Studio. They act as placeholders for information, either based on dynamically inserted context (such as company names, countries, and sales orders) or on manual input by the app user (such as entering the cost of an item). There are two types of enhanced variables: Enhanced View variables: These are specific to individual Views and can't be reused across Views in the same package. To learn how to create and manage enhanced View variables, see: Creating and managing enhanced View variables. Enhanced Knowledge Model variables: These can be used wherever the Knowledge Model is being used, and as such can be reused across Views, Packages, and Spaces. To learn how to create and manage Knowledge Model variables, see: Creating and managing enhanced Knowledge Model variables. In addition, you can now view and manage your variable state while editing your View. The variable state represents the current value of the variable for the user. Initially, this is the default value, but it may change when the user interacts with the application.

Existing Knowledge Model variables

What were previously referred to as 'Knowledge Model variables' are now known as 'Legacy Knowledge Model variables'. These legacy Knowledge Model variables can only be used when creating legacy Studio Views. You can continue to create and manage your legacy Knowledge Model variables directly in the Knowledge Model. For more information about legacy Knowledge Model variables, see: Variables (legacy views only).
You can now convert one of your custom transformations into a transformation template. Choose a custom transformation from the list in the transformations editor. Either click it to view it and select Transformation actions > Convert to template, or just select Convert to template from the context menu (the three dots) in the listing. The original transformation becomes an instance of the new transformation template. You can rename, edit, and add to the template in the same way as a transformation template you created from scratch. For more on creating and managing transformation templates, see Creating transformation templates.
You no longer have to connect all the object types in a perspective together. Previously, we didn't allow objects in a saved perspective if there wasn't a path of relationships between them and all the other objects, with the exceptions of the CurrencyConversion and QuantityConversion helper objects, and the master data object MaterialMasterPlant. Now, you can have a perspective that includes standalone objects, and distinct groups of objects that are connected to each other but not to other groups. When you save it, we'll give you a warning message to let you know there are object types that are not interconnected, but you can still save and use the perspective. For the instructions to create custom perspectives, see Creating custom perspectives and event logs.

Allowing standalone objects and distinct groups means you can save a partly finished perspective to work on more later. It also means you can include standalone helper objects that are not in the Celonis catalog, such as a factory calendar table, workday or weekday calendar, and alternative quantity or currency conversion tables. You can set any object as the lead object in event logs, including the default event log, and you can include standalone or grouped objects in an extension to a Celonis catalog perspective.

A main reason that we disallowed standalone objects and distinct groups previously was that with a single data pool for objects and events, you could only restrict data access through setting data permissions on a perspective. A disconnected object in the data model was a risk because it would not be subject to the same rules as the connected objects. This is still the case, but now if you require strict control of end users' access to data, you can use multiple data pools for objects and events. Give users access to a data pool where only the permitted data is shared with the object-centric data model.
If you prefer to use a single data pool, and you are setting data permissions for a perspective that contains any standalone objects or distinct groups of objects, check your data permissions carefully. For more on this, see Data permissions for object-centric process mining.
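The connectivity warning described above amounts to finding connected components among a perspective's object types. Here is an illustrative sketch using standard graph traversal (not the platform's actual implementation; the object and relationship names below are made up):

```python
from collections import defaultdict

def disconnected_groups(object_types, relationships):
    """Split a perspective's object types into connected components.
    More than one component means the perspective contains standalone
    objects or distinct groups, which now triggers only a warning."""
    adjacency = defaultdict(set)
    for a, b in relationships:
        adjacency[a].add(b)
        adjacency[b].add(a)
    seen, groups = set(), []
    for start in object_types:
        if start in seen:
            continue
        # Depth-first traversal to collect everything reachable from 'start'.
        stack, component = [start], set()
        while stack:
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(adjacency[node] - component)
        seen |= component
        groups.append(component)
    return groups

# Example: SalesOrder and Delivery are connected; FactoryCalendar stands alone,
# so the perspective would save with a warning about two disconnected groups.
groups = disconnected_groups(
    ["SalesOrder", "Delivery", "FactoryCalendar"],
    [("SalesOrder", "Delivery")],
)
print(len(groups))  # 2
```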
You can now copy the conformance rate from Process Adherence Manager into your Studio View or use it as a KPI. This allows for better monitoring and reporting of your data. See Exploring deviations.
The slider controls in Process Explorer use two multi-sliders to explore the events of individual objects and connections within your process. In these multi-sliders, each event log appears as a different slider and is color coded to match the objects in the process graph. Moving a slider up adds the next most common event or connection for that specific event log, while sliding down removes the least frequent event or connection currently displayed for that event log. For example, you can adjust the slider for one event log upwards to add the next most frequent event for that event log to the graph while the remaining event logs are unchanged. However, if the event being added does not connect to the events currently displayed for this event log, additional events will be added to connect it. If a slider has been used to explore the current process, the colored square corresponding to that event log is highlighted with a blue border, even if the multi-slider is collapsed (see the middle event log in the screenshot below). This indicates that events or connections have been added to the process flow currently shown.
Within the PQL editor, the Knowledge sidebar now shows the names of Records and Attributes as maintained in the Knowledge Model instead of their technical names in the Data Model. As a result, users can now centrally manage and maintain the display names in the "Data" section of the Knowledge sidebar by editing the names of Records and Attributes. For more information about creating and managing Knowledge Models, see: Knowledge Models.
Since August, you've been able to choose when to apply a new Celonis catalog update in your team (see August 2024 Release Notes). We've now added an Auto update toggle to the Objects and Events environment so you can choose to automatically upgrade your installed Celonis catalog object types, event types, relationships, transformations, and perspectives whenever a new version is available. Be aware that updates can include Celonis catalog items that you're already using in your object-centric data model, which might get behavior changes or new attributes. Automatic updates are off by default when you enable your first Celonis catalog process. If you decide to enable automatic updates, they'll apply to all Celonis catalog processes. We'll update all your Celonis catalog items whenever a new version is available, and we won't show notification messages about new versions. You can access the Release Notes from the Objects and Events catalog page to see what's changed in each Celonis catalog version. For more on managing Celonis catalog updates, see Updating Celonis object types and event types.
If you’ve enabled object-centric process mining for all the data pools in your Celonis team (see Multiple object-centric data models), the object-centric data and data models in a data pool don’t interact with the case-centric data and data models. We verify when you load a data model that it’s either all object-centric or all case-centric, and we’ve added a warning message that you’ll see if we think the load request is mixing case-centric and object-centric configurations. You shouldn’t normally see this warning when you manage your object-centric data models through the Objects and Events user interface as expected.
When you select the OAuth 2 (JWT Bearer with Private Key) authentication method while configuring your Extractor Builder connection, you can now select from the following signature algorithms: HS256, HS384, HS512, PS256, PS384, PS512, RS256, RS384, and RS512. For more information about authentication methods when connecting to your source systems, see: Extractor builder authentication methods.
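To illustrate what these algorithm names mean, here is a minimal, dependency-free sketch of signing a JWT with HS256 (HMAC-SHA256). The RS* and PS* families replace the HMAC step with RSA signatures over the same signing input, and the trailing number (256/384/512) selects the SHA-2 hash width. The claim values below are placeholders, not the extractor's actual token format.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url encoding without padding, as required by the JWT spec."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt_hs256(claims: dict, secret: bytes) -> str:
    """Build a JWT signed with HS256: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    signature = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{signature}"

token = sign_jwt_hs256({"iss": "my-client-id"}, b"shared-secret")
print(token.count("."))  # 2: header.payload.signature
```

In practice you would use a maintained library rather than hand-rolling this; the sketch only shows the structure the algorithm names refer to.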
When you publish transformations for object-centric process mining, we create complete transformations, using your scripts as the core of them, to populate the tables in the Celonis database of objects and events. Previously, we set the data type for any attribute column containing a string to VARCHAR(255). Now, we can set the string length for each attribute column according to what’s in the source data that the transformation is handling - either the specified length for the column, or the database default length for VARCHAR columns. If you set a VARCHAR length in your original transformation script, for example using the CAST or LTRIM functions, we'll continue to use that. Optimizing the VARCHAR length means your transformations won’t fail due to overlength strings, and optimizes the performance of the Celonis database, which is very responsive to changes in string length. However, you should be aware that if the string columns in your source database are always or often set to a length greater than 255 characters, the performance of the Celonis database could potentially deteriorate rather than improve. This feature is currently in Limited Availability and will be released to all teams over the next few weeks. You’ll see the changes take effect in the generated transformations after your next publish in Objects and Events.
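The sizing rule described above might be sketched like this. This is a hypothetical illustration; the function name and the default length constant are assumptions, and the real default is database-specific.

```python
# Placeholder for the database default VARCHAR length; the actual value
# depends on the underlying database configuration.
DEFAULT_VARCHAR_LENGTH = 255

def varchar_type(declared_length=None):
    """Return the VARCHAR type for an attribute column: use the source
    column's declared length when there is one, else a database default."""
    return f"VARCHAR({declared_length or DEFAULT_VARCHAR_LENGTH})"

print(varchar_type(1000))  # VARCHAR(1000) - sized from the source column
print(varchar_type())      # VARCHAR(255)  - falls back to the default
```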
When you create event logs in the object-centric perspective editor, you can now break down event types into subsets using their attributes, meaning that you can give your analysis an extra level of granularity without creating multiple similar events. For example, you can select the delivery method attribute to create subtypes of the ReleaseDelivery event, so that you can analyze each specific delivery method separately. If you later add a new delivery method as an attribute value, you won't need to set up a new event for it - it'll just appear as a distinct path.

You can select any number of attributes for any number of events in the event log, so you can create subsets of events based on combinations of attributes. We'll query the attributes in the order you place the events in the event log, followed by the order you place the attributes in the event type definition. If you need to change the ordering for different use cases, you can do that in the PQL query in Knowledge Models that use the event log.

You can break down an existing custom event log by attributes, and still continue to use the event log as you do now. We'll create a new activity column for events and attributes ending in _ActivityDetails, and keep the existing column ending in _Activity alongside it for events only.

With the new capability to add custom attributes to event types from the Celonis catalog (see Extend Celonis event types with attributes and relationships), you can use this feature with Celonis event types as well. Add the custom attributes to the Celonis event types, and create or edit custom event logs using them. Applications that don't know about the custom attributes will ignore them, so they won't affect your existing setup. For the instructions to model events, see Modeling objects and events.
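One way to picture the breakdown: the activity name in the new column combines the event type with the selected attribute values. The function below is an illustrative sketch only; the exact format of the generated _ActivityDetails column may differ, and the separator and names are assumptions.

```python
def activity_details(event_type, event, breakdown_attributes):
    """Compose a breakdown activity name from an event type and the values
    of the selected attributes, in the spirit of the _ActivityDetails column.
    (Illustrative sketch; not the generated column's exact format.)"""
    values = [str(event[attr]) for attr in breakdown_attributes if attr in event]
    return " - ".join([event_type] + values)

# A ReleaseDelivery event broken down by its delivery method attribute:
event = {"DeliveryMethod": "Express"}
print(activity_details("ReleaseDelivery", event, ["DeliveryMethod"]))
# ReleaseDelivery - Express
```

A new delivery method value would simply yield a new activity name, which is why it appears as a distinct path without any extra setup.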