See what’s new in our product by checking the updates below.
You can now use knowledge model filters in Process Adherence Manager. You can:
- Apply filters defined in your knowledge model as preset filters in Process Adherence Manager.
- Use filters defined in your knowledge model when mining your model.
For more information, see Filtering the process model.
In addition to configuring conditional container layouts using KPI lists, there's also a limited availability release of using buttons to change the active container tab. In this example, three buttons have been configured for Orders, Sales, and Stock. When a button is clicked, such as Stock, the corresponding tab is displayed for the user. For more information about configuring a clickable button to switch container tabs, see: Clickable buttons.
You can now add multiple tabs to containers when creating your Views in Studio, allowing your app users to switch between content dynamically. In this example, the container has three tabs: Orders, Sales, and Stock. When the Orders tab header is clicked, the Orders information is displayed.
You can also combine tab containers with a KPI list to create conditional layouts for your Views. This allows your app users to change the information displayed after clicking a KPI. In this example, a KPI list containing Orders, Sales, and Stock information has been configured. This is then linked to the tab container, allowing app users to click the Stock KPI and see the Stock information from the tab container. For more information about tab containers and conditional layouts, see: Containers / Conditional layouts.
We've improved filter interactions between Process Adherence Manager (PAM) and knowledge models. You can now:
- Create filters in PAM, then save them as named filters in your knowledge model.
- Update filters that were initially created in PAM and then saved to the knowledge model.
For more information, see Filtering the process model.
With a forthcoming release, you'll be able to write or generate SQL queries for JDBC data extractions using the new in-built Extractions Editor and AI Assistant. These enhanced features will allow you to use your source-system-specific SQL functions and JOINs, making it easier for you to extract your data into the Celonis Platform. In addition to generating extraction queries, the AI Assistant will also help validate queries for you. For further information, see: Extractions Editor and AI Assistant overview.
Using the JDBC extractor?
If you're currently connecting to your source system using the JDBC extractor, you must update the extractor to version 3.0.0 in order to use the Extractions Editor and AI Assistant. For more information about installing and updating the JDBC extractor to version 3.0.0, see: Updating the on-premise JDBC extractor.
We've made some changes to improve your experience of using Process Adherence Manager (PAM). Key changes are:
- We've renamed Alignment Explorer to Deviating variants so it's easier to understand the purpose of this functionality.
- We've renamed Breakdown of dimensions to Root cause analysis to align better with wider industry norms.
- When exploring deviations, you can now view the percentage of cases affected as well as the number of cases affected.
- You can now apply filters and perform root cause analyses at deviation detail level, reducing the number of clicks required.
You can now configure your pie and donut charts to display more than five slices, providing greater flexibility in data visualization. However, while this feature allows for additional slices, we continue to recommend limiting charts to five slices or fewer for optimal readability. To learn more about configuring pie and donut charts in Studio, see: Charts.
You can now schedule the retrying of your data jobs when configuring the connection to your source systems. Data job tasks within a schedule that fail are retried a number of times based on the policy you define. You can specify a maximum number of attempts for a run and a minimum interval between attempts. For more information about scheduling your data jobs, see: Scheduling the execution of data jobs.
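As a rough illustration of how such a retry policy behaves, here's a minimal Python sketch. The function name and parameters here are hypothetical, not the Celonis API; the platform applies the equivalent policy for you when you configure the schedule.

```python
import time


def run_with_retries(task, max_attempts=3, min_interval_seconds=60):
    """Illustrative retry policy: a failed task is retried up to
    max_attempts times, waiting at least min_interval_seconds between
    attempts. (Hypothetical sketch, not the Celonis implementation.)"""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == max_attempts:
                raise  # all attempts exhausted; surface the failure
            time.sleep(min_interval_seconds)  # respect the minimum interval
```

With `max_attempts=3`, a task that fails twice and then succeeds completes on the third attempt; a task that fails three times raises its final error.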
You can now add recipients to your data job alerts, with the Celonis Platform sending email notifications to them whenever a data job runs and meets your configured conditions. These conditions include when a data job fails, runs successfully, is skipped, or takes an extended period of time. For more information about data job alerts, see Enabling data job alerts.
We've made some changes to how we parse the SQL for object-centric transformation scripts when you publish them. We now validate more when you publish, instead of when you run the transformations, which makes errors easier to identify and fix or mitigate. These changes apply to both your custom transformation scripts and our supplied transformation scripts for Celonis catalog processes. A phased rollout of the changes to teams starts from now.
As a result of these changes, when you publish your object-centric data model, you might see new validation errors that you weren't getting before. Here's what you might see, and how to fix or handle it:
- We now always add parentheses to expressions when we output the transformations. This might mean that an expression now fails to evaluate, or evaluates to a different result. To fix this error, follow best practice in your custom scripts and include parentheses in SQL expressions where the order can be ambiguous.
- We now validate that each column data type supplied from your source system matches, or can be assigned to, the required data type for the attribute in our underlying database. The data types we use are Boolean, long, float, timestamp, and string. You might see a new issue if a column in your source system data has a data type that our Celonis catalog transformations don't expect. To fix these errors, use the suggestions in Troubleshooting data extraction and pre-processing to account for the unexpected column data types. To handle these errors instead, activate the Skip missing data option for the data connection, as described in Skipping missing data for objects and events, and we'll cast the data types to match the expected ones. Note that it's best to fix the errors, as this option might introduce other unexpected issues. The transformation might still fail to run with Skip missing data enabled if a data type can't be cast to the one required; if that's the case, you'll need to fix the data type.
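To illustrate the kind of check involved, here's a hedged Python sketch of validating whether a source column type can be assigned to a target attribute type. The assignability table below is an assumption for illustration only; the actual casting rules are internal to the Celonis Platform.

```python
# ASSUMED assignability rules for illustration only -- the real rules
# used by the Celonis Platform validation are not documented here.
# Keys are source column types; values are target types the source
# can be assigned (cast) to without loss.
ASSIGNABLE = {
    "boolean": {"boolean", "string"},
    "long": {"long", "float", "string"},
    "float": {"float", "string"},
    "timestamp": {"timestamp", "string"},
    "string": {"string"},
}


def can_assign(source_type: str, target_type: str) -> bool:
    """Return True if a column of source_type matches, or can be
    assigned to, the required target_type."""
    return target_type in ASSIGNABLE.get(source_type, set())
```

Under these assumed rules, a long column could feed a float attribute, but a string column feeding a long attribute would be flagged at publish time rather than failing later at run time.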
What's changing?
Starting from 27th October 2024, the default timezone of Action Flows scenarios changes from the London timezone (which is UTC+0 only in winter) to Reykjavik, which operates on UTC+0 year-round and is unaffected by Daylight Saving Time.
Why are we making this change?
This change makes Action Flows scheduling consistent with other Celonis Platform services that also run on UTC+0.
What does it mean for me?
- On 27th October 2024, the London time zone also shifts to UTC+0, so you should not expect additional changes.
- On 30th March 2025, during the next daylight saving time change, there will be no time shifts for your Action Flows, as they will be consistently scheduled on UTC+0.
- From now on, audit logs, execution history, and incomplete execution history for Action Flows will be based on UTC+0.
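The difference between the two zones is easy to verify with Python's standard `zoneinfo` module: Europe/London observes Daylight Saving Time (UTC+1 in summer), while Atlantic/Reykjavik stays on UTC+0 year-round, which is why schedules only diverge between the zones during summer time.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+, requires the system tz database

# A summer date: London is on BST (UTC+1), Reykjavik stays on UTC+0.
summer = datetime(2025, 7, 1, 12, 0)
assert summer.replace(tzinfo=ZoneInfo("Europe/London")).utcoffset() == timedelta(hours=1)
assert summer.replace(tzinfo=ZoneInfo("Atlantic/Reykjavik")).utcoffset() == timedelta(0)

# A winter date: both zones are on UTC+0, so schedules agree.
winter = datetime(2025, 1, 1, 12, 0)
assert winter.replace(tzinfo=ZoneInfo("Europe/London")).utcoffset() == timedelta(0)
assert winter.replace(tzinfo=ZoneInfo("Atlantic/Reykjavik")).utcoffset() == timedelta(0)
```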
Use AI-powered Process Copilots to interact with your data
Process Copilots are a new AI-enhanced Studio asset that allows you to interact conversationally with your Celonis data. Each Process Copilot is configured with one of your Knowledge Models to help you analyze your data using predefined prompts or by asking your own questions to generate a response in a variety of formats. Process Copilots are available in Public Preview. If you’re interested in getting access, please contact your account rep.
Each Process Copilot will only have access to the data you choose from the selected Knowledge Model. You can create multiple Process Copilots to work with different Knowledge Models or to focus on different use cases within your data.
Once configured, Process Copilots can be used by any Apps user to answer questions about your data, build custom graphs and tables, or look for improvement opportunities within your data. Users can interact with a Process Copilot through quickstart questions, template prompts, dropdown lists of suggested interactions, or a free text field where they can ask their own questions. Admin users can also create their own KPIs or commonly asked questions that will be pinned to the start screen as a launching point for each new session. For more information, refer to Process Copilots.
Intelligent visibility into your deduction evaluation (Limited Availability)
Our new Deductions Leakage app for Accounts Receivable enables your team to reduce revenue leakage from invalid deductions. The app compares your recoveries and historical write-off decisions against industry benchmarks, and provides monitoring and root cause analysis across three dashboards:
- The Recovery Monitor Dashboard, which classifies your closed deduction cases to show which resulted in recoveries, write-offs, or credits, and enables root cause analysis of write-offs and invalid deductions across different dimensions.
- The Open Deductions Dashboard, which provides actionable insights into your open deduction cases, intelligently flagging uncoded, at-risk, and small-value cases to support your operations.
- The Small Value Deductions Dashboard, which shows you the data-driven cost of deduction evaluation and gives you transparency on labor productivity losses and unnecessary write-offs.
The app works on the Celonis catalog Accounts Receivable process for object-centric process mining, with a few custom additions. The app is in Limited Availability - if you are implementing object-centric process mining or plan to, and want to try the app out, talk to your Celonis point of contact. For the app documentation, see Deductions Leakage app - object-centric.
After installing an App from the Celonis Marketplace, such as the Universal Starter Kit, you can now delete the dependency between the package you installed the App in and the Celonis Marketplace itself. This dependency has no active impact on your App or content, so your App still functions as intended. To view and delete your package dependencies in Studio, click Package Settings > Dependencies. For more information about creating content using Studio, see: Studio.
Our prebuilt extraction packages for object-centric process mining are now available to download from the Marketplace. Find them in the new category "Object-centric extractors". We have extractors for SAP ECC, Oracle EBS, and (on request) Oracle Fusion, which is in beta status. You can download the extractions for separate Celonis catalog processes that you’ve enabled, or a joint extractor for all of the processes in our object-centric data model. For the instructions to get started with object-centric process mining using our prebuilt extractions, transformations, and object-centric data model, see Quickstart: Extract and transform your data into objects and events.
You can now manually export your Studio or App Views as a PDF, giving you shareable versions of the data outside of the Celonis Platform. This feature includes the ability to select multiple tabs, include page numbers, and choose the scale and orientation of your PDF. To export your View as a PDF while in view mode (in Studio or Apps), click Share > Export PDF. For more information, see: Exporting Views.
The Customer Consignment Stock app is now generally available in object-centric and case-centric versions. The app automatically surfaces overdue and excess consignment materials that have been sitting in your customers' warehouses for too long, or in quantities that don’t match historic consumption levels. You can use the Action View to proactively manage consignment orders, stock levels, and billing inefficiencies. The app takes into account each individual material movement, flags any quantity that exceeds its maximum threshold or is at risk of expiration, and enables you to take targeted action to reduce stock levels. For the object-centric app documentation, see Customer Consignment Stock app - object-centric, and for the case-centric app documentation, see Customer Consignment Stock app - case-centric.
What's changing
On 27th October 2024, we will change the default timezone of Action Flows scenarios from the London timezone (which is UTC+0 only in winter) to Reykjavik, which operates on UTC+0 year-round and is unaffected by Daylight Saving Time.
Why are we making this change
This change makes Action Flows scheduling consistent with other Celonis Platform services that also run on UTC+0.
What does it mean for me?
- On 27th October 2024, the London time zone also shifts to UTC+0, so you should not expect additional changes.
- On 30th March 2025, during the next daylight saving time change, there will be no time shifts for your Action Flows, as they will be consistently scheduled on UTC+0.
- If you have business-critical processes that require adherence to specific time zone behavior before 27th October 2024, we ask you to review and make any necessary configuration changes on your end to avoid unexpected results. The Celonis Product team is offering support to you and your Value Engineers to ensure a smooth transition. Reach out if you need further clarification or assistance with this update.
You can now create and use enhanced variables in your Studio content. Enhanced variables allow you to centrally create and manage information that is referenced and reused across components and assets in Studio. They act as placeholders for information, either based on dynamically inserted context (such as company names, countries, and sales orders) or on manual input by the app user (such as entering the cost of an item).
There are two types of enhanced variables:
- Enhanced View variables: These are specific to individual Views and can't be reused across Views in the same package. To learn how to create and manage enhanced View variables, see: Creating and managing enhanced View variables.
- Enhanced Knowledge Model variables: These can be used wherever the Knowledge Model is being used, and as such can be reused across Views, Packages, and Spaces. To learn how to create and manage Knowledge Model variables, see: Creating and managing enhanced Knowledge Model variables.
In addition, you can now view and manage your variable state while editing your View. The variable state represents the current value of the variable for the user. Initially, this is the default value, but it may change when the user interacts with the application.
Existing Knowledge Model variables
What were previously referred to as 'Knowledge Model variables' are now known as 'Legacy Knowledge Model variables'. These legacy Knowledge Model variables can only be used when creating legacy Studio Views. You can continue to create and manage your legacy Knowledge Model variables directly in the Knowledge Model. For more information about legacy Knowledge Model variables, see: Variables (legacy views only).
You can now convert one of your custom transformations into a transformation template. Choose a custom transformation from the list in the transformations editor. Either click it to view it and select Transformation actions > Convert to template, or just select Convert to template from the context menu (the three dots) in the listing. The original transformation becomes an instance of the new transformation template. You can rename, edit, and add to the template in the same way as a transformation template you created from scratch. For more on creating and managing transformation templates, see Creating transformation templates.
You no longer have to connect all the object types in a perspective together. Previously, we didn't allow objects in a saved perspective if there wasn't a path of relationships between them and all the other objects, with the exceptions of the CurrencyConversion and QuantityConversion helper objects, and the master data object MaterialMasterPlant. Now, you can have a perspective that includes standalone objects, and distinct groups of objects that are connected to each other but not to other groups. When you save it, we'll give you a warning message to let you know there are object types that are not interconnected, but you can still save and use the perspective. For the instructions to create custom perspectives, see Creating custom perspectives and event logs.
Allowing standalone objects and distinct groups means you can save a partly finished perspective to work on more later. It also means you can include standalone helper objects that are not in the Celonis catalog, such as a factory calendar table, a workday or weekday calendar, and alternative quantity or currency conversion tables. You can set any object as the lead object in event logs, including the default event log, and you can include standalone or grouped objects in an extension to a Celonis catalog perspective.
A main reason we previously disallowed standalone objects and distinct groups was that with a single data pool for objects and events, you could only restrict data access by setting data permissions on a perspective. A disconnected object in the data model was a risk because it would not be subject to the same rules as the connected objects. This is still the case, but now if you require strict control of end users' access to data, you can use multiple data pools for objects and events. Give users access to a data pool where only the permitted data is shared with the object-centric data model. If you prefer to use a single data pool, and you are setting data permissions for a perspective that contains any standalone objects or distinct groups of objects, check your data permissions carefully. For more on this, see Data permissions for object-centric process mining.