I would like to create a role that only has the rights to manage the permissions of data pools, without the person being able to edit the data pool itself.

I have already implemented the same for Spaces, where the role can only distribute permissions but cannot see the content of the packages.

 

Unfortunately, Data Integration does not offer a separate "Manage Permissions" option for a Data Pool; the right to manage permissions is included in the "Edit Data Pool" option, which also allows the user to edit the entire Data Pool. How can we separate the admin right to create Data Pools from the right to manage the permissions of a Data Pool, so that the latter can be granted to another role?

 

I look forward to some support.

 

Best regards,

 

Tim

Hi Tim,

 

If my understanding is correct, you want to give view access to the Data Pool for a certain group or user. If that's the case, I would first make sure the user or user group is assigned the "Analyst" role, since with the "Member" role they won't be able to see Data Integration at all.

Secondly, you can go to Settings -> Permissions, select Data Integration from Services, and enable the "View all data pool" option. This will restrict the user or user group to just viewing the data pools.

If you want to give the same access to a specific data pool, you can go directly to that data pool, select the extraction you want to grant access to, and choose the "View all data pool" option in its permissions.

 


 


Hi Tim,

 

I think this use case is currently not covered by Data Pool permissions.

Managing Data Pool permissions is only possible with the "MANAGE DATA POOL" permission ("Edit Data Pool" does not cover this). But with "Manage Data Pool", the person is able to do everything within the Data Pool.

 

You should create a Feature Request asking Celonis to implement more granular Data Pool permissions.

 

BR

Dennis


Hi @tim.sande,

 

While your use case isn't covered out of the box currently, you could work around this by creating two data pools and sharing the data between them.

 

The data is extracted in pool one, where the transformations also run.

Once everything is in place, the data can be shared between the pools.

 

In the second schema, no extractions or transformations are needed anymore, so anyone with edit access cannot destroy the pipeline (unless new delete statements are added 😉).
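
To make the workaround concrete, here is a minimal sketch of the kind of transformation that would live only in pool one; the table and column names are purely illustrative:

```sql
-- Runs only in pool one (the pipeline pool), never in the shared pool.
-- Remove incomplete rows before the result is shared with pool two.
DELETE FROM "ACTIVITIES"
WHERE "EVENTTIME" IS NULL;

-- Build the final table that pool two consumes via data sharing.
DROP TABLE IF EXISTS "ACTIVITIES_FINAL";
CREATE TABLE "ACTIVITIES_FINAL" AS
SELECT "CASE_ID", "ACTIVITY", "EVENTTIME"
FROM "ACTIVITIES";
```

Since pool two contains no such statements, there is simply nothing destructive left there for an editor to change.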


Thank you for this approach, I will try it.

 

BR

Tim

