Disclaimer: I do not encourage anyone to use this feature. If you do use it, please be careful when manipulating data with it.

With the release of the PEAP version for 10.0.25, Microsoft reintroduced the ability to execute jobs without taking downtime for a deployment.
While I wouldn't recommend that anyone use this, here is a small overview of the current state of the feature.

Enable the flighting

The feature ships behind a flight which we need to enable: AppConsistencyCustomScriptFlight
The easiest way is to use d365fo.tools to enable the flight: d365fo.tools/Enable-D365Flight.md at development · d365collaborative/d365fo.tools · GitHub
Remember to restart IIS after enabling it.
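
Assuming you have d365fo.tools installed on the environment, the whole step might look like the sketch below. The cmdlet name comes from the linked documentation; double-check parameters against the current module version before running anything:

```powershell
# Install d365fo.tools if it is not already present (one-time setup)
Install-Module -Name d365fo.tools -Scope CurrentUser

# Enable the flight that unlocks the custom scripts feature
Enable-D365Flight -FlightName "AppConsistencyCustomScriptFlight"

# Restart IIS so the flight takes effect
iisreset
```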

When everything has succeeded, you should see a new form under System administration – Periodic tasks – Database – Custom scripts:

Prepare the job and upload the deployable package

The job is just a runnable class, which needs to be the only element in the containing model. Here you can have a look at the sample code:
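
A minimal runnable class along these lines could look like the following sketch. The class name, table, account number, and field are illustrative stand-ins, not the exact code from the original screenshot:

```xpp
internal final class MyCustomScript
{
    public static void main(Args _args)
    {
        CustTable custTable;

        ttsbegin;

        // Select a single record for update and change one field
        select forupdate firstonly custTable
            where custTable.AccountNum == 'US-001';

        if (custTable)
        {
            custTable.CreditMax = 1000;
            custTable.update();
            info(strFmt("Updated credit limit for %1", custTable.AccountNum));
        }

        ttscommit;
    }
}
```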

Nothing fancy there, just a simple update. I observed that the SalesTable, or any other table that uses caching, cannot be updated right now.
The next step is to create a deployable package and upload it using the upload button in the Custom scripts form, giving the script a meaningful name:

Approve, test and run the script

Now we need to approve and test the script. These steps can't be executed by the same user who uploaded the script, so you need a co-conspirator for your update job from hell.
In the details form you can execute the desired tasks:

When you test the script, the log in the form is populated. If the script fails, it remains in a failed state and cannot be run again.

The sample script ran through without any issues, so we are good to execute it. When the script has finished, we can check the updated record in the table browser:
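
If you prefer opening the table browser directly, it can typically be reached via a URL of this shape (an assumption based on the standard SysTableBrowser menu item; replace the host, table, and company with your own values):

```
https://<environment>.cloudax.dynamics.com/?mi=SysTableBrowser&TableName=CustTable&cmp=USMF
```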

So basically it's not that much of an effort. But be aware that you really need to know which data you're updating, because you could easily destroy the production database.