- This topic has 1 reply, 2 voices, and was last updated 2 weeks, 2 days ago.
September 9, 2020 at 3:37 am · #255437 · Participant · Topics: 5 · Replies: 4 · Points: 32 · Rank: Member
I have the following design question and I would like to get some input.
We have a long, complex data-handling process at our company.
The process cannot run as a single sequence in one go.
There are different steps spread between 8 pm and 8 am the next day.
Basically, at every execution it only needs to run one of the steps, but each step relies on the successful execution of the previous step.
To briefly describe the process:
The process starts at 8 pm every day, waiting for some files to arrive on an FTP server.
So step 1 needs to be executed at 8 pm and just checks whether the file is there; if not, it sends a notification.
Step 2 needs to be executed between 9 pm and 9:30 pm because it relies on the data from the previous step plus some additional data that arrives in this time frame.
Then there are 5–6 more steps with similar logic until 7 am the next morning, each with its own time frame.
My goal is to use just one script and one scheduled task to handle the process.
At the moment, the way I've addressed this is by splitting the script into different steps, each with its own "ExecutionRunTimeFrame".
So I create a condition: if the script is executed between 8:00 and 8:30, it can only run the first step of the script.
With just one scheduled task and multiple time triggers, the script runs only the step whose ExecutionRunTimeFrame matches the time of execution.
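A minimal sketch of that time-window dispatch (the `Invoke-Step1` / `Invoke-Step2` functions and the exact time windows are hypothetical placeholders for the real step logic):

```powershell
# Single script, single scheduled task with multiple time triggers.
# Pick the step to run based on the current time of day.
$now = (Get-Date).TimeOfDay

if     ($now -ge [timespan]'20:00' -and $now -lt [timespan]'20:30') { Invoke-Step1 }
elseif ($now -ge [timespan]'21:00' -and $now -lt [timespan]'21:30') { Invoke-Step2 }
# ... one branch per step and time frame ...
else   { Write-Warning "No step defined for $now" }
```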
The solution works but I am wondering if someone has a different view of how this can be handled.
Thanks for your feedback,
September 9, 2020 at 9:44 am · #255479 · Participant · Topics: 0 · Replies: 81 · Points: 362 · Rank: Contributor
I would have multiple scripts. I would have one script that contains all the functions, which you can easily dot-source from another script. Then I would have one "processing" script per scheduled time (assuming each scheduled time actually does something different). I'd have those processing scripts call the necessary functions with the required parameters. If the parameters and values are fixed, you don't need to create parameters for any of your scripts. If the parameters need to vary, then your processing scripts could have parameters that feed into the function calls inside the file.
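A sketch of that layout (the file names, `Test-InboundFile`, and `Send-Notification` are hypothetical; substitute your real functions and paths):

```powershell
# --- StepFunctions.ps1 (shared functions file, dot-sourced by every processing script) ---
# function Test-InboundFile { param([string]$Path) Test-Path -Path $Path }

# --- Step1-CheckFtp.ps1 (processing script for the 8 pm schedule) ---
param([string]$WatchPath = '\\server\inbound\data.csv')  # hypothetical path

# Dot-source the shared functions so they're available in this scope
. "$PSScriptRoot\StepFunctions.ps1"

if (-not (Test-InboundFile -Path $WatchPath)) {
    Send-Notification -Message "File not found at $WatchPath"  # hypothetical notify helper
}
```

Each scheduled task then points at its own small processing script, so the task's trigger time and the script's logic stay paired one-to-one.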
The good thing about this method is that if anything needs to change about a specific scheduled script, you can either edit the scheduled task (for the timing) or the one script file (for the data). It will be easy to locate the target file and change it without impacting the other schedules. If you need to change any functionality, you can go to the one functions file and make your changes.
There’s also the path of creating a module in place of the functions file. Then have all of the processing scripts import the module.
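The module variant looks much the same from the processing script's side (file and function names hypothetical):

```powershell
# StepFunctions.psm1 holds the same functions, exported as a module.
# Each processing script imports it instead of dot-sourcing a .ps1 file.
Import-Module "$PSScriptRoot\StepFunctions.psm1"

Test-InboundFile -Path '\\server\inbound\data.csv'   # function exported by the module
```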
Keeping everything in one file may or may not be more convenient; it depends on what you consider convenient. My personal experience is that the larger the script, the harder it is to troubleshoot issues or even make changes. The exception is if you document and comment really well. However, keeping it all in one script requires additional logic to ensure that only the target section of code is executed on each schedule.