Forum Replies Created
16th May 2020 at 10:02 pm #33469
Hi Sergey,
No, that doesn't make sense. Muster spawns any process using the CreateProcess() API (assuming we are talking about Windows) or a fork() call on Unix systems. That is exactly the same way a process is created through a console.
Now, what can change between a console and Muster is the context where Muster is running. If you're running Muster as a system service, it may inherit environment variables, and the environment in general, from the user account you assigned to the service (or, if you run it as the Local System account, which is strongly discouraged, an even stranger environment).
So this is your first test: stop the Muster renderclient service from the system service control panel and launch renderclient.exe manually; this will spawn Muster inside a shell. Launch your job and check what happens. Also, if you get back to us with your actual installation configuration, we may understand this weird issue better.
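A quick way to compare the two contexts is to dump the environment the spawned process actually sees, once while running as a service and once from the shell, then diff the two files. A minimal sketch (the output path is just an example):

```python
# Dump the environment the spawned process actually sees, so the
# service context can be diffed against the shell context.
import os

with open("C:/temp/muster_env_dump.txt", "w") as f:
    for key in sorted(os.environ):
        f.write(f"{key}={os.environ[key]}\n")
```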
Cheers!
17th April 2020 at 8:51 pm #32089
Do the same thing you are doing in onBuildEnvironment by reading the vars from a file, but do it in onGetApplicationPath and onGetApplicationStartingFolder. There you can switch the executable depending on your environment and job data. Also check onBuildEnvironment; you may need to prepare your environment accordingly too.
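For example, a minimal sketch of the idea; the callback signatures, the job.attributeGetString() call, and the attribute/variable names are assumptions to adapt to your own template:

```python
# Hedged sketch: pick the executable per job inside the template
# callbacks. Signatures and names are assumptions, not Muster docs.
def _read_vars(path):
    # Parse simple KEY=VALUE lines written out by your pipeline.
    vars = {}
    with open(path) as f:
        for line in f:
            if "=" in line:
                key, value = line.strip().split("=", 1)
                vars[key] = value
    return vars

def onGetApplicationPath(job):
    # "pipeline_vars_file" is a hypothetical job attribute holding
    # the path of the vars file your pipeline generates.
    vars = _read_vars(job.attributeGetString("pipeline_vars_file"))
    return vars.get("RENDER_EXE", "C:/Program Files/Nuke12.2v3/Nuke12.2.exe")

def onGetApplicationStartingFolder(job):
    vars = _read_vars(job.attributeGetString("pipeline_vars_file"))
    return vars.get("RENDER_START_DIR", "C:/temp")
```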
Cheers!
17th April 2020 at 2:32 pm #32075
Hi Alex,
The base implementation of multi-versioning is very simple. You just define a new version in the template editor dialog, and from that point you get multiple selectors on the clients for the executable path. Our implementation ends there, meaning that Muster just points to a different executable depending on the version you select.
Now, starting from this setup, you can change things depending on your pipeline. The effective executable is stored in the client preferences, but it's always loaded from the template, and nothing prevents you from reading an environment variable from the job or somewhere else and passing back a different executable.
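For instance, a hedged variant of the previous sketch that reads a plain environment variable instead (NUKE_EXECUTABLE is a made-up name, and the callback signature is an assumption):

```python
import os

def onGetApplicationPath(job):
    # Hypothetical: prefer an executable path exposed through the
    # render client's environment, falling back to a default install.
    return os.environ.get("NUKE_EXECUTABLE",
                          "C:/Program Files/Nuke12.2v3/Nuke12.2.exe")
```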
If you already use an env var, I suspect you launch Nuke through a bat or script file; in that case, it may be better to just keep launching your script file with a custom template.
If you can go deeper in explaining your implementation, I can guide you on the best path.
11th February 2020 at 8:59 pm #28967
No, the minimum GPUs setting is a global filter. If you want to send jobs only to machines that have 4 cards, you set it to 4.
11th February 2020 at 8:57 pm #28966
Sorry, I was not clear. If neither a GPU mask nor the template configuration settings specify a number of GPUs per process, they are allocated automatically by dividing the available GPUs across the instances. If you spawn 4 instances, you get 1 GPU per instance.
So the phrasing should have been: when you want to send light jobs with 4 GPUs, Muster will split them…
When you want to send medium jobs, set borrow instances to 1 and Muster will assign 2 GPUs per instance…
Heavy jobs: set borrow to 3 and Muster will assign 4 GPUs per instance…
11th February 2020 at 8:42 pm #28963
Okay, let's make this simple. Disable the GPU mask and spawn 4 instances.
– When you want to send light jobs, set the number of GPUs to 4, and Muster will split them.
– When you want to send medium jobs, set the number of GPUs to 2 and borrow instances to 1. That means the job will be sent to 2 instances only, and each instance will borrow (lock) another one, so each rendering instance grabs one additional GPU.
– When you want to send heavy jobs, set the number of GPUs to 4 and borrow instances to 3, and the job will take over the entire GPU set (the arithmetic is sketched below).
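To make the arithmetic explicit, a toy sanity check (this assumes GPUs divide evenly across instances, as in the 4-GPU/4-instance case above):

```python
def gpus_per_rendering_instance(total_gpus, total_instances, borrow):
    # Each instance owns total_gpus / total_instances GPUs; a rendering
    # instance also grabs the GPUs of the instances it borrows (locks),
    # so it ends up with (1 + borrow) shares.
    return (total_gpus // total_instances) * (1 + borrow)

assert gpus_per_rendering_instance(4, 4, 0) == 1  # light jobs
assert gpus_per_rendering_instance(4, 4, 1) == 2  # medium jobs
assert gpus_per_rendering_instance(4, 4, 3) == 4  # heavy jobs
```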
11th February 2020 at 7:18 pm #28957
Hi Alex,
A few points to clarify here:
1) Deadline workers are the equivalent of Muster instances. While workers run in separate threads due to their internal design, we run instances under the same process with different connections. In our opinion, this gives better handling of pools with different instances. In the end, both options spawn multiple command lines, so there's no difference in your scenario.
2) To split GPUs in Redshift with multi-instancing, you have two options. The first is to not set the GPU affinity mask and set the number of GPUs per instance to 1; that way, Muster will automatically change the command line to use one GPU per instance. Using an instance mask fits better in combination with the borrow-instances feature. This is a more complex topic: you can have 4 instances but send a job using only two of them and lock the others. That way you can assign which GPU instance 1 gets and which instance 2 gets, and lock instances 3 and 4. That said, your setup works fine either way.
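To illustrate the first option, here is roughly the kind of command line each instance ends up with. This is illustrative only, not Muster's internals; -gpu is Redshift's command-line GPU selector, but check the Redshift docs for your version's exact syntax:

```python
# Illustrative only: with no affinity mask and "GPUs per instance" = 1,
# each of the 4 instances ends up driving a single GPU.
total_gpus, instances = 4, 4
per_instance = total_gpus // instances
for i in range(instances):
    flags = " ".join(f"-gpu {i * per_instance + g}" for g in range(per_instance))
    print(f"instance {i}: redshiftCmdLine scene.rs {flags}")
```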
About the crash, it may be RAM or GPU overhead. If you dig into the borrow-instances option, you can send light jobs to 4 instances and heavy jobs to 2 instances with 2 GPUs each.
Hope this helps!
4th February 2020 at 12:49 pm #28611
Hi Alex,
Your suggestions have been included in version 9.0.14-11541
Cheers
27th January 2020 at 9:30 am #28228
Do not try 11518; it has the same problem. Get in touch with me through a support ticket and I'll send you a link to a hot fix. Please specify whether you're running Windows or Linux.
27th January 2020 at 7:59 am #28222
Yes, it seems to work in most circumstances, but as I said, it is effectively a bug, and for the moment we have already patched it to send to port 40000 under any circumstances. In the next service release the port will be configurable, because many systems distinguish between ports 7 and 9. Which version are you actually on?
We can send you a hot fix that is guaranteed to send the WOL on port 40000. In the meantime, we are going to check whether there's any difference when the WOL is sent by rules, but I suspect there isn't.
Are you on V9 LTS?
25th January 2020 at 3:45 pm #28143
Muster 8 does not support capturing progress in a single-task job. This is supported in Muster 9.
Regards.
24th January 2020 at 10:28 am #28081
After inspecting things: the manually sent WOL packet goes out on port 40000, which is the default WOL port per the specs, while the automated wakeup is sent to a random port. That works in most circumstances, because a WOL packet just needs the right payload regardless of the port it is sent to, but some machines/OSes only accept ports 40000, 7, or 9. We are making this configurable for the next release.
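For reference, a generic sketch of what a WOL magic packet looks like on the wire (not Muster's code; the MAC address below is a placeholder). The NIC only inspects the payload, six 0xFF bytes followed by the MAC repeated 16 times, which is why the port usually doesn't matter:

```python
import socket

def send_wol(mac, port=9, broadcast="255.255.255.255"):
    # Magic packet: 6 x 0xFF, then the target MAC repeated 16 times.
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

send_wol("00:11:22:33:44:55", port=9)  # 7, 9 and 40000 are common choices
```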
Thanks.
17th January 2020 at 12:00 pm #27749
Hi there,
If you see a wakeup in the log, the message was effectively sent. Consider that the wakeup you send manually goes out from Console, not from the Dispatcher, so make sure the Dispatcher's UDP connections can reach your clients.
17th January 2020 at 11:59 am #27748
The complete job start/end frames are inside the job attributes:
job.attributeGetInt("start_frame") …
Right-click a job in Muster Console and select Inspect attributes; you'll get a full list of what can be retrieved.
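For example, inside a template callback ("end_frame" is an assumed name that mirrors "start_frame"; verify the exact keys via Inspect attributes):

```python
# Hedged sketch: read the job's full frame range from its attributes.
start = job.attributeGetInt("start_frame")
end = job.attributeGetInt("end_frame")  # assumed key, check Inspect attributes
print(f"Job spans frames {start}..{end}")
```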
15th November 2019 at 11:18 am #24785
Hi Alex,
We logged your suggestion; we'll see if it can fit into a service release of 9.0.14.
Have a nice day!