Forum Replies Created
25th August 2020 at 3:14 pm #34289
1. You can define multiple template versions from the edit templates dialog. Once you define a second version of a template, you can change the executable of the Maya batch as well as declare different environment variables.
How exactly do you specify a Maya profile folder at the moment?
Using this technique does not require you to swap pools.
2. You can filter the view by pools, but there's no column that summarizes the pools: since a host may belong to multiple pools, such a column could become quite long and useless.
9th July 2020 at 9:38 am #34273
Here is a template modified to enable both cache generation and the recovery option. This will be included in the next service release.
Enjoy
16th May 2020 at 10:02 pm #33469
No, this makes no sense: Muster spawns every process using the CreateProcess() API (assuming we are talking about Windows) or a fork() call on Unix systems. That means it is the exact same way a process is created through a console.
Now, what can change between a console and Muster is the context where Muster is running. If you're running Muster as a system service, you may have environment variables, and environments in general, inherited from the user account you assigned to the service (or, if you run it as the local system account, which is strongly discouraged, an even stranger environment).
So here is your first test: stop the Muster renderclient service from the system services control panel and launch renderclient.exe manually; this will spawn Muster inside a shell. Launch your job and check what happens.
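To compare the two contexts, a small diagnostic like this (plain Python, not part of Muster; the output filename is just an example) dumps the environment a process actually sees. Run it once from your console and once through the service context, then diff the two files:

```python
import os

def dump_environment(path):
    """Write every environment variable the current process sees,
    one NAME=value per line, sorted by name."""
    with open(path, "w") as f:
        for name in sorted(os.environ):
            f.write(f"{name}={os.environ[name]}\n")

if __name__ == "__main__":
    # Diff the console dump against the service dump to spot missing vars.
    dump_environment("env_dump.txt")
```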
Also, if you get back to us with your actual installation configuration, we may better understand this weird issue.
Cheers!
17th April 2020 at 8:51 pm #32089
Do the same thing you are doing in onBuildEnvironment (reading the vars from a file), but do it in onGetApplicationPath and onGetApplicationStartingFolder. There you can switch the executable depending on your environment and job data. Also check onBuildEnvironment; you may need to prepare your environment accordingly as well.
Cheers!
17th April 2020 at 2:32 pm #32075
The base implementation of multi-versioning is very simple. You just define a new version in the template editor dialog, and from that point you get multiple selectors on the clients for the executable path. Our implementation ends there, meaning that Muster just points to a different executable depending on the version you select.
Now, starting from this setup, you can change things depending on your pipeline. The effective executable is stored in the client preferences, but it's always loaded from the template, and nothing prevents you from reading an environment variable from the job (or somewhere else) and passing back a different executable.
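For example (a hypothetical sketch: the callback name is from the template system discussed in this thread, but the variable name and every path are invented):

```python
import os

# Invented mapping from a pipeline-defined env var to Nuke builds.
NUKE_BUILDS = {
    "11.3": "C:/Program Files/Nuke11.3v5/Nuke11.3.exe",
    "12.2": "C:/Program Files/Nuke12.2v3/Nuke12.2.exe",
}

def onGetApplicationPath(job):
    """Ignore the stored preference and pick the build from the environment."""
    version = os.environ.get("PIPELINE_NUKE_VERSION", "12.2")
    return NUKE_BUILDS.get(version, NUKE_BUILDS["12.2"])
```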
If you already use an env var, I assume you launch Nuke through a .bat or script file; in that case, it may be better to just keep launching your script file with a custom template.
If you can explain your implementation in more depth, I can guide you to the best path.
11th February 2020 at 8:57 pm #28966
Sorry, I was not clear. If neither a GPU mask nor the template configuration settings specify a number of GPUs per process, they are allocated automatically by dividing the available GPUs by the number of instances. If you spawn 4 instances, you get 1 GPU per instance.
So the phrase should have been:
– When you want to send light jobs with 4 GPUs, Muster will split them…
– When you want to send medium jobs, set borrow instances to 1 and Muster will set 2 GPUs per instance…
– When you want to send heavy jobs, set borrow instances to 3 and Muster will set 4 GPUs per instance…
11th February 2020 at 8:42 pm #28963
Okay, let's make that simple. Disable the GPU mask and spawn 4 instances.
– When you want to send light jobs, set the number of GPUs to 4; Muster will split them.
– When you want to send medium jobs, set the number of GPUs to 2 and borrow instances to 1. That means the job will be sent to 2 instances only and each instance will borrow (lock) another one, so each rendering instance will grab one additional GPU.
– When you want to send heavy jobs, set the number of GPUs to 4 and borrow instances to 3, and the job will take the entire GPU set.
11th February 2020 at 7:18 pm #28957
A few points to clarify here:
1) Deadline workers are the equivalent of Muster instances. While workers run in separate threads due to their internal design, we run instances under the same process with different connections. In our opinion, this gives better handling of pools with different instances. In the end, both options spawn multiple command lines, so there's no difference in your scenario.
2) To split GPUs in Redshift with multi-instancing, you have two options. The first is to leave the GPU affinity mask unset and set the number of GPUs per instance to 1; that way, Muster will automatically change the command line to use one GPU per instance. Using an instance mask fits better in combination with the borrow instances feature. This is a more complex topic: you can have 4 instances but send a job using only two of them while locking the others. That way you can assign which GPUs instance 1 gets and which instance 2 gets, and lock instances 3 and 4. That said, your setup works fine either way.
About the crash, it may be RAM or GPU overload. If you dig into the borrow instances option, you can send light jobs to 4 instances and heavy jobs to 2 instances with 2 GPUs each.
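The division described above can be sketched as plain arithmetic (an illustrative function, not a Muster API):

```python
def gpus_per_instance(total_gpus, total_instances, borrow=0):
    """How many GPUs each rendering instance grabs when every rendering
    instance borrows (locks) `borrow` additional instances.
    Names are illustrative; this just mirrors the arithmetic in the post."""
    rendering = total_instances // (borrow + 1)  # instances actually rendering
    return total_gpus // rendering

# 4 GPUs on a host with 4 instances:
assert gpus_per_instance(4, 4, borrow=0) == 1  # light jobs
assert gpus_per_instance(4, 4, borrow=1) == 2  # medium jobs
assert gpus_per_instance(4, 4, borrow=3) == 4  # heavy jobs
```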
Hope this helps!
27th January 2020 at 7:59 am #28222
Yes, it seems to work in most circumstances, but as I said it is effectively a bug, and we have already patched it so the packet is sent to port 40000 under all circumstances for the moment. For the next service release, the port will be configurable, because many systems distinguish between ports 7 and 9. What version are you actually on?
We can send you a hotfix that, for the moment, is guaranteed to send the WOL packet to port 40000. In the meantime, we are going to check whether there's any difference when the WOL is sent by rules, but I suspect there's none.
Are you on V9 LTS?
24th January 2020 at 10:28 am #28081
After inspecting things: the manually sent WOL packet goes to port 40000, which is the default WOL port per the specs, while the automated wakeup is sent to a random port. This works in most circumstances, because the WOL packet just needs the right headers regardless of the port it is sent to, but some machines/OSes distinguish between ports 40000, 7, and 9. We are making this configurable for the next release.
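For reference, here is what such a magic packet looks like and how the port becomes a parameter; a plain-Python sketch, not Muster's internal code:

```python
import socket

def build_magic_packet(mac):
    """A WOL magic packet is 6 x 0xFF followed by the target MAC repeated
    16 times; only this payload matters, not the destination port."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac, port=40000, broadcast="255.255.255.255"):
    """Broadcast the packet; port 40000 matches the post above, while some
    hardware insists on the discard (9) or echo (7) ports instead."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))

# Example: send_wol("00:11:22:33:44:55", port=9)
```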