Nuke 4.6 template problem

Viewing 4 posts - 1 through 4 (of 4 total)
  • 20th June 2007 at 10:51 am #14193

    Hello,

    I am working on a Nuke 4.6 template, and I have this problem: I have written a template which works fine if I submit a small packet (like 2 or 4 frames), but if I submit a bigger packet (like 10), I get the following error: job start timeout. How can I solve this problem?

    Thank you

    Cédric

    21st June 2007 at 12:15 pm #14694

    This happens because the job hits the start timeout. Small packets render fine because the timeout doesn’t trigger before rendering completes, so Muster still reports the render as successful, but unfortunately the template is configured badly.

    Check the following section of the template you’re writing:

    SET DETECTION_LOGIC CHILDPROC
    SET DETECTION_LOGIC_PROC “PROCESS NAME”

    This is where you tell Muster how to hook into the spawned child process. If the Nuke batch render is a SINGLE process (for example, you spawn MYRENDER.EXE -MYPARAMS and the Task Manager reports a MYRENDER.EXE allocating memory and working), you should modify the lines to:

    SET DETECTION_LOGIC DIRECTPROC
    SET DETECTION_LOGIC_PROC “MYRENDER.EXE”

    If the batch render spawns another process, as happens with Maya where the Render.exe command spawns mayabatch.exe, you should modify it to:

    SET DETECTION_LOGIC CHILDPROC
    SET DETECTION_LOGIC_PROC “Mayabatch.exe”

    Of course, Mayabatch and myrender.exe are only placeholders; you should check in the Task Manager what the processes are actually called.
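    For the Nuke case from the original question, a minimal sketch of the detection section might look like the following. This assumes Nuke’s batch render runs as a single direct process and that the executable shows up in the Task Manager as Nuke4.6.exe; the actual process name is an assumption here and must be verified on your own machine before using it:

    SET DETECTION_LOGIC DIRECTPROC
    SET DETECTION_LOGIC_PROC “Nuke4.6.exe”

    If the Task Manager instead shows a launcher spawning a separate rendering process, use the CHILDPROC form with that child process’s name, as in the Maya example above.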

    Also, if you post your template, I could give it a further look.

    Regards.

    22nd June 2007 at 11:20 am #14692

    Thank you, Leonardo,
    I will try it.
    When my template for Nuke is finished, I will post it.
    Regards

    Cédric

    23rd January 2008 at 7:53 pm #14788

    Just a note that there’s a built-in Nuke template included in the latest releases.

