Archive for the 'Announcements' Category

Sun, Feb 23rd, 2020
posted by jjburton 11:02 PM

We’ve moved most of our work to the CG Monastery. Sorry for not pointing to there sooner. Been busy 🙂

Tue, Apr 10th, 2018
posted by jjburton 07:04 PM

First new rig in way too long.

This release is about getting our first proxy rig and new character out for the community to use. As you’ll see, this first release has limited geo in what we’ll be calling a proxy rig. The final release version of this little guy will be more like what you’d see here:

This rigging system is what the Morpheus 2 tech has turned into, and we’re excited to push it forward. When it’s a bit further along, we’ll be re-rigging Morpheus 2’s base asset with it and releasing that as well.

We’ll have more on our plans with the alpha 2 release hopefully in a few weeks. We have some gigs that we have to do as well and one of our partners welcomed his first son into the world last month and so he’s a little busy at present.

For now, here’s alpha 1!

{ Get it here. }



  • First release using the new Morpheus Rig System (name subject to change)
  • Unity targeted rig – lower joint count and other considerations
  • Most major rig features in place
    • Space/follow setups with space pivots where sensible
    • Puppet and part object sets
    • FK/IK
    • Multi banking foot with direct controls
    • Direct joint controls
    • Framework for a ton of stuff for the next push
    • Twist segments and handles
    • Look at head setup
  • Proxy geo
  • Proxy shapes for direct controls where possible

Next Sprint

  • UI
    • Animator UI first pass
    • Refine Marking Menu
    • Builder refinement
  • Rig Features
    • Grouping wiring
    • FK/IK switching system
    • Scale enabled on rig where it makes sense
    • Squash Stretch in segments and rig
  • Builder
    • Hand setup helper
    • Better foot shape


Mon, Dec 11th, 2017
posted by jjburton 03:12 PM

It’s been a busy year, though updates on the site don’t reflect that very well. Turns out we were targeted by some not-nice folks who kept our site locked down a lot over the last year and generally wasted a lot of our web guru’s time. We think we have that ironed out, so we should start posting here again with more detailed stuff than what goes on Facebook.

We started out 2017 with the goal of making it a year to get things done: not to over-promise, but to let our work stand for itself. As such, here’s what we have to show for the year (all the external projects are under NDA).

Getting back to it

Josh has gotten back to taking gigs and has been working jobs since the summer; he had a great time doing a facial rigging class with Rigging Dojo and plans to do more with them next year. David and Josh have both continued to work on tools and worked some jobs together.


We set up a Sphinx doc system and have been fleshing it out over the year as we’ve updated tools. We feel this is a great foundation on which to continue to build and provide better support for the large assortment of tools we are continuing to develop.

New tools and bits in 2017

We started this year with a 2.0 rewrite of much of our core code base as well as updating some old tools and doing new ones. We’ve made a lot of progress on this front just this year.

  • Toolbox 2.0 – We redesigned the toolbox to be accessible from a top Maya menu, a marking menu and a UI. There’s a load of functionality to be found and it’s continually updated.
  • Locinator 2.0 – Took a stab at updating this with newly developed tech and expanded features. This is a great tool for animators that need to track different things for short periods of time without finicky constraint setups. It’s one of our more popular tools.
  • cgmSnap 1.0 – We spent a lot of time working out snapping things around while building our rigger and for jobs in general. This is a first attempt at exposing those calls in a more useful format.
  • cgmJointTools 1.0 – Having been huge fans of Comet’s tool for years, there were a few things we wanted to add for our own use, some key features being chain/curve splitting, planar orientation and more.
  • Transform Tools 1.0 – Built from an idea Bokser had to make values more easily set both absolutely and relatively.
  • Set Tools 2.0 – Rewrite of an old tool for working with object sets in Maya to make managing them easier.
  • Marking Menu 2.0 – Taking the ideas from Morpheus 2.0’s marking menu work and expanding them into a unified menu with different modes for rigging, animating and more.
  • cgmDynParentTool 1.0 – This tool allows you to easily set up point, point/orient and orient dynamic groups for rig controls as well as providing the tools to switch modes on the fly when animating. We use this all the time for rigging work.
  • AttrTools 2.0 – Another stab at attribute work to make working with Maya attributes more user friendly.


For the next post, we’ll let you know a bit more about what our plans are for 2018. Thanks !

Wed, Jan 18th, 2017
posted by jjburton 08:01 PM

The problem: storing a dag node component in a way that makes it easily callable and persistent.

As I’ve been both refactoring/optimizing our core libraries and updating Locinator, I came across this old issue. There are several ways of doing this, some better than others. I’ve just been wrapping up a rewrite of our attribute function library; part of that was rolling the msgList concept out of cgmMeta so it works outside meta as well, and expanding on it with datList (more on that another day).

Short version

If you don’t care about the details and just wanna see code, grab the last master branch build of our tools and you can find the main functions here:

  • cgm.core.lib.attribute_utils.set_message/get_message
  • Walkthrough example of datList/msgList with new stuff —
  • Note — There may be a lot of script editor activity on the example stuff as I have DEBUG on in the module currently.

Long version

Let’s say we wanna store an object ‘null1’ to call later, and we’re storing it on ‘storageNull’. How might we do that?

  • string attr – example: storageNull.stringAttr = null1
    • This works as long as there is only one object named ‘null1’ and as long as ‘null1’ is never renamed. So in short, it works rather poorly.
  • msgAttr – example: storageNull.msgAttr >>connection>> null1.msg
    • This works great and was my preferred method up to this point.

The conundrum on Locinator was that I had some locator types that were created from a component, say ‘geo.vtx[123]’ for example. My solution back in 2010-ish when I wrote it was to just use a string for the whole thing and hope there wasn’t a name conflict.

So, how might we store this in a persistent manner? Having learned a few things since back in twenty ought ten, I said: self, we can do better than that now.

The new implementation is as follows:

  1. We take the data to be stored and split the base node from any component or attribute. Namely, we split at the first ‘.’ and validate the bits to know what we have
  2. Store the main node as a standard message connection
  3. Store the extra bits to a json dict via Red9’s json string implementation. We also allow for a specified dataAttr (our extra data attr) and dataKey (for the dict) for specific storage

So in this case our ‘geo.vtx[123]’ is split to the following:

  • storageNull.msgAttr >>connection>> geo.msg
  • storageNull.dataAttr = {msgAttr/dataKey:vtx[123]}

We do this as a dict and not a simple string attr per stored object because we use lots of these and having two attrs for every stored message seemed overkill. Once I’d worked out the component store, attribute storing was pretty simple. If we wanted to also add ‘geo2.tx’, it would be added as:

  • storageNull.msgAttr2 >>connection>> geo2.msg
  • storageNull.dataAttr = {msgAttr/dataKey:vtx[123], msgAttr2/dataKey2:tx}

The dataKey comes in particular use with our datList/msgList setup which is our solution to multi message attrs being rubbish for maintaining ordered data.

When the get_message call happens, it first gets the msgAttr and then checks the default extra data attr if none is specified. Whenever extra data is found, it gets appended to the return.
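Outside Maya, the split-and-store scheme above can be sketched in plain Python. The set_message/get_message here are simplified stand-ins for the real cgm.core.lib.attribute_utils calls: a dict stands in for the actual message connections and json string attr, so treat this as an illustration of the idea, not the library’s API.

```python
def set_message(storage, attr, target, data_attr='dataAttr'):
    """Store a 'node.component' style target: the node goes in a
    message-style slot, the extra bits in a shared dict."""
    node, _, extra = target.partition('.')   # split at the first '.' only
    storage[attr] = node                     # stands in for the message connection
    if extra:                                # 'vtx[123]', 'tx', etc., keyed per attr
        storage.setdefault(data_attr, {})[attr] = extra

def get_message(storage, attr, data_attr='dataAttr'):
    """Rebuild the stored target, re-appending any extra data found."""
    node = storage.get(attr)
    if node is None:
        return None
    extra = storage.get(data_attr, {}).get(attr)
    return '{0}.{1}'.format(node, extra) if extra else node

storageNull = {}
set_message(storageNull, 'msgAttr', 'geo.vtx[123]')
set_message(storageNull, 'msgAttr2', 'geo2.tx')
```

Because the node half rides on a real message connection in Maya, renames don’t break it; only the component/attribute remainder lives as string data.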

Yes, you can do some of this stuff with objectSets or other avenues and sometimes those work great. This
is simply another way of storing data mainly for our rigging purposes.

Still refining this but happy so far. Thus ends this post.


Sun, Jan 1st, 2017
posted by jjburton 11:01 PM

We have some big plans this year. Plans to get moving on taking gigs and also delivering on some long overdue promises.

Over the last few years, we’ve been doing a ton of r&d with the Morpheus Rig 2.0 project and it’s time to refine that work into something usable both for us and our users. We started some of that this last fall with meshTools but there’s a long way to go.

  • New Marking menu
    • This will be at the center of our new rigs and systems. Many of the concepts and ideas were fleshed out during the Morpheus project and this is a major elaboration of that effort. You can see the framework for that here. As a short overview, there are currently two modes:
      • TD — This is a replacement for our old tdTools. Having more stuff at a single button press proved very helpful with the Morpheus marking menu and it made sense to expand on that. This provides access to:
        • Raycasting
        • Snapping
        • Contextual tools
        • Locinator (currently rewriting)
        • A myriad of utilities and much more
      • Anim — This is just like the old anim marking menu plus a few new features.
      • Eventually there will be a Puppet mode similar to what our users were testing for Morpheus 2.0
  • Core Rewrite
    • This work began in November 2016 and is ongoing. We’ve been bringing into cgm.core those functions and modules that are necessary for our next steps and will eventually cull out the old cgm.lib.
  • Morpheus Rigging System
    • In order to take jobs again in the time windows we have, we will be pushing our rigger to completion and, along with that, delivering at least a rig or two to the community. This involves a bit of re-imagining of some concepts, but we feel this is the best way to get our users and backers the most functional setup we can deliver. I’m not gonna flesh out all of our ideas here; having failed to deliver what I’d hoped for Morpheus 2.0 initially, there is a rather understandable gaping canyon of trust around delivering. When it’s done, you’ll see it. Those that are involved on either our cgmTools or Morpheus slack channels will hopefully help test and push things. If anyone wants to join those, message us here or on facebook.
    • Rigs
      • Biped base Morpheus
      • Some sort of quad rig to push some other modules through the rigger.
  • Internal Project
    • We’ve had an internal content project on hold for way too long and we plan on getting that rolling this year



Fri, Dec 23rd, 2016
posted by jjburton 09:12 PM

First of all, what is ray casting? Ray casting in Maya is when one of several API functions is called which, given a start point, a direction vector and shapes to hit, returns points of intersection.
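As a rough illustration of the idea in pure Python (not the Maya API calls we actually use), a single ray-versus-triangle test can be written with the classic Möller–Trumbore method; the function name and tuple-based vectors here are just for this sketch:

```python
def cast_ray(start, direction, triangle, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection: returns the hit
    point or None. All vectors are (x, y, z) tuples."""
    def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                             a[2]*b[0]-a[0]*b[2],
                             a[0]*b[1]-a[1]*b[0])
    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    v0, v1, v2 = triangle
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, e2)
    a = dot(e1, h)
    if abs(a) < eps:            # ray is parallel to the triangle plane
        return None
    f = 1.0 / a
    s = sub(start, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:      # outside the first barycentric bound
        return None
    q = cross(s, e1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:  # outside the triangle
        return None
    t = f * dot(e2, q)          # distance along the ray
    if t < eps:                 # intersection behind the start point
        return None
    return (start[0] + direction[0]*t,
            start[1] + direction[1]*t,
            start[2] + direction[2]*t)
```

A mesh is just many of these triangles; Maya’s API functions do this (and the nurbs equivalent) for you across whole shapes.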

Turns out you can use that information for all kinds of things. For several years now, we’ve been using it to place follicles, cast curves and shapes on other meshes and other functions. A few months ago, I took a quick pass at adding a snap to function to our implementation where a user selects objects to snap, activates the tool and then casts a ray in scene to get a point in space to snap to. It worked but penetrations were rampant and I planned on revisiting it when I had some time.

Recently I found I had small chunks of time and this was one of the things that seemed useful to use one of those chunks for.

The solution we ended up with is as follows:

  1. Objects are selected
  2. The tool is activated
  3. The user left clicks the screen to cast a ray given the options they’ve provided via the marking menu
  4. A locator is generated and continuously updated while the key is held down
  5. When the left click is released, the snap targets:
    1. Cast another ray, either along their ‘down’ axis or back toward the hit point, depending on orient mode
    2. The first mesh hit is assumed to be the driven shape of the control or object and provides the offset distance to use
    3. The targets are snapped to a new point in space, pushed out from the hit point along the normal of the mesh or nurbs surface that was hit, by the detected offset distance (or a fixed amount provided via the marking menu)
    4. The objects are oriented (if required)

The core of our functionality for this pass is found in:

  • cgm.core.lib.rayCaster — I simplified our call to a more generic rayCaster.cast rather than breaking down multi hit and other modes via separate calls. Also added normal returns from hit points as it was necessary for the offsetting
  • cgm.core.classes.DraggerContextFactory.clickMesh — oh so much…
    • Added offsetting
    • Cast plane mode. Can create objects on a function generated cast plane of x,y,z
    • vectorLine — new create type for visualizing vectors and normals
    • data — new create type to just get data
    • object axis args — for orient stuff
    • Duplication — Selected objects are duplicated and snapped with each left click until the tool is dropped.
  • cgm.core.lib.math_utils.get_vector_of_two_points — Self evident.
  • cgm.core.lib.distance_utils.get_pos_by_vec_dist —  Get a point along a ray given a point, ray and distance along that ray
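The two math helpers at the bottom of that list boil down to a few lines of vector math. A hedged pure-Python sketch follows; the real cgm signatures may differ, this just shows the arithmetic:

```python
import math

def get_vector_of_two_points(p1, p2):
    """Normalized direction vector pointing from p1 to p2."""
    v = (p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2])
    length = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    return (v[0]/length, v[1]/length, v[2]/length)

def get_pos_by_vec_dist(start, vec, distance):
    """Point at 'distance' along 'vec' from 'start' -- this is how the
    snap offset point is pushed out along the hit normal."""
    return (start[0] + vec[0]*distance,
            start[1] + vec[1]*distance,
            start[2] + vec[2]*distance)
```

The snap step above is then just get_pos_by_vec_dist(hit_point, hit_normal, offset_distance).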

Lessons learned:

  • Not 100% satisfied on current orient mode and I think Bokser may take a stab at that
  • Maybe I was the only one still using it, but zoo’s baseMel UI has some serious slowdown in 2016. Plain mc. calls are much, much faster. I’m culling out our usage as I can for speedier UIs.
  • Initially I was using a vector from the hit point to the snap object as the offset vector, but it proved inconsistent. For example, if you cast to the far side of a mesh with a ‘far’ cast, the offset put it inside the shape that was hit. Ended up finding the normal at the mesh/nurbs hit point to be a much better offset vector to use.
  • There are some issues with the Maya API 2.0 that folks should be aware of if you want to mess with this stuff yourselves. These were all found to be true in Maya 2016.
    • meshFn.allIntersections — When casting at poly edges, 2.0 fails. 1.0 does not
    • surfaceFn.intersect — Nurbs surface UV returns a different rawUV than 1.0’s. 1.0’s normalizes as expected, 2.0’s does not
    • surfaceFn.normal — Nurbs surface normal return is junk and broken with 2.0. 1.0’s is just fine.

More on all of this, a vid or two and a new tool to play with in a few weeks.

Wed, Sep 21st, 2016
posted by jjburton 11:09 AM

More vids on specific tool pages. See links inline

Sometimes you just gotta ship something.

For a LONG time now, I’ve been struggling to get Morpheus 2 where I wanted it. With only a small window because of personal stuff, I wanted to get something done. It’s also been way too long since we’ve released a ‘solid’ tool build, so I wanted to do that here.

I’ll keep this post updated with new builds as they become more stable until the next major release.

  • Build – 09.22.2016
    • Path fixes that may have been causing some folks issues
    • Soft selection evaluation base functions in
    •  math
      • Most math functions now work with soft select evaluation
      • Added Reset to Targets to Base section
      • Added CopyTo to Target Math section
  • Build – 09.21.2016

So, I made a new tool encompassing a chunk of the tech from Morpheus 2’s development into a manner that is more user friendly. An overview of some of the tech added:

  • Versions — Things should be working from Maya 2011 – 2017
    • 2017
      • Worked on resolving a host of issues. From gui hard crashing to zoo.path stuff mentioned in a blog post last month.
  • Help — cgmTools>Help
    • Added Report issue — link to the bitbucket report issue form. Please use this to report issues.
    • Get Builds — link to page to download wip builds
  • cgmMeshTools — cgmTools>rigging>cgm.meshTools.
    •  MeshMath
      • Symmetry evaluation implemented
      • Base to target functions/selections
      • Lots of math functions for working with mesh targets – normally blendshape work.
    • Ray Casting
      • ClickMesh
        • Added Nurbs Support
        • Added Snap support – Select targets, activate and snap stuff to any geo you have loaded as targets or in the scene. This is something I wanted to do way back when I first started playing with rayCasting and I’m happy to check that box
        • Follicles on nurbs now work
      • Curve Slice — Lathe curves from objects within mesh objects
      • Curve Wrapping — More advanced curve lathing
      • Implemented multi surface casting to most functions
    • Utils
      • Proximity Mesh/Query — Create proximity mesh or selections from one mesh to another
  • Snap Marking Menu — cgmTools>Hotkeys>Snap Tools
    • Added the rayCasting snap
  • cgmMeta
    • A lot of the optimization from last month is in the build.
  • Web documentation
    • Check the side bar here to find the new tool sections (meshTools, cgmMMSnap)
  • cgmHotkeyer
    • Back in Maya 2016, zoo’s hotkey setup no longer worked because of Maya changes. We wrote our own and all hotkey setup uses that now.
  • Other stuff – as the last released build was years ago, there is a HUGE amount of tools and functions implemented.



Mon, Aug 29th, 2016
posted by jjburton 02:08 PM


We’re pleased to announce our first on demand class with Rigging Dojo – Intro to Metadata. This is our first class of this type in general and we hope folks find it helpful. Click on the pic above or here….

This class was created with two purposes in mind:

  • To share some of the many lessons learned over the past several years working with red9’s great code base
  • To provide a basic foundation of knowledge for those wanting to delve into Morpheus Rig 2’s continued development.

Some might wonder why you might want to use red9’s code base or what benefits in particular you might find. The easiest way to give a quick example is to show a typical rigging task with and without meta. Let’s look at something one does pretty regularly while rigging: do some stuff on a given joint chain.

Note — this exercise was painful to write as I’d forgotten most of the standard calls and ways to do stuff as so much is just built in now…

First, open up maya and make an amazing joint chain. If it’s not amazing, that’s okay – start over and do it again.

Here’s some standard code based on a selected joint chain:

import maya.cmds as mc

def jointStuff_standard():
    l_joints = mc.ls(sl = 1)#...assuming a selected joint chain; the original call was truncated

    for jnt in l_joints:#Validation loop before doing stuff...
        if not mc.objectType(jnt) == 'joint':
            raise ValueError("Not a joint: {0}".format(jnt))
    for i,jnt in enumerate(l_joints):
        #First we're gonna create a curve at each joint. Name, parent and snap it ...
        jnt = mc.rename(jnt,"ourChain_{0}_jnt".format(i))#...gotta catch stuff when you rename it
        str_crv = mc.circle(normal = [1,0,0], ch = 0)[0]#...assuming a circle here; the original call was truncated
        str_crv = mc.parent(str_crv,jnt)[0]#...gotta catch stuff when you parent it
        str_crv = mc.rename(str_crv, '{0}_crv'.format(jnt))#...everytime it changes
        mc.delete(mc.parentConstraint(jnt, str_crv, maintainOffset = False))
        #Now we wanna add a locator at each joint - matching position, orientation and rotation order
        loc = mc.spaceLocator(n = "{0}_loc".format(jnt))[0]
        ro = mc.getAttr('{0}.rotateOrder'.format(jnt))
        mc.setAttr('{0}.rotateOrder'.format(loc), ro)
        mc.delete(mc.parentConstraint(jnt, loc, maintainOffset = False))
        #Now if we wanna store data on each object one to another...
        mc.addAttr(jnt, ln='curveObject', at='message')
        mc.connectAttr(str_crv+".message", jnt+'.curveObject')
        mc.addAttr(str_crv, ln='targetJoint', at='message')
        mc.connectAttr(jnt+".message", str_crv+'.targetJoint')
        mc.addAttr(loc, ln='source', at='message')
        mc.connectAttr(jnt+".message", loc+'.source')
        #...this contains none of the checking and verifying built in to metaData; if you tried this on message attributes that existed or were locked or 15 other scenarios, it would fail

Here’s meta code. Simpler. Clearer. Much faster to write.

def jointStuff_meta():
    ml_joints = cgmMeta.validateObjListArg(mc.ls(sl = 1), mayaType = 'joint')#...gets us a validated meta data list of our selection
    for i,mJnt in enumerate(ml_joints):
        mi_crv = r9Meta.MetaClass(mc.circle(normal = [1,0,0], ch = 0)[0])#...assuming a circle; the original call was truncated
        mc.parent(mi_crv.mNode, mJnt.mNode)
        mi_crv.rename('{0}_crv'.format(mJnt.p_nameBase))#...p_nameBase property cgmMeta only
        mc.delete(mc.parentConstraint(mJnt.mNode, mi_crv.mNode, maintainOffset = False))
        #...same data storage
        mJnt.doLoc()#..doLoc cgmMeta only

If this looks like something you’d like to delve into, check out the class. I wish there was a class like this out there when I started with the meta stuff 4 years ago. Hope you find it helpful:)


Thu, Jul 21st, 2016
posted by jjburton 02:07 PM

As I was prepping Morpheus Rig for public dev release I found some pretty awful slowdowns in our code base. As I’m also working on an Intro to Meta course for Rigging Dojo, it seemed like a good time to resolve some of those issues.

So that was most of this week.

Before digging in,  a little foundation. Our code base is a meta data system that relies heavily on red9’s MetaClass and caching in order to function. So when I dug into issues I needed to find if they were on our end or optimizations that could happen in red9 itself.

How does one start to analyze where the slowdowns are and fix them? I’m sure there are more intelligent and efficient ways, but being a mostly self-taught coder I decided to lean on my junior high science lesson and use the scientific method: devising questions and seeking to answer them with simple, direct tests. So to start, I came up with some questions I wanted to answer.


  • Does scene size have an effect on certain calls?
  • Does cache size have an effect?
  • Are there things that, when iterated on, make each subsequent identical call slower?
  • Are there ways to make failed metaclass nodes fail sooner, with fewer errors and clearer ones to boot?


  • Unit tests in our code base made speed checking and function breaking much easier than not having them
  • A simple setup for iteration tests where I could easily change what was being called, then check speed differentials between functions based on a given scene size of objects or on iterating new objects every round

Here’s a sample test call (warning – it’s a bit messy):

def speedTest_substantiation(*args, **kws):
    """Test for seeing how substantiation speeds are affected"""
    _d_build = {'network':'network'}
    class fncWrap(cgmGeneral.cgmFuncCls):
        def __init__(self,*args, **kws):
            super(fncWrap, self).__init__(*args, **kws)
            self._str_funcName = 'speedTest_substantiation'
            self._b_reportTimes = 1 #..we always want this on so we're gonna set it on
            self._b_autoProgressBar = True
            self._l_ARGS_KWS_DEFAULTS = [{'kw':'targetCount',"default":10,"argType":'int','help':"How many objects to create"},
                                         {'kw':'build',"default":'network',"argType":'string','help':"What kind of base node to build to test"}]
            self.__dataBind__(*args, **kws)
            #Now we're gonna register some steps for our function...
            self.l_funcSteps = [{'step':'Validating Args','call':self._validate_},
                                {'step':'Build stuffs','call':self._buildStuff_},
                                {'step':'Iterate','call':self._iterate_},
                                {'step':'Report','call':self._reportHowMayaIsStupid_}]

        def _validate_(self):
            self.int_targetCount = int(cgmValid.valueArg(self.d_kws['targetCount'],noneValid=False))
            self.str_nodeType = self.d_kws['build']
            #Buffers for our per-call times, returned instances and created objects...
            self.l_times_1 = []
            self.l_times_2 = []
            self.l_times_3 = []
            self.l_times_4 = []
            self.l_roots_1 = []
            self.l_roots_2 = []
            self.l_roots_3 = []
            self.l_roots_4 = []
            self.l_objects = []

        #The three substantiation methods we're comparing...
        def test1_func(self,string):
            return r9Meta.MetaClass(string)
        def test2_func(self,string):
            return cgmMeta.cgmNode(string)
        def test22_func(self,string):
            return cgmMeta.cgmObject(string)
        def test3_func(self,string):
            return cgmMeta.validateObjArg(string,'cgmNode',setClass = False)

        def _buildStuff_(self):
            for i in range(self.int_targetCount):
                self.progressBar_set(status = ("Creating obj %i"%i), progress = i, maxValue = self.int_targetCount)
                _jnt = mc.joint(n = "obj_{0}".format(i))
                self.l_objects.append(_jnt)

        def _iterate_(self):
            self.call2_func = self.test2_func#...swap in test22_func here to test cgmObject instead
            for i in range(self.int_targetCount):
                self.progressBar_set(status = ("Pass 1: Substantiating Call %i"%i), progress = i, maxValue = self.int_targetCount)
                _obj = self.l_objects[i]
                t1 = time.clock()
                self.l_roots_1.append(self.test1_func(_obj))
                t2 = time.clock()
                self.l_times_1.append(t2 - t1)
                t1 = time.clock()
                self.l_roots_2.append(self.call2_func(_obj))
                t2 = time.clock()
                self.l_times_2.append(t2 - t1)
                t1 = time.clock()
                self.l_roots_3.append(self.test3_func(_obj))
                t2 = time.clock()
                self.l_times_3.append(t2 - t1)

        def _reportHowMayaIsStupid_(self):
            #...format args below were filled back in to match the sample output
            _m1_time = sum(self.l_times_1)
            _m2_time = sum(self.l_times_2)
            _m3_time = sum(self.l_times_3)
            for i,t in enumerate(self.l_times_1):
                self.progressBar_set(status = ("Pass 1: Reporting %i"%i), progress = i, maxValue = len(self.l_times_1))
                _dif1 = t - self.l_times_2[i]
                _dif2 = t - self.l_times_3[i]
                self.log_info("Step {0} | MetaClass: {1}| cgmNode: {2}(d{4}) | validate: {3}(d{5})".format(i,"%0.4f"%t,
                                                                                                           "%0.4f"%self.l_times_2[i],
                                                                                                           "%0.4f"%self.l_times_3[i],
                                                                                                           "%0.4f"%_dif1,
                                                                                                           "%0.4f"%_dif2))
            self.log_info(cgmGeneral._str_headerDiv + " Times " + cgmGeneral._str_headerDiv + cgmGeneral._str_subLine)
            self.log_info("Count: {0} | MetaClass: {1} | cgmNode: {2} | validate: {3}".format(self.int_targetCount,
                                                                                              "%0.4f"%_m1_time,
                                                                                              "%0.4f"%_m2_time,
                                                                                              "%0.4f"%_m3_time))
            self.log_info("Method 1 | Start: {0} | End: {1} | Difference: {2} | Total: {3} ".format("%0.4f"%self.l_times_1[0],
                                                                                                    "%0.4f"%self.l_times_1[-1],
                                                                                                    "%0.4f"%(self.l_times_1[-1] - self.l_times_1[0]),
                                                                                                    "%0.4f"%_m1_time))
            self.log_info("Method 2 | Start: {0} | End: {1} | Difference: {2} | Total: {3} ".format("%0.4f"%self.l_times_2[0],
                                                                                                    "%0.4f"%self.l_times_2[-1],
                                                                                                    "%0.4f"%(self.l_times_2[-1] - self.l_times_2[0]),
                                                                                                    "%0.4f"%_m2_time))
            self.log_info("Method 3 | Start: {0} | End: {1} | Difference: {2} | Total: {3} ".format("%0.4f"%self.l_times_3[0],
                                                                                                    "%0.4f"%self.l_times_3[-1],
                                                                                                    "%0.4f"%(self.l_times_3[-1] - self.l_times_3[0]),
                                                                                                    "%0.4f"%_m3_time))
            self.log_info("Compare 2:1| Dif: {0} | Dif: {1} |                    Total: {2} ".format("%0.4f"%(self.l_times_1[0] - self.l_times_2[0]),
                                                                                                     "%0.4f"%(self.l_times_1[-1] - self.l_times_2[-1]),
                                                                                                     "%0.4f"%(_m1_time - _m2_time)))
            self.log_info("Compare 3:1| Dif: {0} | Dif: {1} |                    Total: {2} ".format("%0.4f"%(self.l_times_1[0] - self.l_times_3[0]),
                                                                                                     "%0.4f"%(self.l_times_1[-1] - self.l_times_3[-1]),
                                                                                                     "%0.4f"%(_m1_time - _m3_time)))
    return fncWrap(*args, **kws).go()

Here’s the output…

speedTest_substantiation >> Step 998 | MetaClass: 0.0019| cgmNode: 0.0008(d0.0011) | validate: 0.0015(d0.0004)
speedTest_substantiation >> Step 999 | MetaClass: 0.0019| cgmNode: 0.0008(d0.0011) | validate: 0.0013(d0.0006)
speedTest_substantiation >> /// Times ///----------------------------------------------------------------------------------------------------
speedTest_substantiation >> Count: 1000 | MetaClass: 2.0867 | cgmNode: 0.8801 | validate: 1.5791
speedTest_substantiation >> Method 1 | Start: 0.0022 | End: 0.0019 | Difference: -0.0003 | Total: 2.0867 
speedTest_substantiation >> Method 2 | Start: 0.0010 | End: 0.0008 | Difference: -0.0002 | Total: 0.8801 
speedTest_substantiation >> Method 3 | Start: 0.0017 | End: 0.0013 | Difference: -0.0003 | Total: 1.5791 
speedTest_substantiation >> Compare 2:1| Dif: 0.0012 | Dif: 0.0011 |                    Total: 1.2066 
speedTest_substantiation >> Compare 3:1| Dif: 0.0006 | Dif: 0.0006 |                    Total: 0.5076 
speedTest_substantiation >>  [TIME] -- Step: 'Report' >>  5.392 
speedTest_substantiation >> /// Times ///----------------------------------------------------------------------------------------------------
# Warning: cgm.core.cgm_General :  speedTest_substantiation >> /// Total : 10.829 sec ///---------------------------------------------------------------------------------------------------- # 

Issues and Solutions

  • General
    • It doesn’t appear to be the iterating itself that is causing the slowdown but some other process
    • Reloading meta resets the slowdown to base (after the file new/open fix)
  •  cgm
    • cgmNode was much slower than a MetaClass node
      • Short version – I had a list.extend() where I should have had an if a not in list: list.append(a)
      • Long version – Tracked down an issue where every time cgmNode was called (a lot), it microscopically slowed the next call. On a subclass of r9Meta.MetaClass I was extending the UNMANAGED class list with some attributes in my root subclass’s __init__; doing so added duplicate attributes to that list any time my subclass was substantiated after the initial reload of Meta. That caused some of the subfunctions to do that many extra steps every time they were called. So, long story short, every time my subclass substantiated after a meta reload it got minusculely slower. When that call happens tens or hundreds of thousands of times, it adds up.
      • Also was curious whether having properties or too many functions would slow substantiation speeds; the answer is, not really.
      • I was also concerned that use of a function class I’d been experimenting with was causing slowdown; I didn’t come to a full answer on this one yet.
      • autofill flag – There is a flag in MetaClass for autofilling attrs for auto completion to work. Turns out it’s a pretty big hit. Changed our autofill to off and it’s considerably faster than MetaClass.
        • 1000 joint test – red9.MetaClass(autofilldefault) – 2.0699s | cgmNode – .8944s  | validateObjArg – 1.5777s
        • 1000 joint test – red9.MetaClass(autofill – False) – 1.s | cgmNode – .8944s | validateObjArg – 1.5777s
    • validateObjArg was dog slow
      • Completely rewrote this
      • Decided to go at it a different way and found some nice savings
      • for meta node conversion  — Post rewrite – 1000 node conversion test – red9 – 238.129s | cgm – 8.965s
  • red9
    • Reloading red9 appended a duplicate file new/open check every time. This caused a growing list of errors in the script editor and increased file new/open times.
      • Code change suggested to red9
    • 3 issues in one – 1) a single deleted meta node generated up to 6 errors on an empty scene, and this of course grows the bigger the scene is; 2) error messages were nonspecific, giving no insight into what was actually failing; 3) a corrupted node could break the cache when called
      • Proposed two additional MetaClass attrs – _LastDagPath and _lastUUID – which are displayed when a node fails, so you know what failed
      • Proposed allowing failed nodes to attempt to auto remove themselves from the cache when they fail
      • Proposed some changes that immediately raise an exception rather than continuing to process, so a failed node state is reached as quickly as possible
    • convertMClassType gets slower the denser the scene
      • rewrote cgmMeta.validateObjArg. Will talk to Mark on this one.
    • Hierarchical depth has a direct influence on substantiation speeds
      • Created test where for each iteration a new joint is created and parented to the last so at the end you have a 1000 joint chain
      • Base results- red9.MetaClass – start :.001s | end: .018s | total: 8.837s
      • Oddly enough, if you pass shortNames of the children joints on call instead of the .mNode strings (long names), it cuts the per-call time at the end from .018s to .010s, for a total of 5.571s
      • Talking to Mark on this one.
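
The cgmNode bug described above is easy to reproduce in plain Python. The sketch below is not the actual cgm/red9 code (class and attribute names are stand-ins); it just shows how extending a shared class-level list in __init__ silently grows that list with every instantiation, while a membership check keeps it stable.

```python
# Minimal illustration (plain Python, not the actual cgm/red9 code) of the
# bug described above: extending a shared list in __init__ adds duplicates
# on every instantiation, and every later call that walks the list pays
# for those duplicates.

UNMANAGED = ['mNode', 'mNodeID']          # stand-in for the shared class list

class BuggyNode(object):
    def __init__(self):
        # Wrong: appends the same attrs again on every instantiation
        UNMANAGED.extend(['cgmName', 'cgmType'])

class FixedNode(object):
    def __init__(self):
        # Right: only add attrs that aren't already tracked
        for attr in ['cgmName', 'cgmType']:
            if attr not in UNMANAGED:
                UNMANAGED.append(attr)

for _ in range(1000):
    BuggyNode()
print(len(UNMANAGED))   # 2002 -- the list grew with every instantiation

del UNMANAGED[2:]       # reset to the two original entries
for _ in range(1000):
    FixedNode()
print(len(UNMANAGED))   # 4 -- stable no matter how many nodes are made
```

Each buggy instantiation makes every later list walk a little slower, which is exactly the "minusculely slower every call" behavior the speed test surfaced.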

Why should you care?

The end result of this pass: a crazy 5-hour rig build anomaly for Morpheus was pared down to 40 minutes after the cgmNode fixes, and to 31 minutes after the cgmValidateObjArg rewrite. This is in Maya 2011; newer versions of Maya are more efficient, and it will get better still as we work through more optimization.

Note, none of my optimizations are in red9’s core yet. Mark is on vacation, and most of those fixes wouldn’t help anyone but a coder.





Wed, May 4th, 2016
posted by jjburton 09:05 PM

Anyone who’s coded against blendshapes knows it can be tedious, especially when you bring in inbetweens. Thankfully, Autodesk is fixing a lot of that in 2016 Extension 2, if you missed that update, but plenty of folks are still on older versions and it doesn’t resolve everything. We have to deal with blendshapes a good bit on Morpheus 2, so we wrote a metaclass for them.

Initial features of the cgmBlendshape metaclass that you can’t easily do with normal api or mc/cmd calls:

  • Most functions work off of index/weight or shape/selection format
  • Easy alias naming
  • Replacing shapes — change out shapes in place keeping inbetweens and connections intact
  • Extract shapes — extract shapes from index/weight calls and supporting multipliers to the delta difference
  • Shape restoration — replace deleted shapes on the fly. Recreate a shape from delta and base information and plug it back in for further editing
  • Subclass of cgmNode, so all those functions carry over as well
  • Tested in 2011 and 2016
  • NOTE – this is a WIP metaclass and will undergo lots of changes

Before we get into the specifics of the metaclass, here are some general lessons learned about blendshapes from working through this.

  • A blendshape target has several bits of important information
    • Index — this is its index in the blendshape node. Note – not necessarily sequential.
    • Weight — this is the value at which this shape is fully ‘on’. Usually it is 1.0. Inbetween shapes fall between 0 and 1.0.
    • Shape — this is the shape that drives the blendshape channel
    • Dag — the dag node for the shape
    • Alias — the attribute corresponding to its index in the weight list. Typically it is the name of the dag node.
    • Plug — the actual raw attribute of the shape on the node. ‘BSNODE.w[index]’
    • Weight Index — follows a maya formula of index = wt * 1000 + 5000. So a 1.0 weight is a weight index of 6000.
  • The way maya stores info
    • Blendshape data is stored in these arrays in real time, so if you query the data while your base mesh isn’t zeroed out, the current transformation is baked into the result
    • The caveat to that is that targets that have their base geo deleted are ‘locked’ in to their respective data channels at the point they were when deleted. Their delta information is frozen.
    • BlendshapeNode.inputTarget[0].inputTargetGroup[index].inputTargetItem[weightIndex]
      • inputTarget — this is most often 0.
      • inputTargetGroup — information for a particular shape index
      • inputTargetItem — information for a particular weight index
    • Sub items at that index
      • inputPointsTarget — this is the differential (delta) data of the point positions transformed by a given shape target. It is indexed to the inputComponentsTarget array
      • inputComponentsTarget — these are the components affected by a given shape
      • inputGeomTarget — this is the geo affecting a particular target shape
  • Replacing blendshapes – you can 1) use a copy-geo function, if the point count matches exactly, to change the shape to what you want, or 2) write a function to do it yourself. There’s no great way to replace a shape except to rebuild that whole index or the node itself, so we made a function to do that
  • Once a blendshape node is created with targets, the individual targets are no longer needed and just take up space. Especially when you have the easy ability to extract shapes.
  • Getting a base for calculating delta information. As blendshapes are stored as deltas off of the base, the best way I could find to get that delta was to turn off all the deformers on the base object, query it, and then turn the envelopes back on and reconnect them. I’m sure there are more elegant solutions but I was unsuccessful in finding one.
    • Once you have that, creating a new mesh from an existing one is as simple as:
      • Taking the base data
      • For components affected at a given index/weight: adding the delta to the base
      • Duplicating the base and calling xform(t=vPos, absolute=True) on each of the verts, which gives you a duplicate shape
  • Aliasing weight attributes – mc.aliasAttr('NEWNAME', 'BSNODE.w[index]')
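
The weight-index formula above is easy to get backwards, so here's a tiny helper pair in plain Python (the function names are mine, not part of cgm) that converts both ways:

```python
# Maya's inputTargetItem weight-index formula, as noted above:
#     index = wt * 1000 + 5000
# so a full-weight (1.0) target lives at index 6000, and an
# inbetween at 0.5 lives at 5500.

def weight_to_index(wt):
    """Convert a target weight (0.0-1.0) to an inputTargetItem index."""
    return int(round(wt * 1000 + 5000))

def index_to_weight(idx):
    """Convert an inputTargetItem index back to a target weight."""
    return (idx - 5000) / 1000.0

print(weight_to_index(1.0))   # 6000 -- a full-weight target
print(weight_to_index(0.5))   # 5500 -- an inbetween at 0.5
print(index_to_weight(6000))  # 1.0
```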
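
The base-plus-delta reconstruction described above can be sketched without any Maya calls. This is a geometry-free illustration (the function and argument names are mine, not cgm's); in Maya you'd push the results back with mc.xform(vert, t=pos, absolute=True), but the arithmetic is just base position plus delta per affected component:

```python
# Sketch of the delta math from the lessons above: a blendshape target
# stores per-component deltas off the base, so rebuilding a shape is
# base + (delta * multiplier) for each affected component, and the
# untouched base position everywhere else.

def rebuild_positions(base_positions, deltas, multiplier=1.0):
    """Return new point positions for a reconstructed target shape.

    base_positions : list of (x, y, z) tuples, one per vertex
    deltas         : {vertex_index: (dx, dy, dz)} for affected components
    multiplier     : scale on the delta (e.g. 0.5 for an inbetween)
    """
    result = []
    for i, pos in enumerate(base_positions):
        if i in deltas:
            d = deltas[i]
            result.append(tuple(p + dv * multiplier for p, dv in zip(pos, d)))
        else:
            result.append(pos)   # component untouched by this target
    return result

base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
deltas = {1: (0.0, 2.0, 0.0)}    # only vertex 1 moves in this target
print(rebuild_positions(base, deltas))        # vertex 1 -> (1.0, 2.0, 0.0)
print(rebuild_positions(base, deltas, 0.5))   # vertex 1 -> (1.0, 1.0, 0.0)
```

The multiplier argument is the same idea as the factored extraction the metaclass supports: 0.5 rebuilds the halfway inbetween, 2.0 overdrives the shape.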

Here’s a dummy file I used for testing:

Here’s some code to play with the first iteration. You’ll need to grab the MorpheusDev branch on bitbucket if you wanna play with it till I push it to the main branch.

Author: Josh Burton

Website :

# Help for learning the basics of cgmMeta.cgmBlendshape
import cgm.core
from cgm.core import cgm_Meta as cgmMeta
cgm.core._reload()#...this is the core reloader

#>> cgmMeta.cgmBlendshape
import maya.cmds as mc

#You MUST have the demo file to work through this exercise, though you could probably glean the gist without it using your own setup

#>>Starting off =========================================================================
bs1 = cgmMeta.cgmBlendShape('pSphere1_bsNode')#...let's initialize our blendshape
bs1._MFN #...here you'll find the api blendshape deformer call should you be inclined to use it

#>>bsShape Functions =========================================================================
#We're referring to the shapes that drive a blendshape node's base object here, and the functions relating to them
#Doing this first will make the blendshape wide functions make more sense on the queries and what not.

bs1.bsShape_add('base1_add')#...we're gonna add a new shape to our node. Since no index is specified, it just chooses the next available
bs1.bsShape_add('base1_add', 8)#...let's  specify an index
#...hmm, our add throws an error because that name is taken. let's fix it
bs1.bsShape_add('base1_tween', 0, weight = .5)#...we're gonna add a new inbetween shape by its geo, index, and weight

#Replace functions...
#...replacing is not something easily done in basic maya calls
bs1.bsShape_replace('base1_replace','base1_target')#...replace with a "from, to" call.
bs1.bsShape_replace('base1_target','base1_replace')#...and back

#...Note - the inbetween is intact as is the driver connection
bs1.bsShape_replace('base1_replace',0)#...index calls also work for most functions

#To work with a blendshape target you need both an index and a weight to know exactly what you're working with
bs1.bsShape_index('base1_target')#...this will return a list of the indices and weights which this target affects in [[index,weight],...] format
bs1.bsShape_index('base1_add')#...this will return a list of the indices and weights which this target affects in [[index,weight],...] format

bs1.bsShape_getTargetArgs('base1_target')#...this returns data for a target in the nested list format expected by mc.blendShape for easier use

#>>Blendshape node wide functions =========================================================================
bs1.get_targetWeightsDict()#...this is a handy call for just getting the data on a blendshape in {index:{weight:{data}}} format
bs1.get_indices()#...get the indices in use on the blendshape from the api in a list format
bs1.bsShapes_get()#...get our blendshape shapes that drive our blendshape
bs1.get_baseObjects()#...get the base shapes of the blendshape or the object(s) the blendshape is driving
bs1.get_weight_attrs()#...get the attributes on the bsNode which drive our indices
bs1.bsShapes_get()#...get our shapes

#>>Arg validation =========================================================================
bs1.bsShape_validateShapeArg()#...no target specified, error
bs1.bsShape_validateShapeArg(0)#...more than one entry, error
bs1.bsShape_validateShapeArg(0, .5)#...there we go

#Generating geo...
#Sometimes you wanna extract shapes from a blendShape node. Let's try some of that
bs1.bsShape_createGeoFromIndex(0)#...will create a new piece of geo matching the 1.0 weight
bs1.bsShape_createGeoFromIndex(0,.5)#...will get you the inbetween
bs1.bsShape_createGeoFromIndex(3)#...will get you squat because nothing is there
bs1.bsShape_createGeoFromIndex(0, multiplier = 2.0)#...can also generate factored targets
bs1.bsShape_createGeoFromIndex(0, multiplier = .5)#...

bs1.bsShapes_delete()#...delete all the targets for your blendshape.
#...ah geeze I didn't mean to do that. No worries!
bs1.bsShapes_restore()#...rebuilds the targets and plugs them back in