Archive for the 'Announcements' Category

Thu, Jul 21st, 2016
posted by jjburton 02:07 PM

As I was prepping Morpheus Rig for public dev release I found some pretty awful slowdowns in our code base. As I’m also working on an Intro to Meta course for Rigging Dojo, it seemed like a good time to resolve some of those issues.

So that was most of this week.

Before digging in, a little foundation. Our code base is a metadata system that relies heavily on red9's MetaClass and caching in order to function. So when I dug into the issues I needed to determine whether they were on our end or were optimizations that could happen in red9 itself.

How does one start to analyze where the slowdowns are and fix them? I'm sure there are more intelligent and efficient ways, but being a mostly self-taught coder I decided to lean on my junior high science lesson of the scientific method – namely, devising questions and seeking to answer them with simple, direct tests. So to start, I came up with some questions I wanted to answer.

General:

  • Does scene size have an effect on certain calls?
  • Does cache size have an effect?
  • Are there calls that, when iterated on, change how long the next identical call takes?
  • Are there ways to make failed metaclass nodes fail sooner, with fewer and clearer errors?

Process:

  • Unit tests in our code base made speed checking and catching broken functions much easier than going without
  • A simple setup for iteration tests where I could easily swap what was being called and then check speed differentials between functions against a given scene size, or while adding new objects every round
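
Here's the general shape of that harness – a minimal sketch only (it assumes maya.cmds is available; the function and joint names are illustrative, not the actual test code):

```python
# Minimal iteration-timing sketch (illustrative names, not the actual cgm test code).
import time
import maya.cmds as mc

def time_iterations(func, iterations=1000, new_joint_each_round=True):
    """Time func once per iteration, optionally creating a fresh joint every round."""
    times = []
    for i in range(iterations):
        arg = None
        if new_joint_each_round:
            mc.select(clear=True)
            arg = mc.joint(name='timing_{0}_jnt'.format(i))
        start = time.time()
        func(arg)
        times.append(time.time() - start)
    print('first: {0:.4f}s | last: {1:.4f}s | total: {2:.3f}s'.format(
        times[0], times[-1], sum(times)))
    return times

# Example usage, comparing the call paths discussed below:
# time_iterations(lambda n: r9Meta.MetaClass(n))
# time_iterations(lambda n: cgmMeta.cgmNode(n))
```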

Here’s a sample test call (warning – it’s a bit messy):

Here’s the output…

Issues and Solutions

  • General
    • It doesn’t appear to be the iterating itself that is causing the slowdown but some other process
    • Reloading meta resets the slowdown to base (after the file new/open fix)
  •  cgm
    • cgmNode was much slower than a MetaClass node
      • Short version – I had a list.extend() where I should have had an 'if a not in list: list.append(a)' check (see the sketch after this list)
      • Long version – I tracked down an issue where every time cgmNode was called (a lot), it was microscopically increasing the time of the next call. On a subclass of r9Meta.MetaClass, I was extending the UNMANAGED class list with some attributes in my root subclass's __init__; doing so added duplicate attributes to that list any time my subclass was substantiated after the initial reload of Meta. That in turn added that many extra steps to some of the subfunctions every time they were called. So, long story short, every time my subclass was substantiated after a meta reload it got minusculely slower. When that call happens tens or hundreds of thousands of times, it adds up.
      • I was also curious whether having properties or too many functions would slow down substantiation, and the answer is: not really.
      • I was also concerned that use of a function class I'd been experimenting with was causing slowdown; I don't have a full answer on that one yet.
      • autofill flag – There is a flag in MetaClass for autofilling attrs so auto completion works. Turns out it's a pretty big hit. Turned our autofill off and cgmNode is now considerably faster than MetaClass.
        • 1000 joint test – red9.MetaClass (autofill default) – 2.0699s | cgmNode – .8944s | validateObjArg – 1.5777s
        • 1000 joint test – red9.MetaClass (autofill False) – 1.s | cgmNode – .8944s | validateObjArg – 1.5777s
    • validateObjArg was dog slow
      • Completely rewrote this
      • Decided to go at it a different way and found some nice savings
      • for meta node conversion  — Post rewrite – 1000 node conversion test – red9 – 238.129s | cgm – 8.965s
  • red9
    • Reloading red9 appended another file new/open callback every time. This produced a growing list of errors in the script editor and increased file new/open times.
      • Code change suggested to red9
    • 3 issues in one – 1) a single deleted meta node generated up to 6 errors in an empty scene, and this of course grows the bigger the scene is; 2) error messages were nonspecific, providing no insight into what was actually failing; 3) a corrupted node could break the cache when called
      • Proposed two additional MetaClass attrs to store _LastDagPath and _lastUUID – these are displayed when a node fails so you know what failed
      • Proposed allowing failed nodes to attempt to auto remove themselves from the cache when they fail
      • Proposed some changes that raise an exception immediately rather than continuing to process, so a failed node state is reached as quickly as possible
    • convertMClassType gets slower the denser the scene
      • rewrote cgmMeta.validateObjArg. Will talk to Mark on this one.
    • Hierarchical depth has a direct influence on substantiation speeds
      • Created test where for each iteration a new joint is created and parented to the last so at the end you have a 1000 joint chain
      • Base results- red9.MetaClass – start :.001s | end: .018s | total: 8.837s
      • Oddly enough, if you pass the short names of the child joints on call instead of the .mNode strings (long names), it cuts the end per-call time from .018s to .010s for a total of 5.571s
      • Talking to Mark on this one.
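
To make the extend() issue above concrete, here's a stripped-down illustration of the pattern (the class and attribute names are made up; only the bug/fix pattern mirrors the real code):

```python
# Illustration of the duplicate-extend problem (made-up names, not the actual cgm classes).
class MetaBase(object):
    UNMANAGED = ['mNode', 'mNodeID']   # shared, class-level list

class BadNode(MetaBase):
    def __init__(self):
        # BUG: extend() on the shared class list adds duplicates on every
        # instantiation, so anything that scans the list gets slower and slower.
        MetaBase.UNMANAGED.extend(['_extraAttr1', '_extraAttr2'])

class GoodNode(MetaBase):
    def __init__(self):
        # FIX: only append attributes that aren't already registered.
        for attr in ('_extraAttr1', '_extraAttr2'):
            if attr not in MetaBase.UNMANAGED:
                MetaBase.UNMANAGED.append(attr)

for i in range(5):
    BadNode()
print(len(MetaBase.UNMANAGED))   # grows by 2 for every BadNode() call
```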

Why should you care?

The end result of this pass is that a crazy 5-hour rig build anomaly for Morpheus was pared down to 40 minutes after the cgmNode fixes and 31 minutes after the validateObjArg rewrite. This is in Maya 2011. Newer versions of Maya are more efficient, and it will get better still as we move through more optimizing.

Note: none of my optimizations are in red9's core yet. Mark is on vacation, and most of those fixes wouldn't help anyone but a coder anyway.

j@cgm


Wed, May 4th, 2016
posted by jjburton 09:05 PM

To anyone who's worked with coding blendshape stuff: it can be tedious, especially when you bring in inbetweens. Thankfully, Autodesk is fixing a lot of that with 2016 Extension 2 if you missed that update, but there are still folks using older versions and it doesn't resolve everything. We have to deal with blendshapes a good bit on Morpheus 2, so we wrote a metaclass to handle them.

Initial features of the cgmBlendshape metaclass that you can’t easily do with normal api or mc/cmd calls:

  • Most functions work off of index/weight or shape/selection format
  • Easy alias naming
  • Replacing shapes — change out shapes in place keeping inbetweens and connections intact
  • Extract shapes — extract shapes from index/weight calls and supporting multipliers to the delta difference
  • Shape restoration — replace deleted shapes on the fly. Recreate a shape from delta and base information and plug it back in for further editing
  • Subclassed from cgmNode, so all those functions carry over as well
  • Tested in 2011 and 2016
  • NOTE – this is a wip metaclass and will undergo lots of changes

Before we get into the specifics of the metaclass, here are some general lessons learned about blendshapes while working through this.

  • A blendshape target has several bits of important information
    • Index — this is its index in the blendshape node. Note – not necessarily sequential.
    • Weight — this is the value at which this shape is ‘on’. Usually it is 1.0. Inbetween shapes are between 0 and 1.0.
    • Shape — this is the shape that drives the blendshape channel
    • Dag — the dag node for the shape
    • Alias — the attribute corresponding to its index in the weight list. Typically it is the name of the dag node.
    • Plug — the actual raw attribute of the shape on the node. ‘BSNODE.w[index]’
    • Weight Index — follows a Maya formula of index = wt * 1000 + 5000. So a 1.0 weight is a weight index of 6000 (see the sketch after this list).
  • The way maya stores info
    • Blendshape data is stored in these arrays live, so if you query the data while your base mesh isn't zeroed out, whatever transformation is happening gets baked into the result
    • The caveat to that is that targets whose shape geo has been deleted are 'locked' into their respective data channels at the point they were at when deleted. Their delta information is frozen.
    • BlendshapeNode.inputTarget[0].inputTargetGroup[index].inputTargetItem[weightIndex]
      • inputTarget — this is most often 0.
      • inputTargetGroup — information for a particular shape index
      • inputTargetItem — information for a particular weight index
    • Sub items at that index
      • inputPointsTarget — this is the differential data of the point positions being transformed by a given shape target. It is indexed to the inputComponentsTarget array
      • inputComponentsTarget — these are the components being affected by a given shape
      • inputGeomTarget — this is the geo affecting a particular target shape
  • Replacing blendshapes – you can 1) use a copy-geo function, if the point count matches exactly, to change the shape to what you want, or 2) write a function to do it yourself. There's no great way to replace a shape except to rebuild that whole index or the node itself. We made a function to do that.
  • Once a blendshape node is created with targets, the individual targets are no longer needed and just take up space. Especially when you have the easy ability to extract shapes.
  • Getting a base for calculating delta information – as the blendshapes are stored as deltas off of the base, the best way I could find to get that base was to turn off all the deformers on the base object, query it, and then turn the envelopes back on / reconnect them. I'm sure there are more elegant solutions, but I was unsuccessful in finding one.
    • Once you have that, creating a new mesh from an existing one is as simple as:
      • Taking base data
      • For components that are affected on a given index/weight: add the delta to base
      • Duplicating the base and calling xform(t=vPos, absolute=True) on each of the verts will give you a duplicate shape
  • Aliasing weight attributes – mc.aliasAttr('NEWNAME', 'BSNODE.w[index]')
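
To tie the weight-index math and those attribute paths together, here's a small sketch (it assumes maya.cmds and an existing blendShape node; the node name, index, and alias are illustrative):

```python
# Sketch: weight-index math and target data paths (illustrative node/index/alias names).
import maya.cmds as mc

def weight_to_weight_index(weight):
    """Maya stores target items at index = weight * 1000 + 5000 (1.0 -> 6000, 0.5 -> 5500)."""
    return int(round(weight * 1000 + 5000))

bs_node = 'BSNODE'   # assumption: an existing blendShape node
index = 0            # target (inputTargetGroup) index
weight = 1.0         # full shape; inbetweens sit between 0 and 1.0

item_plug = '{0}.inputTarget[0].inputTargetGroup[{1}].inputTargetItem[{2}]'.format(
    bs_node, index, weight_to_weight_index(weight))

# Delta point data and the components it maps to:
deltas = mc.getAttr(item_plug + '.inputPointsTarget')
comps = mc.getAttr(item_plug + '.inputComponentsTarget')

# Alias the weight attribute so the channel has a readable name:
mc.aliasAttr('smile', '{0}.w[{1}]'.format(bs_node, index))
```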

Here’s a dummy file I used for testing:

https://www.dropbox.com/s/k4i8oo8qyiv3fd6/cgmBlendshape_test.mb?dl=0

Here’s some code to play with the first iteration. You’ll need to grab the MorpheusDev branch on bitbucket if you wanna play with it till I push it to the main branch.

Fri, Apr 22nd, 2016
posted by jjburton 12:04 PM


proximeshwrap

So, I finally wrapped up my work for Morpheus 2 in regards to the wrap setups. As you can see from the last two trips down the rabbit trail (Step 1, Step 2), this wasn't exactly a simple process.

The point of all of this is to be able to bake blendshapes reliably to nonconforming geo while affecting only the regions we want, without having to go in and tweak things by hand. This will prove more and more useful as customization options expand. Why bother with this? Wrap deformers are exceedingly slow. Being able to replace them with skinning data and copy blendshapes between meshes will make your animations play faster and feel more interactive. The final solution was to create proximity geo that is localized to the area of the nonconforming target mesh I want to affect. The proximesh is wrapped to the base driver, and the target is wrapped to the proximesh.

target --[wraps to]-->> proximesh --[wraps to]-->> base

Here's a general breakdown of the baking function (a rough sketch of the core loop follows the list):

  1. Given a source mesh that has the blendshapes and a nonconforming target mesh we want them on…
  2. Go through all blendshape channels on the source mesh and…
    1. Get their connections/values
    2. Break all connections/zero values so we have clean channels to push our specific shapes to our target mesh
  3. Generate a proximesh of the source with the area we want influencing our nonconforming mesh
  4. Generate a duplicate target mesh so we’re not messing with that mesh
  5. Wrap the proximesh to the source mesh
  6. Wrap the duplicate target mesh to the base mesh
  7. Go through all the blendshape channels on the source mesh and…
    1. Turn a channel on
    2. Duplicate our wrapped target mesh to create a new mesh with the blendshape data on it pushed through by the wrap
    3. If we're culling no-change shapes – check each generated shape against the target mesh to figure out which ones aren't moving any verts and delete those offenders
  8. Go through all the original blendshape channels again and rewire them as they were before our function
  9. Delete the wraps and temporary geo
  10. If desired, create a new blendshape node with our final list of baked targets on our nonconforming base mesh
  11. If desired wire the new blendshape node to match the original one we baked from so the channels follow one another.

Easy peasy:)
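
Here's a rough sketch of that core loop – not the actual Morpheus code; it assumes maya.cmds, that the source channels have already had their connections broken (step 2), and that the duplicate target is already wrapped through the proximesh:

```python
# Sketch of the bake loop (illustrative names/tolerance; assumes connections are already broken).
import maya.cmds as mc

def bake_targets(source_bs, wrapped_target, cull_no_change=True, tolerance=0.0001):
    baked = []
    aliases = mc.listAttr(source_bs + '.w', multi=True) or []
    for alias in aliases:
        plug = '{0}.{1}'.format(source_bs, alias)
        mc.setAttr(plug, 1.0)                                  # turn the channel on
        shape = mc.duplicate(wrapped_target, name=alias + '_baked')[0]
        mc.setAttr(plug, 0.0)                                  # and back off
        if cull_no_change and is_unchanged(shape, wrapped_target, tolerance):
            mc.delete(shape)                                   # cull shapes that move nothing
            continue
        baked.append(shape)
    return baked

def is_unchanged(mesh_a, mesh_b, tolerance):
    """Compare point positions in object space within a tolerance."""
    pts_a = mc.xform(mesh_a + '.vtx[*]', q=True, t=True, os=True)
    pts_b = mc.xform(mesh_b + '.vtx[*]', q=True, t=True, os=True)
    return all(abs(a - b) <= tolerance for a, b in zip(pts_a, pts_b))
```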

Functions created while working through it:

  •  cgm.lib.deformers
    • proximityWrapObject — This was the solution in the end to getting rid of movement in the mesh in areas I didn’t want affected.
    • influenceWrapObject — See step one above. Dead end but might prove useful in the future
    • bakeBlendShapeNodesToTargetObject — Greatly expanded this during this little journey
      • Added wrapMethod — influence wrap and proximity wrap and associated flags
      • Added cullNoChangeGeo — removes baked targets that don’t move the base mesh within the given tolerance
  •  cgm.core.lib.geo_Utils
    • is_equivalent — Function comparing the points of two pieces of geometry to see if their components match in object space. Useful for culling out empty blendshape targets that have been baked. Supports a tolerance in checking as well
    • get_proximityGeo — In-depth function for returning geo within range of a source/target object setup. Searches by bounding box and raycasting to find geo within the source. Can return objects, faces, edges, verts, or proximity geo, which is new geo built from the targets that corresponds to the search return

Lessons Learned for wraps in general

  • The maya call to create a wrap is mc.CreateWrap (in 2011 at least). I hope later versions have made it easier.
  • The object you wrap your target to gets two attributes (dropoff and smoothness) that dictate how the wrap on your target is affected. No idea why it took me this long in Maya to notice that in the docs.
  • Simply using Maya wrapDeformer to wrap an object to another when the object to be wrapped doesn’t conform to the target geo is a bad idea. You’ll get movement in your wrap geo where you don’t want it.

Long story short. The wrap problem is resolved for Morpheus 2.0.

For now. 🙂

Sat, Feb 13th, 2016
posted by jjburton 08:02 PM

Released a build of Morpheus 2 this week and immediately ran into some issues with the marking menu and hot keys. I’d been using zooToolbox’s setup for years for hot keys but it didn’t work with 2016 so I dug in.

Maya 2016 has a pretty neat new editor, but it's still probably more steps than most of our users could reliably follow, so I wanted to get the push-button setup back.

There are a few things to remember when working with hot keys, and in this order (a minimal sketch of the full sequence follows the lessons list below)…

  1. runTimeCommand — This is the code that gets run. It can be python or mel.
  2. nameCommand — This is required for a hot key to be set up properly.
  3. hotkeySet — This is new with 2016; you need to switch to a non-default set to be able to add a new hot key, because the default set is unchangeable.
  4. savePrefs — After setting up your hotkey, you must save the prefs or the newly created hotkeys go away (not sure if this is new to 2016 or not).

Lessons learned:

  • hotkeySets — were added in 2016. Any hotkey work you do in 2016 and later needs to account for them. I ended up having my stuff use the existing set if it wasn't the default, and create a new one if the default is the current one
  • hotkey -shiftModifier flag — this was added in 2016
  • Pushing dicts into mc/cmds calls — In general, something like mc.command(**_d) works with _d being your dict. However on mc.hotkey I found that the keyShortcut flag needed to be in the dict and at the start of the call to work: mc.hotkey(_k, **_d).
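
Here's a minimal sketch of that full sequence in one place (it assumes maya.cmds; the command name, hotkey set name, and key are made up for illustration):

```python
# Sketch: runTimeCommand -> nameCommand -> hotkeySet -> hotkey -> savePrefs
# (illustrative names; guarded so it can be re-run without erroring).
import maya.cmds as mc

cmd_name = 'cgmExampleCommand'   # made-up runtime command name

# 1. runTimeCommand - the code that actually runs (python or mel)
if not mc.runTimeCommand(cmd_name, q=True, exists=True):
    mc.runTimeCommand(cmd_name,
                      annotation='Example hotkey command',
                      command='print("hello from a hotkey")',
                      commandLanguage='python')

# 2. nameCommand - required for the hotkey to bind properly
name_cmd = mc.nameCommand(cmd_name + 'NameCommand',
                          annotation='Example hotkey command',
                          command=cmd_name)

# 3. hotkeySet - 2016+: the default set is locked, so switch to an editable one
if mc.hotkeySet(q=True, current=True) == 'Maya_Default':
    if mc.hotkeySet('cgmHotkeys', exists=True):
        mc.hotkeySet('cgmHotkeys', edit=True, current=True)
    else:
        mc.hotkeySet('cgmHotkeys', current=True)

# 4. bind the key, then save prefs so the hotkey survives a restart
mc.hotkey(keyShortcut='h', ctrlModifier=True, name=name_cmd)
mc.savePrefs(hotkeys=True)
```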

I ended up writing a handler and gui to set stuff up. I’ll swing back and talk about it another time if there’s interest.

Back to Morpheus…


Sat, Feb 6th, 2016
posted by jjburton 10:02 PM

As I've been closing in on finishing Morpheus 2, I found myself in need of a distributable skin data system to be able to apply skinning information to Morphy meshes after they'd been customized and no longer matched up with the base mesh. Not finding a good way of doing it natively in Maya, and not finding any open source options, writing our own was the only way forward.

Thanks to Alex Widener and Chad Vernon for some tech help along the way.

Before delving in, here are some lessons learned along the way.

  • mc.setAttr — found this to be an unreliable method of setting weights via the 'head_geo_skinNode.weightList[0].weights[0]' call convention. It just didn't seem to set properly via anything but the api.
  • mc.skinPercent — the call is hopelessly slow and should never be used for intensive work. A query loop went from 78 seconds to 1.3 simply by using an api weights data call, even with having to re-parse the weights data into a usable format.
  • weights — speaking of which, this was an obtuse concept to me. This is in regards to the doubleArray list used with an MFnSkinCluster. In short, the easiest way to get to a spot in this data set is as follows:
    • weights.set(value, vertIdx*numInfluences+jointIdx)
    • weights — doubleArray list instance
    • value is the given value you want
    • vertex index * the number of influences + the joint index = the index in the array
  • Normalizing skin data — You usually want your skin values to add up to 1.0, so here’s a chunk to help
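
A minimal sketch of that kind of normalization – not the original chunk, and it assumes the flat, vert-major weights layout described above:

```python
# Sketch: normalize a flat skin weights list so each vert's influences sum to 1.0.
# Layout assumption (as above): index = vertIdx * numInfluences + jointIdx.
def normalize_weights(weights, num_influences):
    normalized = list(weights)
    for vert_idx in range(len(weights) // num_influences):
        start = vert_idx * num_influences
        row = normalized[start:start + num_influences]
        total = sum(row)
        if total > 0.0:
            normalized[start:start + num_influences] = [w / total for w in row]
    return normalized

# e.g. two verts, three influences each:
# normalize_weights([0.5, 0.25, 0.25, 0.2, 0.2, 0.0], 3)
# -> [0.5, 0.25, 0.25, 0.5, 0.5, 0.0]
```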

The initial list of requirements for the functions was as follows:

  • Readable data format — decided on configobj having used it with some red9 stuff and finding it easy to use.
  • Export/import data sets
  • Work completely from the data file for reference (no source skin necessary)
  • Work with different vertex counts if similar shape
  • Used indexed data sets for easy remapping of influences

With that being said, here's the demo file link; you'll need the latest cgm package to follow along. Open up a python tab in the script editor and try these things one line at a time.

This is a first pass on this thing till Morphy 2 is done.

Cheers!
j@cgm

Mon, Mar 2nd, 2015
posted by jjburton 01:03 PM

So,  a chat with a good buddy (Scott Englert) on the topic brought up one more item I’d neglected to rule out in the previous bit of research.

Flush prefs.

Dur…

So I did, and the issue vanished, which led me to delve into what script or preference was causing problems. I did this by re-adding files and scripts to figure out what it was. As it happens, it wasn't a specific file but an import that did it. In the end, I found 2 issues at play – one of which is solved but broke my unit tests, so I'm waiting on Red9 to figure out how to resolve it. In this case it was registering 'transform' as a nodeType in the r9Meta stuff in one of my __init__ files.

Hopefully finding this will make the red9 stuff, which we love, all the better once it's resolved, and keep others from bounding into the time-sucking vortex that resulted.

Lesson learned – if something is acting weird, clear prefs and see if it fixes it and if it does, start adding stuff back till you find the culprit.


Sun, Feb 15th, 2015
posted by jjburton 01:02 PM

While doing some optimization on Morpheus 2 and incorporating some of Red9‘s latest stuff I noticed an odd bottleneck in the code.  So I decided to dig in to it.

For those short of time, the short of it:

  • Maya's duplicate command gets slower as scene complexity increases – regardless of whether you call it manually or do it through the interface
  • Maya's duplicate command (and perhaps others) gets slower the longer you've been doing stuff in Maya

I noticed that a relatively simple step in one of my joint chain functions was oddly slow and delved into it. In an empty or light scene it was pretty instantaneous, but in a regular scene it got really slow. I dug in and it came down to duplicate itself being slow. That was my theory at least, so I wrote a series of tests to verify it. The first of those was a test that, given a number of times to iterate and a number of child joints, will 1) create a joint chain of y joints and 2) duplicate that root joint, iterating the provided number of times.
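
Something along these lines – a rough sketch of that kind of test (assumes maya.cmds; counts and names are mine, not the original test code):

```python
# Sketch: time mc.duplicate on a joint chain as the scene fills up (illustrative).
import time
import maya.cmds as mc

def duplicate_timing_test(iterations=100, chain_length=10):
    mc.select(clear=True)
    root = mc.joint(name='dupTest_root_jnt')
    for i in range(chain_length - 1):
        # Each new joint parents under the previously created one, building a chain.
        mc.joint(name='dupTest_{0}_jnt'.format(i), position=(0, i + 1, 0))
    times = []
    for i in range(iterations):
        start = time.time()
        mc.duplicate(root, renameChildren=True)
        times.append(time.time() - start)
    print('first: {0:.4f}s | last: {1:.4f}s | total: {2:.3f}s'.format(
        times[0], times[-1], sum(times)))
    return times

# duplicate_timing_test(iterations=100, chain_length=10)
```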

The results showed a pretty linear increase in time as the scene gained more objects. The more objects, the longer things took. Interesting, but not enough to go on.

For the second series of tests I wrote my own simple joint duplicate function using mc.joint, matching positioning, rotateOrder, etc. I also checked some other items to eliminate them as possible hindrances, or see if they affected speeds:

  • Undo – No difference whether it is on or off
  • History – No history on joints
  • Connections — the only connection on any tested joint is inverseScale
  • Flags — Tried all combinations on the duplicate command I could think of to no avail
My rewrite runs at the same speed regardless of complexity; mc.duplicate gets progressively slower as it goes on. Here are some results:

Breakpoint is the iteration at which my rewrite is faster than mc.duplicate for that run. Method 1 is mc.duplicate and Method 2 is my own.

How about some code you can play with yourself with a simple locator duplication?

Here are some results of this standalone for me at least:

Note – the 2015 run was on a fresh open of the software, and from my experience doing this testing that would get slower over time.

If you run the test you'll see the slowdown for yourself. Now, what do we do with this? Working through this I created my own duplicators for curves, joints and locators. For now, I'm only going to use the joint one for Morpheus building until I can do some more testing and maybe get a better handle on it, but it's certainly an oddity.

Odder yet, the longer you do stuff in Maya, the slower duplicate gets. I tested this after noticing that, after being away a bit, Windows had rebooted and suddenly duplicate was posting much better results. After a while, that slowed down again. So I rebooted myself and, yup, it's faster after a reboot. I have no idea on this one other than maybe a memory leak or... I dunno, I'm a hack at this stuff and that's the best I've got:)

If you’re interested, let me know what you find – different conclusions?

For now, Bokser told me I have to move on:)
j@cgm

Thu, Dec 26th, 2013
posted by jjburton 03:12 PM

Introduction
We have partnered with Rigging Dojo to create a special project, "Rigging for Morpheus", that aims to cover learning how to integrate the red9 tools and how to work with an existing open source library and set standards, so that students can build tools, expand on/improve, and contribute to the open source CG Monks tool kit as well as the Morpheus rig project.

This isn't a class to learn to use Morpheus – that will be handled via some vids at release. This is for those TDs who wanna join in on making as great a shared tool set as possible.

Class is kicking off next month. Registration can be found here:
http://www.riggingdojo.com/home/registration/

All registrations must be in by 31 December, 2013

Studio Portion
The specifics of what the studio session will cover will depend on what the 'students' want to do. If everyone wants to work on customization stuff, then we'll work on that together; if one student has a great idea for a tool and just needs help learning how to build it and integrate it into an existing 'pipeline', we'd support that student on that and maybe other folks on something else.

Potential Paths

  1. Customization — Work through building the guts and tech for finalizing the Morpheus 2 customization setup. This would tackle such exciting items as blendshape libraries, hair, and prop/clothing implementation. Definitely a beard. Morpheus needs a beard.
  2. New module — Build a new module for the system like a wing, long neck head, quad leg or perhaps some more mechanical setups. This would entail designing and implementing template objects and learning to write and push that through to a final rig item that ties in to the rest of the modular rigger.
  3. Facial system — Work through learning the new methods and tools of the Morpheus 2 facial rig. Polishing, adding features and learning how to build on it.
  4. New/Existing Tool Pipeline — Learn the bits of designing and writing a new tool for Maya – from gui design and user friendliness, to breaking the tool down into buildable bits, debugging, and more. If students have tool designs we can start from scratch, or we can grab from the ever growing pile of internal ideas from CG Monks.

Cost
Class (8 week intensive apprenticeship)
$1226 – base cost
There are discounts available for Morpheus 2 backers. See the development forum post for more info.

Wed, Jan 2nd, 2013
posted by jjburton 02:01 PM

Hey Morphean stylists! Ever walked out of the barbershop not pleased with your cut and were too shy to admit it? WELL THEN! Hide no more, dear friends! For tonight we dine in HELL! (sorry, wrong quote) For tonight we take control of the scissors! *cue explosions* That's right! Time to put the pens and pencils to the paper (or fancy tablets) and style our Morpheus up with a fresh coiffure! That'll teach that barber! *cue more explosions*

Morphy2HairTemplate_male Morphy2HairTemplate_female

Notes

  • Try to draw the hair in its neutral position (not in movement).
  • If your hair style is somewhat complex, please provide some general notes. You can scribble those on your template (don't overlap the drawing too much).
  • The styles will be split into 3 categories: Short (above shoulders), Medium (shoulder level) & Long (below shoulders) so please try your best to be clear on the length.
  • You don’t have to submit sparkling clean designs. Clear sketches are more than welcome.
  • No submission limits.

Template files

Requirements

  • Submit in PNG or JPEG.
  • Naming convention: YourName_HairCustomize_v00.ext (the versioning is not for WIP numbers but for your designs).
  • Submission deadline – January 24th at midnight.

Submissions
To submit your work, you can send it to my email: alexei.bresker (->at<-) gmail.com… sorry for the spam-bot inconvenience.
(subject: Morpheus Rig 2.0 – Hair Styles)

-Alexei (Morpheus 2.0 Design Producer)

Mon, Nov 12th, 2012
posted by jjburton 11:11 AM

Thank you, animation community!