Permission is granted for anyone to copy, use, modify, or distribute this program and accompanying programs and documents for any purpose, provided this copyright notice is retained and prominently displayed, along with a note saying that the original programs are available from our web page.
The programs and documents are distributed without any warranty, express or implied. As the programs were written for research purposes only, they have not been tested to the degree that would be advisable in any important application. All use of these programs is entirely at the user's own risk.
Note: this code is neither as pretty nor as well documented as our CRBM demo, so I would recommend looking at that demo first before diving in here.
The sample data we have included was downloaded from the CMU graphics lab motion capture library:
Several subroutines related to motion playback are adapted from Neil Lawrence's Motion Capture Toolbox:
Several subroutines related to conversion to/from the exponential map representation are provided by Hao Zhang:
(Note: as of August 2008, it appears that Hao has moved and this link is no longer working)
How did you preprocess the data?
The data has been preprocessed in the same way as in our CRBM demo; see the Notes/FAQ there. More details on preprocessing are given in Chapter 3 of my thesis.
For how many epochs should I train? How should I set the learning rates?
To make things run reasonably fast, I train with a fairly high learning rate and for only 200 epochs. Results would likely improve if you slowed all of the learning rates by at least an order of magnitude and trained for correspondingly longer.
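The trade-off above can be sketched as follows. This is only an illustration: the variable names and the baseline learning-rate value are placeholders, not the demo's actual settings (only the 200-epoch figure comes from the text).

```python
# Hypothetical "fast" settings, mirroring the demo's approach:
# a fairly high learning rate and only 200 epochs.
# The learning-rate value itself is a placeholder.
fast = {"epochs": 200, "learning_rate": 1e-3}

# "Careful" settings: slow every learning rate by at least an
# order of magnitude and train for correspondingly longer.
slowdown = 10
careful = {
    "epochs": fast["epochs"] * slowdown,
    "learning_rate": fast["learning_rate"] / slowdown,
}

print(careful["epochs"])  # 2000
```

The same scaling would apply to any other rates in the model (e.g. separate rates for weights and biases): divide each by the slowdown factor and multiply the epoch count by it.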