I led the sub-team that wrote the graphical output, and decided which APIs we would use. The timeframe of this project was four weeks; in retrospect, this was not enough time for a typical user-centered design process, so the focus was primarily on graphics. We decided that the user interface would mimic that of existing graphics applications so users could learn ours easily, following evolutionary (not revolutionary!) design. I sketched the prototypes and, with Jennifer Tsang's help, implemented the user interface with the open-source, cross-platform wxWidgets API and the graphics output in OpenGL. The original report we submitted is below, with screenshots.
- Stand-alone application
- Final project for CS 184 graphics — won first place out of 12 or so teams
- APIs: wxWidgets, OpenGL
- Completed: Fall 2003
- Duration: Four weeks
IK Animator Report
- A user-friendly editing environment, complete with menus, a hotkey-equipped toolbar, and a 3D interface.
- Easily adjust the type of interpolation curve and sampling rate between keyframes.
- Supports opening and saving animation clips, and integrating multiple clips into one complete animation.
Our animator sits atop an inverse kinematics engine, ultimately represented in the form of a four-legged spider. The engine, which performs single-intermediate-joint inverse kinematics, uses techniques we developed ourselves to transform the information about the foot and hip into a more usable coordinate system, so that the possible positions of the intermediate joint can be computed. The possible positions are then compared against the body to decide the most realistic position of the intermediate knee. The engine also allows each appendage to be anchored either to the body (so that it moves with the body) or to the world (so that it stays stationary). To assist with the inverse kinematics mathematics, we developed an overloaded C++ arithmetic library for vector and point computations.
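The library itself isn't reproduced in this report, but a minimal sketch of such an overloaded type might look like the following (the `Vec3` name and members are illustrative, written in modern C++ for brevity; later sketches reuse it):

```cpp
#include <cmath>

// Illustrative 3D vector with overloaded arithmetic, in the spirit of the
// library described above.
struct Vec3 {
    double x, y, z;

    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s)      const { return {x * s, y * s, z * s}; }

    // Dot product, also used later when comparing raycast hits.
    double dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }

    // Cross product, handy for rotation axes and plane normals.
    Vec3 cross(const Vec3& o) const {
        return {y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x};
    }

    double length() const { return std::sqrt(dot(*this)); }
};
```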
In addition to calculating the intermediate knee joints, the skeleton maintains the lengths and placements of the various body parts. For example, physical constraint checks ensure that the feet can only be moved within the maximum and minimum ranges the appendages allow. Additional constraints check the skeleton against the ground plane. The inverse kinematics engine also supports rotating the central body about an arbitrary axis (while taking into account the anchored status of the feet) to allow for twisting motion; however, this feature was never fully exposed in the user interface for real-time control.
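As a hedged illustration of the reach check mentioned above (not our engine's actual code; it reuses the `Vec3` sketch and hypothetical limb names):

```cpp
#include <algorithm>
#include <cmath>

// Clamp a requested foot position so the hip-to-foot distance stays within
// what the two leg segments can physically cover.
Vec3 clampFootTarget(const Vec3& hip, const Vec3& target,
                     double upperLen, double lowerLen) {
    Vec3 toTarget   = target - hip;
    double dist     = toTarget.length();
    double maxReach = upperLen + lowerLen;             // leg fully extended
    double minReach = std::fabs(upperLen - lowerLen);  // leg fully folded
    if (dist < 1e-9) return target;                    // degenerate request
    double clamped  = std::min(std::max(dist, minReach), maxReach);
    return hip + toTarget * (clamped / dist);          // rescale along the ray
}
```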
In addition to the IK engine, we coded several path interpolation and sampling methods to give users of the program a plethora of options for creating more realistic animations. Users are given four interpolation models for creating paths: linear, parabolic, cycloid, and Bézier. The interpolation methods have all been generalized to be based on three points (because that is how animation clips are structured), passed in from the skeletons according to how they have been posed. Once an interpolation path between points has been selected, the animator also provides five different ways to sample the given path: uniformly across the motion path, fast at the start, fast at the end, fast in the middle, and slow in the middle. The user can bind any combination of path and sampling to each of the controllable parts of the skeleton.
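For example, here is a sketch of one path/sampling pairing, a quadratic Bézier through the three points combined with a "fast at start" remap (function names are ours, not the program's):

```cpp
#include <vector>

// Quadratic Bézier: starts at p0, ends at p2, shaped by the control point p1.
Vec3 bezier3(const Vec3& p0, const Vec3& p1, const Vec3& p2, double t) {
    double u = 1.0 - t;
    return p0 * (u * u) + p1 * (2.0 * u * t) + p2 * (t * t);
}

// "Fast at start": remap a uniform t so most of the path is covered early.
double fastAtStart(double t) { return 1.0 - (1.0 - t) * (1.0 - t); }

// Sample n frames along the chosen path with the chosen spacing.
std::vector<Vec3> samplePath(const Vec3& p0, const Vec3& p1, const Vec3& p2, int n) {
    std::vector<Vec3> frames;
    for (int i = 0; i < n; ++i) {
        double t = (n > 1) ? double(i) / (n - 1) : 0.0;
        frames.push_back(bezier3(p0, p1, p2, fastAtStart(t)));
    }
    return frames;
}
```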
To hold the animation information, we wrote data structures and APIs for animation clips so that they could be easily accessed and modified by the user interface code. In addition to the actual frames, each clip stores the associated information necessary to regenerate it, which cuts down on file sizes and memory usage. This associated information also makes it easy for the user to modify an animation (without waiting for long recomputation) and to change the frame count of a given animation. Each clip the user creates is determined by two keyframe skeletons at either end, plus a control skeleton in the middle that relays information about the path to take between the two keyframe skeletons (hence the aforementioned paths each being determined by three points). Multiple clips over the same time period are layered (so that multiple body parts, including the main body, can move simultaneously) to form more realistic motions.
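The actual structures aren't reproduced here, but the clip shape described above might look roughly like this (all names hypothetical):

```cpp
#include <vector>

enum PathType   { LINEAR, PARABOLIC, CYCLOID, BEZIER };
enum SampleType { UNIFORM, FAST_START, FAST_END, FAST_MIDDLE, SLOW_MIDDLE };

struct SkeletonPose { /* joint positions, anchor flags, ... */ };

// A clip stores its generating information, not just its frames, so the
// frames can be cheaply regenerated when the user tweaks a path or the
// frame count.
struct AnimationClip {
    SkeletonPose startKey;   // keyframe at the start of the clip
    SkeletonPose endKey;     // keyframe at the end of the clip
    SkeletonPose control;    // middle pose shaping the path between them
    PathType     path;       // which three-point interpolation model
    SampleType   sampling;   // how frames are spaced along the path
    int          frameCount; // easy to change; frames simply get recomputed

    std::vector<SkeletonPose> frames;  // cached, regenerated on demand
};
```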
Once a user has created an animation clip, it can be saved to a file. Multiple clip files can then be loaded to form a longer animation involving more complex body actions.
— Derek Chan and Danny Krause, Inverse Kinematics Engine
User Interface via the wxWidgets API
ABSTRACT. In order to manipulate our model, we used an OpenGL interface to give us real-time feedback on our mathematical models. The OpenGL layer sits inside a canvas painted by wxWidgets, a cross-platform, open-source application programming interface. We needed a cross-platform windowing library because half of our group uses Mac OS X while the other half uses Windows. Along the way, we learned a new language (C++), the wxWidgets API, the OpenGL API, and how user-friendly interfaces are structured and implemented in both 2D and 3D.
DETAILS. Getting wxWidgets to work was not a trivial task because, while relatively stable, support for the API is spotty (I suppose that is the nature of open source). Although it came packaged with a fair number of sample applications and code, getting it to run on either platform required about twenty hours of piecing together scattered newsgroup posts and scouring wikis so that we could compile the library with makefiles, configure, and make. What flags should you use while compiling? Do you want it in debug mode? If so, should it contain debugging information for GDB or wxDebugContext? It's like peanut butter: would you like crunchy, extra crunchy, smooth, or low fat? Suffice it to say, compiling it was an overwhelming task.
The next overwhelming task was creating a solid user interface for the program. The user interface was to be as intuitive and professional as possible, mimicking the likes of great applications such as Photoshop and Maya. The user should be presented with one main task window in which to manipulate the skeleton, with a toolbar containing camera and model manipulation tools. Alongside this main window, a Timeline Inspector (a miniature window) would let the user go back and forth in time and preview their animation. To the right, a Property Inspector would let the user fine-tune details of the animation, such as the interpolation types between frames.
Our first challenge: coughing up a framework. In previous computer science courses, we worked with clean, fill-in-the-blanks skeleton code (and, for many of our group members, CS 184 was our first upper-division CS course). This time, we had to hot-wire an object-oriented system in which objects talked to each other. It gets complicated with multiple windows: our application contains a window which contains a toolbar, an OpenGL canvas, and two more inspector windows. Our application was inspired by the Model-View-Controller (MVC) user interface style, but, for the most part, our program integrated the Controller and View, which together accessed the skeleton model.
And what happens when the user punches a button or waves their mouse? This is not simple to answer (and, again, relates to the aforementioned rhetorical peanut butter questions). wxWidgets uses an event system that propagates events up from child to parent. To clarify with an example: suppose the user has a slow computer and wants to turn off our real-time visual effects. They click on "Fog." This generates an event in the checkbox, which passes it to the Property Inspector, which passes it to the main window, which then tells the OpenGL canvas to turn fog effects off. Key and mouse events are handled in a similar fashion.
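A hedged sketch of the fog example's wiring, in the classic wxWidgets event-table style (the ID, class names, and SetFog method are illustrative, not our real identifiers):

```cpp
#include "wx/wx.h"
#include "wx/glcanvas.h"

enum { ID_FOG_CHECKBOX = wxID_HIGHEST + 1 };

// Stand-in for our wxGLCanvas subclass; the real one owns the OpenGL state.
class GLCanvas : public wxGLCanvas {
public:
    explicit GLCanvas(wxWindow* parent)
        : wxGLCanvas(parent, wxID_ANY), fog(false) {}
    void SetFog(bool on) { fog = on; Refresh(); }  // repaint with fog toggled
private:
    bool fog;
};

class MainFrame : public wxFrame {
public:
    MainFrame() : wxFrame(NULL, wxID_ANY, wxT("IK Animator")) {
        canvas = new GLCanvas(this);
        // The "Fog" checkbox itself lives inside the Property Inspector (a
        // child panel); its command event bubbles up to this frame.
    }
    void OnToggleFog(wxCommandEvent& event) {
        canvas->SetFog(event.IsChecked());  // tell the GL canvas to toggle fog
    }
private:
    GLCanvas* canvas;
    DECLARE_EVENT_TABLE()
};

BEGIN_EVENT_TABLE(MainFrame, wxFrame)
    EVT_CHECKBOX(ID_FOG_CHECKBOX, MainFrame::OnToggleFog)
END_EVENT_TABLE()
```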
As far as intuitiveness goes, creating user-friendly interfaces is not trivial either. The user can choose from a cornucopia of options in the Property Inspector. They can load an animation, or a sequence of animations, using a modal file dialog box. The user can "scrub" their animation, which changes the current time. And the user has the power to manipulate their skeleton with (1) the mouse, through menu bars, buttons, tick boxes, and other such widgets, and (2) the keyboard, with Photoshop-like hotkeys and menu shortcuts. Arranging all of the widget elements in their windows required learning about Sizers, which keep widgets accurately positioned even when the window is resized (a sketch follows below). This contrasts with most other CS projects we had been accustomed to, which are the complete opposite of user-friendly: a truckload of command-line options, twenty-six keyboard commands to memorize, and maybe several man pages for your reading pleasure.
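Here is that Sizer sketch, as it might appear in the main frame's constructor (the widget variables are hypothetical):

```cpp
// A vertical sizer stacks the toolbar over a horizontal row that holds the
// GL canvas and the Property Inspector.
wxBoxSizer* column = new wxBoxSizer(wxVERTICAL);
wxBoxSizer* row    = new wxBoxSizer(wxHORIZONTAL);

row->Add(canvas, 1, wxEXPAND);             // proportion 1: grows with the window
row->Add(propertyInspector, 0, wxEXPAND);  // proportion 0: keeps its width

column->Add(toolbar, 0, wxEXPAND);         // fixed height, full width
column->Add(row, 1, wxEXPAND);             // the row takes the remaining height

SetSizer(column);  // from now on, resizing re-lays-out everything automatically
```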
— Steven Chan, User Interface+Graphics
Graphics via OpenGL
We used OpenGL to provide real-time rendering for our IK Editor. For 3D object selection, we initially used the OpenGL-provided method of viewing volumes that follow the location of the mouse: objects within the viewing volume are rerendered from a previously created selection buffer and saved to a "hit" buffer. However, we found this method of object selection to be inaccurate, so we decided to code a new selection process. Instead, objects are chosen by casting a ray from the mouse's screen position into the world. From the given perspective view, we used dot products to determine which objects in the scene were intersected. Then, comparing the calculated dot products, the object closest to the viewer is selected. Once selection through raycasting was fully implemented, the skeleton's feet could be translated according to mouse movements.
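A hedged sketch of that picking scheme, reusing the `Vec3` type from earlier (joints are treated as spheres; the function names are ours):

```cpp
#include <GL/glu.h>
#include <cmath>

// Map a mouse position back into world space at depth winZ (0 = near plane,
// 1 = far plane) using the current modelview/projection state.
Vec3 unprojectMouse(int mouseX, int mouseY, double winZ) {
    GLdouble model[16], proj[16];
    GLint view[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);
    GLdouble x, y, z;
    // OpenGL's window origin is bottom-left, so flip the mouse y coordinate.
    gluUnProject(mouseX, view[3] - mouseY, winZ, model, proj, view, &x, &y, &z);
    return {x, y, z};
}

// Distance along the ray (dir must be unit length) to a sphere of radius r
// centered at c, or -1.0 on a miss.
double raySphere(const Vec3& origin, const Vec3& dir, const Vec3& c, double r) {
    Vec3 oc = c - origin;
    double tClosest = oc.dot(dir);                 // projection onto the ray
    if (tClosest < 0.0) return -1.0;               // sphere is behind the viewer
    double d2 = oc.dot(oc) - tClosest * tClosest;  // squared distance to the ray
    if (d2 > r * r) return -1.0;                   // ray misses the sphere
    return tClosest - std::sqrt(r * r - d2);       // nearest intersection
}

// Usage: build the pick ray, test every joint sphere, and keep the smallest
// positive distance (i.e. the object closest to the viewer):
//   Vec3 origin = unprojectMouse(mx, my, 0.0);
//   Vec3 tip    = unprojectMouse(mx, my, 1.0);
//   Vec3 dir    = (tip - origin) * (1.0 / (tip - origin).length());
```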
Upon the selection of an object, we drew a widget to the screen to indicate the three axes and provide easy manipulation of joints within the scene. The raycasting technique was applied again to let the user select the axis to translate along.
Camera movements (zooming, tracking, dollying) were added to let the user freely navigate the scene. We drew the skeleton and widgets using cylinders, spheres, and cones. Drawing cylinders and cones between two points required translating and rotating them into alignment with the z-axis, where gluCylinder could then draw the shapes.
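A sketch of that alignment step (reusing `Vec3`; the quadric setup is omitted and the names are ours):

```cpp
#include <GL/glu.h>
#include <cmath>

const double RAD_TO_DEG = 57.29577951308232;

// Draw a limb segment between two joints. gluCylinder always extrudes along
// +z, so we translate to one endpoint and rotate +z onto the segment first.
void drawLimb(GLUquadric* quad, const Vec3& from, const Vec3& to, double radius) {
    Vec3 dir = to - from;
    double len = dir.length();
    if (len < 1e-9) return;

    Vec3 zAxis = {0.0, 0.0, 1.0};
    double angle = std::acos(dir.z / len) * RAD_TO_DEG;  // degrees for glRotated
    Vec3 axis = zAxis.cross(dir);                        // axis to rotate about
    if (axis.length() < 1e-9) axis = {1.0, 0.0, 0.0};    // dir parallel to z

    glPushMatrix();
    glTranslated(from.x, from.y, from.z);
    glRotated(angle, axis.x, axis.y, axis.z);
    gluCylinder(quad, radius, radius, len, 16, 1);       // base, top, height
    glPopMatrix();
}
```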
For aesthetics, and to make the model more tangible, lighting and fog were used along with a checkerboard plane to create a sense of depth and measurement. With the lighting, shadows and specular reflections off the shapes gave the scene a more realistic feel. OpenGL does not automatically generate real-time shadows, so we implemented this feature manually by creating a transformation matrix that redraws the body flattened onto the ground plane according to the light's location. To draw a soft shadow, we redrew the body several times and blended the passes together.
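The transformation mentioned above is the standard planar-projection shadow matrix; here is a hedged sketch with our own variable names:

```cpp
#include <GL/glu.h>

// Build M = (n.l)I - l n^T, which squashes every vertex onto the plane
// (a, b, c, d) as seen from the homogeneous light position. Column-major,
// as OpenGL expects.
void shadowMatrix(GLdouble m[16], const GLdouble plane[4], const GLdouble light[4]) {
    GLdouble dot = plane[0] * light[0] + plane[1] * light[1]
                 + plane[2] * light[2] + plane[3] * light[3];
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            m[col * 4 + row] = ((row == col) ? dot : 0.0) - light[row] * plane[col];
}

// Usage: flatten the skeleton onto the ground plane y = 0 from a point light,
// then redraw it in a dark, blended color:
//   GLdouble ground[4] = {0.0, 1.0, 0.0, 0.0};
//   GLdouble light[4]  = {lx, ly, lz, 1.0};
//   GLdouble m[16];
//   shadowMatrix(m, ground, light);
//   glPushMatrix(); glMultMatrixd(m); drawSkeletonFlat(); glPopMatrix();
```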
Although rendering was relegated to a mere 10 marks, our work in OpenGL complements the user interface in many ways. For instance, the shadows and 3D shapes give the model depth so that the user can tell where the model is moving, as opposed to a mere wireframe, which lacks depth.
— Jennifer Tsang, User Interface+Graphics