This is a summary of what took place at the festival and how I feel it fits in with what we are trying to do at the CVF. I will discuss each session in turn, as well as the Technology Labs and vendor displays. Rather than discussing the films themselves, I will focus on the technologies and trends that emerged. All the comments herein are my own opinions of the festival. This article is intended for my colleagues in Advanced Research Computing but may interest others as well.
The introduction to the festival was unfortunately a little glib. It was presented by the MC and was little more than housekeeping and an introduction to the speakers. I would have liked a 15-minute presentation by a keynote speaker to place the rest of the festival in context. I'm not sure everyone would agree with this, as I'm sure most people wanted to just jump right into the nitty-gritty detail. I found this a little abrupt, but it probably is just a matter of choice. One thing I did like was the snappy little animation that got everyone's attention and told them to sit down and shut up without actually saying it. We have already discussed this as a useful device for OzViz III, and it seemed to work well for AEAF.
Definitely the most anticipated of the sessions (as the film was only days away from release), this was an extraordinary look behind the development of several key scenes in the film. This was the first session to point out just how much Maya is used in the high-end of the graphics industry. In fact, Houdini was not mentioned at all, though I'm sure it was used extensively as well. I mention Houdini because it was billed as the main piece of effects software on the first film. As the various VFX shots are handled by different production houses, it may be that we were simply seeing a Maya house in action. Other software such as SoftImage and RenderMan were mentioned, but only in passing.
Scanning played a huge part in the film: everything from textures to humans to locations. The human scanning was absolutely extraordinary. An extremely high-resolution scan of the actor is taken and then a plaster cast is made of the head. An even higher resolution (and much slower) scan, at 25 microns, is then taken of the plaster cast, and this is used to derive a bump map for the surface. High-resolution images of the head from numerous angles are then painstakingly stitched together to generate a texture map. When all these are combined, the result is so accurate that a camera can perform an extreme close-up without revealing that it is a CG character. Additional touches, such as sub-surface scattering, which allows for the slight translucent glow seen through thin tissue such as ears, nostrils and cheeks, provide an incredible sense of reality. This is combined with amazing hair simulation software that produces strand-perfect, controllable (and dynamically accurate) hair. The result is the scene with a hundred Agent Smiths, all CG.
Motion capture played a very important part in controlling the CG characters. The actors themselves performed the fight scenes and were tracked using very accurate motion capture rigs. Oddly enough, it was the facial animation that was the most difficult. Five or so cameras filmed the actors performing a vast range of facial expressions, according to what the directors felt the character might be doing or thinking at any given time during the film. This was analysed and motion-tracked to produce morph targets for the CG character. An amazing 5 terabytes of data was produced each day during this process, which took several weeks.
A process called LIDAR scanning (LIght Detection And Ranging) was used to scan the locations where a virtual camera would be needed. The result was a virtual location that was extremely accurate, using textures and dimensions taken directly from the real-world site. This allows for the creation of a virtual camera that can fly about in its virtual world and perform moves that no real camera could, yet still produce footage that is otherwise undetectably different.
What I came away with from this session (apart from amazement at the extent to which they went for the perfect shot) was just how important accuracy was to the production house. These people really push the technology to its absolute limits and when it breaks, they make new technology to achieve their goal. In many ways, they are miles ahead of any academic research because they are driven by hard deadlines and fierce competition instead of curiosity.
Practically speaking, Maya is definitely the most used piece of high-end graphics software. I would also like to take a closer look at LIDAR scanning, as I can see numerous research applications, particularly in areas such as real-time visualization of architectural sites and GIS. Motion capture of both the body and the face is another area that I think would prove valuable for research, particularly in the virtualization of performance. Such performances could be evaluated for physiology as well as dance.
This session was a little light on content, but we did get to see a fantastic collection of VFX vignettes produced by ILM since it began. No other production house has had such a significant impact on the special FX industry. It seems that just about every Hollywood blockbuster has had an ILM thumb in its pie at some point along its production line. While the nineties saw the birth of many new VFX houses, ILM still stands strong. After our little trip along VFX memory lane, Cliff discussed how ILM has developed over the years. To begin with, they pretty much had to write all their own software to do what they wanted. Then, they started striking up partnerships with software companies, often because their own staff left to join the software houses. Now, while software production is still an important process at ILM, they always try to purchase off-the-shelf software, and once again, Maya is their tool of choice. They use pretty much all the major applications, from 3DSMAX and Lightwave through to Maya, Houdini and SoftImage XSI. They do insist on using their own file format, and accordingly they have either written, or had the software company write, plugins to support their proprietary format.
Because ILM works with production houses all over the world, they typically require their own VFX Supervisor on the set of each film they are involved in. This is where Chris does most of his work. He has built the underlying network infrastructure that allows remote sites to tap into the ILM network and share resources. Typically, between major cities, they use fibre optic networks. All the VFX supervisors have high-end PC laptops with most of the previsualization tools used by ILM. An interesting development over the last few years is the use of game engines such as Quake or Unreal in the previsualization of a scene. Apparently Steven Spielberg wanted to see how the action he was filming would fit in with the VFX, so ILM hooked up Unreal to a motion capture suit run from a laptop, which could show, on location and in real time, what the performance would look like.
Apart from the networking and technology challenges that ILM (and everyone else) face, Chris talked a little about how people get into the industry. It seems that there are few schools around that really prepare students for the kind of work they might do at ILM, beyond the basic 3D modelling and animation. While these things are very important, VFX uses so many different tools in concert that schools need to teach these things or risk becoming irrelevant.
ILM uses almost every tool available, but they still have their favourites. Maya still stands out as the high-end 3D graphics package, especially for cloth and hair simulation. Shake, the software recently purchased by Apple, is now the favoured compositing application, though Combustion, Flame and Inferno (Discreet) are still used.
This session was split in half and was effectively two sessions. Because both presenters were software vendors, it is a little difficult to separate the content from the pitch, but here goes.
David showed how the Vicon system is used in many different ways, all of which we had seen before: computer games, Hollywood films, physio analysis and the like. A live demonstration was held as part of the Technology Labs, in which a hired dancer performed live while, on the plasma screens behind her, we could see the bone animation from the tracked data, as well as two low-res CG characters dancing along. The system performance was such that the lag between her movement and the characters' corresponding movement was barely noticeable. The system was far from portable, yet was not terribly intrusive. A space using such an (optical) motion capture rig could easily double as a studio space, as the motion capture sensors don't interfere with standard lights. The results are extremely good, and every nuance of the performance was captured accurately. The process of then using the data was relatively trivial, especially importing into applications like Character Studio for 3DSMAX. There wasn't much else in the presentation other than who is using it and how happy they are, so I'll just move on.
Ed Bolton from 2D3 showed the latest instalment of Boujou. This software is absolutely incredible. Video or film footage is analysed by the software to produce a virtual camera. While this may sound fairly straightforward, it isn't. Video footage is recorded entirely in 2D. Using this footage, Boujou 2.1 is able to track thousands of points of detail (if they exist) to produce, automatically, a virtual camera with the same lens focal length and the same path through 3D space as the original. From my video footage, I can then easily include a 3D model in the scene and have it look like it was really there. The software exports directly to almost all the major applications, such as 3DSMAX, Maya, Lightwave and so on. For each of the demonstrations Boujou worked perfectly, though Ed explained that this only happens about 90-95% of the time. Sometimes tweaking may be necessary, as detail may be lost or tracked inappropriately due to scratches on film, film grain, dropped frames or excessive foreground motion. Camera shake (e.g. hand-held) is not a problem.
Both these tools could be well used in research in the CVF. I have already explained the main benefits of motion capture, though I am not convinced such a system as offered by Vicon would suit us. I still think a Gypsy (like they have at RMIT) would be better. Camera matching software, particularly Boujou 2.1, seems like an extremely useful application. Architecture, urban planning, archaeology and GIS are the most obvious uses for camera matching, and I think we could make very good use of it.
The world of commercial animation is a little different to what we had been shown so far. With commercials, rapid turnaround and a "near-enough-is-good-enough" approach is almost always adopted. Glenn took us through several commercials that Iloura have produced, focusing on the Monaro Game Over commercial and the Ants peanut butter commercial. Iloura use 3DSMAX mostly, but still use whatever they need to get the job done. As a small company, the employees tend to be multi-skilled. While places like ILM tend to have specialists, Iloura will use staff in particular areas but will often require them to cross over into other areas of expertise.
Aside from the tips and tricks Glenn and his team use to "fake" the effects seen in Hollywood blockbusters, he gave some very useful advice about how to get into the industry. Essentially, the thing that will get you hired is your showreel. Rather than spaceships exploding or women in leather fighting gear, a showreel should contain real things. The idea is that if you can make it look real, then you can make it look any way you want.
This session didn't really focus on the technologies used, but rather promoted an overall approach to VFX and 3D animation. I guess not a whole lot is directly relevant to the CVF, except a few of the suggestions for achieving certain effects without going to all the trouble of a big production house.
I have to say that this session really didn't grab me. The standard tools were used in pretty much the standard way. The process of taking a commercially viable live action group and converting them to 3D, in such a way as to not frighten the target audience, was mildly interesting. Simon discussed how the various characters were developed and how the actual Wiggles contributed to the project.
In some ways, this session seemed a little out of place, as Ned Kelly is a period drama, not a VFX film. However, VFX influences most feature films these days, just not in ways most people might expect. So why would Ned Kelly need VFX? The simple fact is that the world Ned Kelly lived in is no longer, and it would be far too expensive to try and recreate it. So filmmakers find locations that come close to what they need, and then "fix it in post".
While most of the sessions touched on the compositing/rotoscoping technique, this session dealt with it in more detail, so I've left discussion about it until now. The process of rotoscoping is the creation of an animated alpha channel which is used to "lift" certain elements from a scene. For example, in "Ned Kelly", a street scene was shot, with people in period costume, dirt covering the roads, horses and carriages and so on. The camera swoops down from on high to close in on a wanted poster of Ned Kelly, posted on a billboard on the side of a cart. The scene looked as though it came straight from the eighteen hundreds, except for the traffic lights. The traffic lights had to be removed and the scenery that was occluded by them replaced. This technique was used numerous times throughout the film. A train platform was found for one scene, but an entire station had to be built around it. Again, rather than actually build the station, VFX came to the rescue. In another scene, a policeman is shot by Ned. Unfortunately, the small explosive attached to the actor's eye for the effect didn't produce quite the look they were after. So they replaced the actor's eye in post-production and created the effect again, to meet the filmmaker's expectations.
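The animated alpha channel described above is, at each frame, just a per-pixel weight that decides how much of one image survives against another. A minimal sketch of that compositing step, assuming a Python/NumPy environment (the tiny frames and matte values here are invented for illustration, not from the film's pipeline):

```python
import numpy as np

def composite_over(foreground, background, alpha):
    """Standard 'over' composite: the matte (alpha) decides, per pixel,
    how much of the foreground shows against the background.
    All arrays are float images in [0, 1]; alpha has shape (H, W, 1)."""
    return alpha * foreground + (1.0 - alpha) * background

# Toy 2x2 RGB frames: a clean background plate and a foreground element.
bg = np.zeros((2, 2, 3))            # black background plate
fg = np.ones((2, 2, 3))             # white foreground element
matte = np.array([[[1.0], [0.0]],
                  [[0.5], [0.0]]])  # hand-drawn roto matte for this frame

frame = composite_over(fg, bg, matte)
# Pixels where the matte is 1 keep the foreground; 0.5 gives a soft edge.
```

A rotoscoper effectively paints a new `matte` for every frame of the shot, which is why the hand-done parts take so long.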
We have, over the years, seen many "making of" specials and have seen the familiar blue or green screen that allows an actor to be placed in literally any environment imaginable. This method is still widely used, but in certain circumstances, rotoscoping achieves the effect even better. This is especially true if mistakes creep into the footage that are not spotted on the day.
Another key aspect of VFX in movies, including Ned Kelly, is the idea of set extension. Rather than build a small town, Complete Post used an existing town and simply replaced the street's end with open countryside. This is done using camera matching and 3D sets to allow the digital footage to be integrated into the real footage. Foreground elements like people and horses moving about are rotoscoped back into the scene later. Another scene in Ned Kelly required a troop of policemen to ride into a town under Ned's watchful eye. The actual town did not exist; it was constructed in 3D and composited over the scene, complete with people milling about. With careful lighting, the result is seamless.
Compositing, rotoscoping, set extension and matte painting (CG-generated backgrounds) have become mainstays of the VFX industry. Buildings can be added and removed, raised and lowered to meet the filmmaker's needs. While these things probably don't have an immediate need in the CVF, they still demonstrate just how the industry is changing and becoming more and more reliant on technology, and that shift is itself a point of research.
While this session was pretty much just a sales pitch, I must say that it was pretty effective. Pinnacle Edition is clearly a very powerful and apparently user-friendly non-linear digital video editing solution. But what makes this solution a little more than its competitors is the fact that it can create and burn DVDs right inside the application. No jumping out to some other product. Everything you need is right there, and can be saved as an individual project. I should point out that this software has totally automatic saving, which I am not entirely sure about. It seemed, from the presenter's description, to keep track of all changes that occur during the project's life, any of which can be undone, or more accurately, countered, as the undoing can also be undone.
From an editing point of view, I can't remember any stand-out features, though it did seem to match up with its competitors quite well. However, a few interface differences might prove interesting. Probably the most interesting aspect of Pinnacle Edition, other than its ability to be an end-to-end solution, is the real-time effects. This software is the first I have seen to provide two different sources for FX and transitions: the CPU or the GPU. For instance, a fade might be provided by either. This is most useful when you start layering transitions and effects together. In order to achieve the largest number of real-time effects, you can choose whether a certain effect will be provided by the CPU or the GPU. An example might be a colour correction over a clip, plus a lens effect, plus a dissolve. The colour correction could be handled by the CPU while the GPU takes on the lens effect and the dissolve. This all happens in real time (depending on your CPU and GPU power, and available RAM; most new systems can handle several real-time FX using this method). While it may not seem a big deal if you only do basic editing with a few little fades here and there, even that can get a little dull while waiting for a scene to render. Also, try adding text and see how long it takes.
I'm not sure how much immediate benefit we would get from having the software in the CVF as we don't do a lot of video editing. Oddly enough, when I was preparing material for OzViz 02 and Graphite, I could have made great use of Pinnacle Edition and I would have saved considerable editing time. I think that if we were considering a new editing package for the PC, Pinnacle Edition would be a far better choice than Premiere. I think Final Cut Pro is still pretty good for the Mac.
This session focused mostly on the development of the Gollum character, which was entirely CG in the final film. While most of the technology used had already been discussed in the previous sessions, this team did handle a few things a little differently. Gollum was actually performed by a live actor, and this actor was later removed from every shot digitally. This required an extraordinary amount of rotoscoping and matte painting. In some scenes, Gollum literally rolled around on the ground with the hobbits, and the actor had to be removed and the hobbits' clothing, body parts and the background repainted on every frame. There are some tools to handle the larger part of this (though not discussed in detail), but the finer detail was done by hand. No wonder it took so long to develop.
Gollum was created in a similar way to Neo in The Matrix, except that a clay model was created and scanned. Obviously, all morph targets for the face had to be created manually, and in the end, Weta designed their own facial animation system, with Gollum having over 900 morph targets. A lot of effort also went into the larger body motion of Gollum. Rather than MoCap everything, in many scenes the actor's performance was used as a reference and traditional keyframing was used. That is not to say MoCap was not used, just not as much as one might expect.
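The idea behind morph targets (also called blend shapes) is simple even if building 900 of them is not: each target stores, for every vertex, an offset from the neutral face, and the animated face is the neutral mesh plus a weighted sum of those offsets. A minimal sketch, assuming a Python/NumPy environment (the three-vertex mesh and target names are invented for illustration; Weta's actual system was, of course, far more elaborate):

```python
import numpy as np

# Neutral face: positions for a handful of vertices (real rigs use thousands).
neutral = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])

# Each morph target stores per-vertex offsets from the neutral pose.
targets = {
    "smile":   np.array([[0.0, 0.1, 0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 0.0]]),
    "brow_up": np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.2, 0.0]]),
}

def pose(weights):
    """Blend the targets: neutral + sum of (weight * offset) per target."""
    result = neutral.copy()
    for name, w in weights.items():
        result += w * targets[name]
    return result

# A half-strength smile combined with fully raised brows.
face = pose({"smile": 0.5, "brow_up": 1.0})
```

Animating the face then reduces to animating the weight curves over time, which is what makes the approach so controllable for keyframe animators.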
After discussing the creation of Gollum, we were also shown some of the other things Weta worked on. The warg riders scene showed just how much compositing played a part in a scene, and how frequently the performances of the actors are adjusted to meet the filmmaker's vision. In one part, Aragorn is being dragged by a warg, but his legs flailed about too much, so the lower half of his body was replaced with CG legs. In another shot, the real Legolas is seamlessly replaced by his CG counterpart as he swings up onto a horse after shooting off several arrows.
Danny Deckchair is due for release later this year and looks like an amusing film, though probably a bit schmaltzy for my taste. It has a sequence where Danny is flying through the air in his deckchair attached to numerous balloons. This sequence is a combination of CG elements and was quite smoothly done. Of most interest was how the VFX guys approached this on a lower budget than was available to The Matrix or Lord of the Rings. It was acceptable to compromise the level of detail provided it didn't appear compromised, and anywhere they could, they tried to avoid using CG. This meant that they had to keep the camera at a reasonable distance from CG Danny so that you couldn't see that he wasn't real. Extensive use of bluescreening and photorealistic backdrops was also required. Because of the nature of the project, the "flying" Danny never came close to anything physical until the end, which all happened very quickly, so the cheaper VFX came off pretty well, though you can probably spot some of the shots where it isn't perfect.
One of the more interesting uses of the technology was in previsualization. This is where a scene is worked out using animatics or dummy characters in a CG environment to get an idea of how a shot will play out. Where complex camera tracking was expected to be required, these shots were accurately previsualized so that the matching was as close as possible. They also used Boujou for tracking, so I'm not sure how much benefit they got from this method, though they did make a big deal of it.
I had never even heard of this pop group, but apparently they are very popular. The session opened with the video clip, and it showed what appeared to be the three girls from the group at a fancy dress party, walking around wearing nothing but underwear made of living butterflies. The effect was quite impressive. While there were numerous other VFX in the clip, this particular effect was the most interesting. Having butterflies flitting about covering the girls' modesty might not seem like a difficult thing to do, given what we had seen in the previous sessions. All you would need to do is create a z-depth matte and have a particle system swarm around. Arm movement and so on might need additional rotoscoping, but that wouldn't be too difficult. What was most impressive is that the girls were not naked, or even in bikinis. They were actually wearing dresses made of butterflies.
In post-production, it was decided that the butterfly dresses looked pretty stupid, and I can say that I agree. Initially the idea was to have these dresses and simply add a few flying CG butterflies. But the dresses were extremely unflattering, and so VFX stepped in. Using 3DSMAX, new bodies were made for each of the girls, designed to match perfectly with their own. This was brushed over in the session as though it were something they do every day (and maybe they do), but the modelling was excellent to my mind. Anyway, the real trick came in connecting the bodies to the real heads and arms and legs. This was done painstakingly by hand in 3DSMAX and Combustion (I think), and the result was fantastic. Everything from running to dancing and jumping had this effect applied, and it was seamless. It was a pity that some of the other effects didn't live up to this one.
The presenter for this session didn't give his name and it isn't in the program either, so forgive me for not including it. Shake was recently purchased by Apple to accompany Final Cut Pro and DVD Studio Pro as part of its DV editing suite. While Final Cut Pro can do basic compositing, it doesn't come close to what Shake can do. Compositing is the layering together of separate video tracks, with some means of determining how they interact. A simple example is chroma-keying, or blue/green screening. Using this method, footage shot with a green or blue background is composited over other footage. The blue or green colour is removed and the background track is allowed to show through in those places. However, doing this convincingly can be extremely difficult. Products like Final Cut Pro and Adobe Premiere can do this sort of compositing, but the results usually make it pretty obvious what has been done. Edges are a big problem for these packages, as are spill suppression and colour bleed. High-end compositing tools like Shake, and Inferno or Combustion from Discreet, deal with these sorts of problems in a far more sophisticated manner, intelligently keying and applying mattes to footage to achieve the kind of results we expect in a feature film.
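To make concrete why naive keying falls down, here is a deliberately crude version of the idea in Python/NumPy: classify a pixel as "green screen" by how much its green channel dominates the others, and use that as a hard matte. The threshold and colour test are invented for illustration; the hard 0/1 matte it produces is exactly what causes the ragged edges and spill problems that packages like Shake exist to solve.

```python
import numpy as np

def naive_green_key(frame, threshold=0.3):
    """Return an alpha matte: 0 where a pixel looks like green screen,
    1 elsewhere. frame is a float RGB image with values in [0, 1]."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    greenness = g - np.maximum(r, b)          # how much green dominates
    return (greenness < threshold).astype(float)

# Toy 1x2 frame: one green-screen pixel and one 'actor' pixel.
frame = np.array([[[0.1, 0.9, 0.1],           # bright green backdrop
                   [0.8, 0.6, 0.5]]])         # skin-toned foreground
background = np.zeros_like(frame)             # replacement plate (black)

alpha = naive_green_key(frame)[..., None]     # per-pixel matte
out = alpha * frame + (1.0 - alpha) * background
```

Because every pixel is either fully kept or fully replaced, semi-transparent areas like hair, motion blur and green light spilling onto the actor are handled badly, which is where the high-end keyers earn their money.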
Shake is a powerful piece of software, but is it better than its competitors? Probably not, as they all release updates every six to twelve months, so no one's lead lasts very long. What is impressive about Shake is that it is a good deal cheaper than most other comparable packages. At US$5000 for the OS X version, combined with a decent G4, this means you can get absolutely professional results for as little as 10% of the price of some of the bigger systems. The results speak for themselves. Shake is every bit a professional product and, as Apple point out on their website for Shake, it has been used in some way on every Academy Award-winning VFX film over the last six years.
This session looked really good in the program, but it was a little lacklustre on the day. Unfortunately, the presenter was better at VFX than at explaining them. He managed to make some of the most exciting sequences in the Harry Potter film seem commonplace and dull. Maybe it was just that I was starting to find it a little hard to concentrate by this stage. Anyway, what we saw here was very much the same kind of thing as seen previously: extensive compositing and rotoscoping, using Shake and Maya as the primary tools, and laser scanning of the car and some of the characters. Nothing particularly new. We did get to see the whole development of the VFX shots handled by the company, from concept through to completion, but still nothing really gripping.
It was the middle of the afternoon and the previous session had left everyone a little flat, so it was no surprise when fully one third to a half of the attendees decided to stretch their legs during this session. It was on digital colour grading and some 3D work for a short feature for the ABC. I was just about to follow suit when things began to get interesting. I probably won't be able to do justice to the session because it is very hard to appreciate the result of the process without seeing just what they started with.
What I am talking about is video versus film. Most "films" are shot on film and processed in much the same way as they have always been. Along came video, typically looked down upon by "film"makers who felt the results were too garish. Digital video and high-resolution CCDs have gone a long way to improving the image quality of video, even achieving similar resolution, but it still lacked the sumptuous feel of 35mm film. Three years ago, George Lucas decided, amidst much controversy, that he would shoot the second Star Wars prequel, Attack of the Clones, entirely with digital video cameras. He believed, and has shown, that properly handled digital video (with sufficient resolution) can look exactly like film. Most of us don't have his resources, though. As an interesting aside, the Australian Film Commission, and other funding bodies like the ABC and Film Victoria, decided only a few years ago to fund digital video projects. The first of these started hitting the festivals in the last year or so and are now being screened on TV.
The Forest producers kept the budget down by choosing to shoot on digital video and engaged the services of The Swish Group to help them achieve the filmic look they required. I have worked on several short films and watched many more, and it was refreshing to see that even with a professional DOP and cameraman, they still suffered horrible lighting conditions and produced footage that would have looked at home in an underground film festival but certainly not on broadcast television. This is where the digital colour grader stepped in. Using a range of tools, mostly from Discreet, he massaged the colour quality until it was consistent and had that filmic feel throughout. They showed a series of scenes with a split showing the original footage on one side and the final footage on the other. The contrast was amazing. It looked like it was shot on film.
The 3D component of the session was interesting but not much more than we had seen. Interestingly, this was the only production that used Lightwave for modelling, which was a little surprising. The result was quite acceptable, though not extraordinary. Overall, I expect the film will be interesting, and most people will have absolutely no idea of what had to be done to this film to make it what it is. The great thing about DV is that these results are now possible and the tools exist to achieve them.
This production house operates in separate states (NSW and SA), and it was interesting to see how well they were able to produce VFX in separate locations that combined together seamlessly. The shot of interest involves a NASA Space Shuttle going off course during landing and having to land in the LA river. Typical American drama for the most part. The VFX Rising Sun worked on centred on making it look real, particularly the water simulation. The most interesting part of this session was that the VFX were completed in several passes, including creating the water (the LA river doesn't have much), the impact splash, the wake, the spray and, of course, the shuttle itself. I expect I will be waiting for the movie to come out on DVD, but it was interesting to see this process. Most of the other talks mentioned this process as well, and showed how they used it, but I felt this session showed it most clearly.
Let me start by saying I thoroughly enjoyed this conference; however, on a practical level, there were a few things that could have been done better. The sessions were quite long, and this wasn't a problem except that there weren't any breaks scheduled apart from lunch. Lunch was only half an hour, and you had to go off site to get it. Essentially, with lunchtime crowds in the city both days, it took 25 minutes to go and get lunch and get back, leaving only five minutes to actually eat and relax. The venue was great (ACMI), but it would have been good to have the aircon up a little, because people were finding it hard to stay awake late in the afternoon. I also felt that the space for the Technology Labs was too confined, and it became almost impossible to talk to the vendors. Those little gripes aside, it was still well worth it.
As far as the CVF is concerned, probably the most interesting thing to note is just how extensively Maya is used. It really is the weapon of choice for the high-end graphics community. 3DSMAX still stands out at the mid-level, but I think we should look more closely at Maya. I should also mention that the Tech Lab had a demo of Maya running a complex fluid particle simulation, which looked very good. Unfortunately, I didn't get an opportunity to ask any questions.
Techniques like camera matching and motion capture are also areas we know little about but are clearly important to the VFX industry. While the CVF is not part of the VFX industry, part of our brief is helping researchers present their data in the most effective way, and many of the methods shown during the AEAF would go a long way toward that.
Created: 22nd May, 2003
Last modified: 23rd May, 2003
Maintainer: Bernard Meade, Advanced Research Computing
Copyright © 2002 University of Melbourne