These are the extended interview notes with Craig Hayes, visual effects supervisor at Tippett Studio, responsible for the creation of the remarkable Machine City environments in the two Matrix sequels, The Matrix Reloaded and The Matrix Revolutions. Hayes outlines some of the ingenious methods used to create the organo-tech futurescape, methods that go beyond raw computing power.
Matt Hanson: On a project as massive as the last two Matrix films I’m sure it’s a question of where to begin, so what was the initial kernel for you?
Craig Hayes: We wanted to create something unique. Something that was not duplicating a real city. To do this we had to explain different processes to people, as the visualization was not thoroughly fleshed out, but the seeds were present. We decided that the way to create the city was for the effects to be procedurally driven. Patterns would emerge. We had the freedom to engineer a system, and custom design something to do that.
You mentioned the seeds, can you give me some idea of the things you had to work with?
Key art from George Hall [color interpretations] and Geof Darrow [conceptual]. We had vistas to work with, but needed to engineer three dimensions and depth. So we thought about having key buildings and ‘hero towers’. There was a massive amount of information generated, so we wanted to break it down. We defined tubes and connectors, and found real-city analogies: suburbs and skyscrapers. We looked at other categories, like water mains, and made components for all of these things.
I like the fact that elements move into a production shorthand and internal jargon so quickly. Can you go into more detail on these ‘hero towers’?
With the hero towers we broke them down into stylistic components. For example, the Darrow towers, which came from the engine, had a lobster-like top. Everything was modular, so it became easier to create.
In parallel we had a team defining processes and visualizations. The job created multiple challenges. Things had to work in a normal lighting environment and also in ‘Neo-vision’. This Neo-vision, like an X-ray of the energy of structures, had to be built into all the modeling.
We used RenderMan so we could prepackage the geometries. It allowed us to define three to four different resolutions of buildings, stepping up from four polygons for buildings in the distance to so many polygons in a wireframe render that you couldn’t see through it. The program also built in skeletons for the internal ‘Neo-vision’ structures.
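The level-of-detail idea Hayes describes — each building prepackaged at a few resolutions, with the coarsest used in the distance — can be sketched roughly as follows. The distance thresholds, polygon counts, and class names here are invented for illustration; they are not Tippett Studio's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class BuildingLOD:
    """A building stored at several prepackaged resolutions (hypothetical)."""
    name: str
    # Polygon budgets, coarsest first: from a four-polygon stand-in for
    # distant buildings up to a mesh so dense it reads as solid in wireframe.
    resolutions: tuple = (4, 500, 20_000, 1_000_000)

    def pick_resolution(self, distance_ft: float) -> int:
        """Pick a polygon budget for a given camera distance (made-up cutoffs)."""
        if distance_ft > 10_000:
            return self.resolutions[0]
        if distance_ft > 2_000:
            return self.resolutions[1]
        if distance_ft > 500:
            return self.resolutions[2]
        return self.resolutions[3]

tower = BuildingLOD("hero_tower")
print(tower.pick_resolution(15_000))  # far-off building: four-polygon stand-in
print(tower.pick_resolution(100))     # close-up: densest mesh
```

The point of prepackaging is that the expensive decision — how much geometry to carry — happens once per building class, not per frame.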
Can you give me a breakdown of the software you used to do this?
We created particle generation routines using RenderMan and dynamics in Maya; then it was a case of tweaking and modifying what was ‘grown’. We also used Studio Paint, Photoshop and Shake.
The blocking and layout were already present from the storyboards, but they needed fleshing out. More color, or magma. This was originally articulated as blue sparks. We liked the informal activity and electrics, the fractal patterns. To give them more depth we added photographic elements. We used soap and aluminum powder and flakes in water to suggest movement and light, and added these to the computer-generated models to make them more organic. It was fun to add this stuff to the CGI. For example, I sent my PA out to get a lobster and spray-painted it silver to study its surface qualities as a reference.
What other references did you use in creating this totally computer-generated moviescape to give it some grounding in reality, some familiarity?
One of the things we did, being based on San Francisco Bay, was to go eight miles out, away from the city, and look back to see how much was visible. That helped give us a cue for our animations.
The buildings that we generated equated to being 2,500 feet tall. Actually, the first iterations were miles tall, but we scaled this back because they looked too unreal on-screen, even though the custom program grew them from feasible rules. Keeping the height under control created a nicer aesthetic.
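The rule-driven growth Hayes mentions — towers assembled from modular segments, with a ceiling imposed after the first iterations grew miles tall — might look something like this minimal sketch. Every rule, probability, and segment size here is an invented stand-in; only the 2,500-foot cap comes from the interview.

```python
import random

MAX_HEIGHT_FT = 2_500  # the aesthetic ceiling described in the interview

def grow_tower(rng: random.Random) -> list:
    """Stack modular segments until growth stops or the height cap is hit."""
    segments = []
    height = 0.0
    while True:
        segment = rng.uniform(50, 300)        # hypothetical segment height
        if height + segment > MAX_HEIGHT_FT:
            break                             # enforce the ceiling
        segments.append(segment)
        height += segment
        if rng.random() < 0.05:               # growth sometimes stops early,
            break                             # so towers vary in height
    return segments

rng = random.Random(42)
tower = grow_tower(rng)
print(f"{len(tower)} segments, {sum(tower):.0f} ft tall")
```

Because the cap is a hard rule rather than a post-hoc trim, every "grown" tower stays in the believable range while still varying from its neighbors.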
I used a friend’s Tesla coil and filmed it with a high-definition camera to see all this electricity shooting off steel and rigging. We used that as reference for the 3D sparks on the harvest fields.
For the Neo-vision we looked at how jellyfish move underwater, in an aqueous environment. It was the idea of looking at what is not normally viewable. From this came the thought of nervous impulses of energy being like tentacles.
How long did it take you to create this mammoth environment?
The ‘Scorched Earth’ comprised 180 shots over a huge environment. It took 18 months, with a team of up to 100 people at the most intense points of production.
*This is an edited version of the full interview, altered for clarity and readability, and intended to reflect the true views of the interviewee in a fair and concise manner.