CGI

How does CGI work? by James Whitaker

People often ask me how CGI works, or how I create my images, and I've always felt like I wasn't giving a particularly good answer, so I've made this short video as an introduction for the uninitiated.

If you like the video be sure to follow us on Facebook - facebook.com/WhitakerStudio/
and Instagram - instagram.com/whitaker_studio/

And sign up for our newsletter to receive a 3D model of the cup - whitakerstudio.co.uk/how-does-cgi-work-newsletter-signup

A free piece of 3D software for you to play with is SketchUp - sketchup.com/

Space Baby by James Whitaker

So, last week I finally finished my series of images, Space Baby, and they are now selling through Getty Images. I've been chipping away at them between commissions and they've been great fun to create, so I thought you might enjoy a little breakdown of what went into making them.

Each shot was sketched out beforehand. Sometimes this was a quick pencil sketch and sometimes a very dodgy bit of Photoshop, as below. As these were never intended for public consumption I didn't worry about spending too much time on them; they were really just for blocking out the principles of the shot - the camera angle, lighting, etc.

We did all of the photography work at the brilliant Cloud & Horse studios in north London, and with a rough idea of the shot that we were after we could set up the studio to give us the right lighting conditions. However, unlike when I was shooting Jennifer for the Tokyo penthouse, we couldn't be that precise - a toddler just doesn't stand still for that long. We had to be far more fluid and adaptable.

Back at the computer I was able to make my selects from the shoot and begin the hard work.

The first step was to place the photograph of Jack behind the model in 3DS Max. I could then move the virtual camera into the right position so that model and photo would align. With the camera set up and locked in position I could then work on the 3D model, adding detail where it would be seen without wasting time on stuff that would be out of shot.

With the whole shot modelled up I then went back to Photoshop to mask out the background in the photograph of Jack. For this photo I had to use two or three photographs and composite them together to remove any trace of gravity and of me holding Jack.

There was then a little back and forth to double check the camera angle and lighting in the model before rendering out the final image of the spaceship. This was then taken into Photoshop where it could be combined with the photograph of Jack.

I render out my computer images as 32-bit images in multiple passes so that I can adjust each element (reflections, refractions, lighting, etc.) in Photoshop. This is particularly important on images like this to bed Jack into the image and adjust the little details that make him look like he's actually there rather than just pasted on top. 
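As a rough illustration of what that multi-pass adjustment amounts to under the hood, here is a small NumPy sketch. The pass values and the 20% reflection boost are made-up examples, not my actual settings - the real passes would be loaded from the layers of an EXR:

```python
import numpy as np

# Hypothetical 32-bit (floating point) render passes, as you might load
# them from an EXR's layers. The constant values stand in for real pixels.
h, w = 4, 6
lighting    = np.full((h, w, 3), 0.50, dtype=np.float32)
reflections = np.full((h, w, 3), 0.10, dtype=np.float32)
refractions = np.full((h, w, 3), 0.05, dtype=np.float32)

# Recombine the passes additively, nudging one element on the way -
# the same move as changing a layer's opacity in Photoshop.
beauty = lighting + 1.2 * reflections + refractions

# Tone-map the linear float data down to 8-bit for a flat export.
srgb = np.clip(beauty, 0.0, 1.0) ** (1.0 / 2.2)
flat = (srgb * 255.0).astype(np.uint8)
```

The point of keeping everything in 32-bit float until this last step is that no pass has been clipped or quantised, so each element can be pushed around freely before the final bake down to 8-bit.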

Every detail was carefully embellished to give the series a comfortable real world feel. For instance the magnetic letters were modelled from scratch and then textured with a Vray SSS2 material. This material allowed me to give the plastic that slightly translucent feel that you get on some children's toys, which then glows a little when it's back lit.

Jack's spacesuit brought some unique challenges, particularly for someone who specialises in architectural imagery (which is normally hard and rectilinear). In the end I took a model of a small boy and tweaked it to give him the proportions of a toddler. I then rigged the model with a skeleton in 3DS Max and exported it to Marvelous Designer, where I could use the toddler model like a mannequin and slowly craft the spacesuit over him. With the fabric parts of the spacesuit finished, I animated the mannequin back in 3DS Max to move from standing into the pose for each shot, then took that animated mannequin back into Marvelous Designer and ran the animation while it wore the spacesuit. Finally the spacesuit went back to 3DS Max for texturing, for the hard elements to be added, and to be placed into the rest of the scene.

If you have any questions feel free to ask them below and I'll do my best to answer them for you.

Brass Reading Light by James Whitaker

I made this model just before Christmas for an image I was working on and then revisited it last week to get it ready for selling online. While getting it ready for sale I made this little film of it which I find oddly enjoyable and hypnotic if watched on repeat. So here it is for your enjoyment too!

Brass Reading Light 3D model

You can purchase the model on TurboSquid.

Jack the Astronaut by James Whitaker

For a while now I've had this daft idea of sending Jack into space. I thought it would be fun to imagine what life is like for a toddler travelling through the cosmos, maybe en route to populate Mars? And Jack would look pretty dinky as an astronaut. So a couple of weeks ago we spent a day in a studio with Jack, executing a carefully planned shoot. The CGI component of the images is nearly finished now, and ahead of the final images being released I thought you might enjoy this behind-the-scenes shot.

Utopia doesn't exist by James Whitaker

I have had a bit of an obsession with Thomas More's Utopia for years now. The text is wonderful, and parts of it have a real poignancy in the modern world. More wrote the book back in 1516 and, for anyone who hasn't read it, it largely comprises a conversation between More and a traveller, Raphael Hythloday, who has just returned from the island of Utopia.

Almost every made-up name in the book is a pun or reference, so Hythloday is a Greek compound meaning "expert in nonsense" and Utopia is derived from the Greek prefix ou-, meaning not, and topos, meaning place. No-place or nowhere. Utopia doesn't exist. And so computer generated imagery feels like the perfect vehicle to explore Utopia, and 2016 felt like an apt time to start this series as a means of reflecting on the political atmosphere in the UK and Europe.

The image above is the first from the series and is based on the portion of text copied below: 

...entry into the bay, occasioned by rocks on the one hand and shallows on the other, is very dangerous. In the middle of it there is one single rock which appears above water, and may, therefore, easily be avoided; and on the top of it there is a tower, in which a garrison is kept; the other rocks lie under water, and are very dangerous. The channel is known only to the natives; so that if any stranger should enter into the bay without one of their pilots he would run great danger of shipwreck.

It is my intention to finally exhibit the series, although it will take a little while to reach that point.

How to put your 360 VR Renders on Facebook by James Whitaker

Since I experimented with 360 videos last week, Facebook have introduced 360 photographs, and it turns out to be incredibly easy to produce and upload CG images, although it does require a couple of extra steps. So here is a little tutorial for anyone needing to create 360 content.

We use 3ds Max and V-Ray here at WS so the first part of this tutorial will describe the settings specific to our pipeline, but can be translated to other renderers. The second part will look at what you need to do to prepare your rendered image for upload to Facebook, regardless of software used.

Stage 1 - Render Settings

Set up your camera as you normally would.

Render settings are largely the same as your normal preferences, however under V-Ray > Camera select Spherical Panorama for type and check Override FOV, entering 360.0 for the horizontal override and 180.0 for the vertical override.

Finally, Facebook needs your image to be in a 2:1 ratio with the maximum recommended dimensions being 6000 wide by 3000 high.

Now you can hit render! We normally save out as 32-bit EXR with render passes and you can still do this, editing your image in Photoshop as you see fit before saving out as a jpg.
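For anyone who wants a belt-and-braces check before uploading, here is a small sketch using Python's Pillow library that verifies the 2:1 ratio and downsizes anything over the 6000 x 3000 cap (the function name is just my invention, not part of any tool):

```python
from PIL import Image

MAX_W, MAX_H = 6000, 3000  # Facebook's recommended maximum for 360 photos

def prepare_for_facebook(src, dst):
    """Check the 2:1 equirectangular ratio and downscale if oversized."""
    img = Image.open(src)
    w, h = img.size
    if w != 2 * h:
        raise ValueError(f"expected a 2:1 panorama, got {w}x{h}")
    if w > MAX_W:
        # Lanczos resampling keeps fine detail reasonably crisp.
        img = img.resize((MAX_W, MAX_H), Image.LANCZOS)
    img.save(dst, "JPEG", quality=95)
    return img.size
```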

360 image for Facebook created with 3DS Max and V-Ray

Stage 2 - EXIF editing

With your image rendered you now just need to add some additional information into the EXIF data so that Facebook interprets it as a 360 image rather than a normal flat 2D image.

For this you need to visit theexifer.net. Upload your image and then click eXif.me. Here we need to enter Ricoh for Camera Make and Ricoh Theta S for Camera Model. This will fool Facebook into believing that the image was taken with a recognised 360 camera.

Now you can download your photo with its corrected EXIF data and upload it to Facebook.
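If you'd rather skip the website and do this locally, the same trick can be sketched in Python with the Pillow library - 271 and 272 are the standard EXIF tag IDs for Make and Model, though the function name here is just for illustration:

```python
from PIL import Image

MAKE_TAG, MODEL_TAG = 271, 272  # standard EXIF tag IDs: Make and Model

def tag_as_theta(src, dst):
    """Stamp Ricoh Theta S camera details into the EXIF so that
    Facebook treats the upload as a 360 photo, not a flat image."""
    img = Image.open(src)
    exif = img.getexif()
    exif[MAKE_TAG] = "Ricoh"
    exif[MODEL_TAG] = "Ricoh Theta S"
    img.save(dst, exif=exif)
```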

Further Reading

For anyone wanting some further reading here are a couple of helpful links:

Editing 360 Photos & Injecting Metadata - Facebook

Chaos Group guide to VR

Hechingen Film by James Whitaker

I'm delighted to present a short film that I've been working on recently - Hechingen Studio

Hope you enjoy it!

Someone once said to me that if you're going to make a jeans company, concentrate on making a really good pair of jeans before you start selling t-shirts. I thought it was pretty good advice, so I've been concentrating on stills up to now, making sure that they are as seductive and polished as can be. However, when I was an undergraduate at uni I was a bit of a geek and used to teach animation on the post-graduate course, and a couple of years ago I made a short film for fun with my brother and a bunch of friends, which went on to win a film festival in Canada. So we know a bit about making nice films. We approached this film in the same fashion as we would a live-action piece: working up a storyboard and then an animatic, editing and fine-tuning the shots, and then working up the animation to what you see above.

For the geeky amongst you, all animation and modelling was done in 3DS Max, rendering with V-Ray, and post-production in Adobe's suite. Rebus Farm was used to outsource some of the number crunching and the music was found on Musicbed. If you'd like to know any more feel free to ask in the comments section below.

You can read more about the building in this article on Dezeen.

Green Screen Trickery by James Whitaker

At the start of September I spent some time in Cloud & Horse’s photo studios shooting with actress Jennifer Dawn-Williams. The shoot had been meticulously planned to allow the resulting photographs of Jennifer to be blended seamlessly with computer generated images of a penthouse apartment in Tokyo.

With the photographs taken they were then composited in Photoshop to achieve the stills that I wanted and this video shows a quick breakdown of that process.

The complete set of images can be seen on my website.

Architectural Rendering meets Photography Shoot by James Whitaker

Here are three brand new images of the Tokyo penthouse that I was working on earlier in the summer. You can see the complete set of images on my website here.

At the start of September I spent some time in Cloud & Horse’s studios shooting with actress Jennifer Dawn-Williams. The shoot had been meticulously planned to allow the resulting photographs of Jennifer to be blended seamlessly with computer generated images of the apartment.

I will try to post a short animation of the post-production layering of the images soon.

Dice House by Sybarite by James Whitaker

In 2014 I produced this series of images for Sybarite Architects of their project, The Dice House. Each house has utility or retail space at its base, followed by floors of living space, capped with a roof garden covered in solar panels.

For the technically curious amongst you, these images were produced with Rhino and Maxwell, rather than 3DS Max and V-Ray. The hay field was produced using Maxwell's grass extension and there is only a modest amount of post-production on any of the images.