Creating Pixel Art in Blender

Motivation

Image taken from Kickstarter Presentation of Confederate Express

Image published on OpenGameArt.org

There’s a type of retro-style graphics in games these days (see the images above). I like that style a lot – I don’t know exactly why. However, such graphics are primarily created by hand by artists, which is a time-consuming process, especially when the graphics are animated. It would be awesome if we could create retro-style low-resolution graphics from renderings of 3D geometry in a program like Blender. This would allow us to create new 3D models or use existing ones, animate them, and then somehow extract a retro-style rendering from that.

How to do it

Let’s try to recreate a small artefact from the promo graphics of Confederate Express.

the graphic we try to recreate (taken from a promo picture of Confederate Express)

Here’s a mesh created in Blender that resembles the bag from Confederate Express; now let’s render it.

bag mesh in Blender

bag mesh rendered in Blender

The problem is that this doesn’t look at all like the reference image. There are two important differences:

  1. The pixel resolution of our imitation is way too high
  2. The reference graphic has far fewer colors

Scaling down

All right, if that is the whole problem, let’s just scale our rendering down. In the reference image, the bag has a size of 27×24 pixels (omitting the shadow beneath it). Blender allows us to render at any resolution we want.
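
A minimal sketch of this step using Blender’s Python API (the output path is my assumption; any path works):

    import bpy

    # Render the scene at the sprite's target resolution (the 27x24 pixels
    # measured on the reference bag).
    scene = bpy.context.scene
    scene.render.resolution_x = 27
    scene.render.resolution_y = 24
    scene.render.resolution_percentage = 100    # use the full target size
    scene.render.filepath = "//bag_sprite.png"  # '//' = relative to the .blend file

    bpy.ops.render.render(write_still=True)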

16 samples with bicubic smoothing

Now that does not look like we intended at all! It’s all smoothed out and boring, no contrast left. What’s the problem? Blender created this image by combining – for each pixel – 16 subpixels into one. This is called oversampling and it is used for anti-aliasing. In this case we used a cubic filter. Now let’s deactivate anti-aliasing and see what happens.

anti-aliasing deactivated

That’s much better, but the image looks garbled. Take a look at the handle of the bag: it’s not well approximated by so few pixels. So here’s the deal: without oversampling, we risk missing important details on the object. No oversampling means no subpixels, which means there are only 27×24 rays probing the object – and if they miss an important detail, it won’t show up in the rendering.
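
For reference, here is a sketch of the relevant render properties, assuming the pre-2.8 Blender Internal renderer (worth double-checking against your Blender version):

    import bpy

    render = bpy.context.scene.render

    # One ray per pixel: no subpixels, hard edges, but details may be missed.
    render.use_antialiasing = False

    # Alternatively, keep 16 samples and pick a less blurry reconstruction
    # filter (Catmull-Rom is tried in the next experiment below):
    # render.use_antialiasing = True
    # render.antialiasing_samples = '16'
    # render.pixel_filter_type = 'CATMULLROM'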

Blender provides us with other smoothing filters that produce less smoothed-out results. For example, there’s „Catmull-Rom“.

16 samples with catmull-rom smoothing

The result is – compared to cubic – not bad, but both have a major issue: they use the alpha channel. The pixels on the silhouette are partially transparent. That is not an option for our retro-style graphics.
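
One simple workaround I can think of (a sketch using Pillow, not part of Blender; file names and the 128 threshold are arbitrary assumptions): snap the alpha channel to fully opaque or fully transparent after rendering.

    from PIL import Image

    img = Image.open("bag_sprite.png").convert("RGBA")
    r, g, b, a = img.split()
    # Binarize alpha: every pixel becomes either fully opaque or fully
    # transparent, so the silhouette has no half-transparent pixels.
    a = a.point(lambda v: 255 if v >= 128 else 0)
    Image.merge("RGBA", (r, g, b, a)).save("bag_sprite_hard_alpha.png")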

The anti-aliasing in these images can be understood as a method for downscaling the image: the subpixels can be seen as parts of a high-resolution rendering.

So what we want is some other method for downscaling our graphics, one that preserves or even amplifies small details in the image.
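
To make that equivalence concrete, here is a sketch using Pillow (the high-resolution input file is my assumption): the 16-sample anti-aliasing corresponds to rendering at four times the target size and averaging 4×4 blocks, and doing the downscale ourselves lets us swap in other filters.

    from PIL import Image

    hires = Image.open("bag_hires.png")  # e.g. a 108x96 rendering (4x target)
    target = (27, 24)

    box     = hires.resize(target, Image.BOX)      # 4x4 averaging, like the AA above
    nearest = hires.resize(target, Image.NEAREST)  # point sampling, like AA off
    lanczos = hires.resize(target, Image.LANCZOS)  # sharper, but may ring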

Reducing the palette

Besides downscaling the image, the second requirement for achieving retro-style graphics is to reduce the number of colors in the image. Early graphics cards could only display 2, 4 or 16 colors, and a low number of colors is a typical feature of pixel art.

Color reduction is not the most important step in our case, because the reference image is composed of 44 individual colors, which is already quite a lot. Reducing the rendering without anti-aliasing to 44 colors results in the following sprite:

palette reduced to 44 colors
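
The reduction itself can be sketched with Pillow’s quantizer (median cut by default; file names are my assumptions):

    from PIL import Image

    img = Image.open("bag_sprite.png").convert("RGB")
    # Quantize to the 44 colors counted in the reference image.
    img.quantize(colors=44).save("bag_sprite_44colors.png")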

Timothy Gerstner’s Pix

Timothy Gerstner wrote a master’s thesis, „Pixelated Abstraction“, dealing with the very requirement I described above. On his webpage you can find his thesis, two papers concerned with the same approach, as well as executable code (which is of delightfully high quality!).

Gerstner’s approach handles not only the downscaling but also the palette reduction.

Using his code, I managed to create the following sprite:

this version was created by Timothy Gerstner’s algorithm

Though the result does not resemble the reference very closely, I have to admit: it looks hand-crafted. But since Gerstner’s algorithm has a couple of tweakable parameters and is designed for semi-automatic execution, I might just have used an unfavourable set of parameters.

I have branched Gerstner’s code to add a command-line interface and to fix some minor platform-compatibility bugs. This is currently a work in progress.

Other approaches

  • Content-Adaptive Image Downscaling by Kopf et al. Looks promising; pseudocode for the whole algorithm is provided, but no working implementation is known to me. It cites Gerstner’s paper.
  • Pixelating Vector Line Art by Inglis et al. They concentrate on downscaling vector graphics, which could be interesting for downscaling the emphasized silhouettes often found in pixel art.

Reconstructing Ecstatica’s Level Geometry MK2

Amazing! As it turned out, what I expected to be the navigation and collision mesh is actually almost the complete level geometry, including walls, stairs, windows, etc. What’s missing is triangle-based geometry like roofs and windows as well as all the static ellipse-based models (flowers, grass, etc.). Nonetheless, this is great news, because the whole game world can now be explored and looked at in 3D!

Extracted Level Geometry from Ecstatica

Reconstructing Ecstatica’s Level Geometry

Alone in the Dark 3

Ecstatica 1 takes place in a small village upon a rock, surrounded by a deep valley. The walkable area is restricted to this very rock, but it is still quite big, and it is open: not just a bunch of rooms but an exterior area. If you take a closer look at Ecstatica’s backgrounds, you will be puzzled by the really high quality of the level geometry. Yes, there is real geometry behind the backgrounds. It even looks like not only the characters are made of ellipses but the walls, the floor and the plants as well. Those stone floors and walls look just like something you would use displacement mapping or metaballs for today. Compare that to titles like „Alone in the Dark“ (see the screenshot below), which used painted environments. Alone in the Dark is quite a visually appealing title, but having actual geometry to render the backgrounds from is something different. Remember: Ecstatica shipped in 1994! How the hell did those Britons do this? This was not a multi-million dollar production. On what hardware were they able to create and render this vast amount of geometric data?

Screenshot demonstrating the high quality of Ecstatica's level geometry

I recently worked on some code that reconstructs the level geometry from the set of about 240 background images. I managed to mix this reconstructed geometry with the collision and navigation mesh. The current state already looks interesting. I probably won’t be able to explicitly reconstruct the complete geometry…

  • the available camera views are too sparsely distributed
  • the texture information I reconstruct quickly diminishes in quality the farther you get from the camera, due to the projection effect
  • It would require a LOT of post-processing in order to merge and clean up the generated mesh data
  • It’s actually too much geometry. Given my 8 gigs of RAM, I’m unable to represent all 340 backgrounds in one Blender session – even though I threw away around 99.5% of all generated polygons (using the „Decimate“ modifier; see the sketch after this list).
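
The decimation step from the last point, sketched with Blender’s Python API (the 0.005 ratio matches the ~99.5% reduction mentioned above):

    import bpy

    # Attach an aggressive Decimate modifier to every reconstructed mesh,
    # keeping only ~0.5% of the generated polygons.
    for obj in bpy.data.objects:
        if obj.type == 'MESH':
            mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
            mod.ratio = 0.005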

How did they do it in 1994? My current assumptions are:

  • They chopped the game world into chunks digestible for 1994 hardware
  • For individual chunks they used some CAD software to create coarse geometry (walls, floors, stairs, etc.)
  • They used a custom raytracer to shoot rays through the world, querying the relevant geometry chunk, and then, as a final step, generated the ellipses on the fly using some form of texture mapping. I base this assumption on the fact that the stone walls show certain patterns. The basic principle of this last step is actually rather simple: instead of calculating the ray’s intersection with a plane of the coarse geometry, you put a set of ellipses there, each described only by its center and its radii (see the sketch below). Because you’re systematically scanlining over the sensor area of your virtual camera, you should be able to keep only those ellipses in memory which are actually hit by rays.
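
A minimal sketch of that intersection step – my reading of the principle, not actual Ecstatica code: a ray is tested against an axis-aligned ellipsoid described only by its center and radii, by rescaling the ray into a space where the ellipsoid becomes a unit sphere.

    import math

    def ray_ellipsoid(origin, direction, center, radii):
        """Smallest positive ray parameter t of the hit, or None on a miss."""
        # Rescale so the ellipsoid becomes the unit sphere at the origin.
        o = [(origin[i] - center[i]) / radii[i] for i in range(3)]
        d = [direction[i] / radii[i] for i in range(3)]
        # Solve |o + t*d|^2 = 1, a quadratic in t.
        a = sum(x * x for x in d)
        b = 2.0 * sum(x * y for x, y in zip(o, d))
        c = sum(x * x for x in o) - 1.0
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            return None  # ray misses the ellipsoid
        sq = math.sqrt(disc)
        for t in ((-b - sq) / (2.0 * a), (-b + sq) / (2.0 * a)):
            if t > 0.0:
                return t  # nearest hit in front of the camera
        return None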

Current state of my attempt to reconstruct Ecstatica's level geometry