Creating Pixel Art in Blender

Motivation

Image taken from the Kickstarter presentation of Confederate Express

Image published on OpenGameArt.org

There’s a type of retro-style graphics showing up in games these days (see the images above). I like that style a lot, though I can’t say exactly why. However, such graphics are primarily created manually by artists. That’s a time-consuming process, especially if the graphics are animated. It would be awesome if we could create retro-style low-resolution graphics from renderings of 3D geometry in a software like Blender. This would allow us to create new 3D models or use existing ones, animate them and then somehow extract a retro-style rendering from the result.

How to do it

Let’s try to recreate a small artefact from a promo graphic of Confederate Express.

the graphic we try to recreate (taken from promo pic of Confederate Express)

Here’s a mesh created in Blender that resembles the bag from Confederate Express. Let’s render it.

bag mesh in Blender

bag mesh rendered in Blender

The problem now is that this doesn’t look at all like the reference image. There are two important differences:

  1. The pixel resolution of our imitation is way too high
  2. The reference graphic has far fewer colors

Scaling down

All right, if that is the whole problem, let’s just scale our rendering down. In the reference image, the bag has a size of 27×24 pixels (omitting the shadow beneath the bag). Blender allows us to render at any resolution we want.

16 samples with bicubic smoothing

Now that does not look like we intended at all! It’s all smoothed out and boring, no contrast left. What’s the problem? Blender created this image by combining – for each pixel – 16 subpixels into one. This is called oversampling and is used for anti-aliasing. In this case we used a cubic filter. Now let’s deactivate anti-aliasing and see what happens.
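To see why the averaging washes everything out, here is a tiny plain-Python illustration (not Blender’s actual filter code, and a simple box filter rather than the cubic one): averaging 4×4 subpixel blocks of a hard black/white edge produces in-between grey values that exist nowhere in the source.

```python
def box_average(subpixels, n):
    """Average n*n blocks of a grayscale subpixel grid into single pixels,
    the way oversampling collapses subpixels (here: a plain box filter)."""
    h, w = len(subpixels), len(subpixels[0])
    return [
        [sum(subpixels[y * n + dy][x * n + dx] for dy in range(n) for dx in range(n)) // (n * n)
         for x in range(w // n)]
        for y in range(h // n)
    ]

# A hard vertical edge, 8x8 subpixels: 3 black columns, 5 white columns.
edge = [[0] * 3 + [255] * 5 for _ in range(8)]
print(box_average(edge, 4))  # the left output pixels become an in-between grey
```

The edge pixels come out as a grey that neither the black nor the white side contains – exactly the washed-out look in the rendering above.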

anti-aliasing deactivated

That’s much better, but the image looks garbled. Take a look at the handle of the bag: it’s not very well approximated by so few pixels. So here’s the deal: without oversampling, we risk missing important details of the object. No oversampling means no subpixels, which means there are only 27 times 24 rays probing the object – and if they miss an important detail, it won’t show in the rendering.

Blender provides us with other reconstruction filters whose results are less smoothed out. For example, there’s „Catmull-Rom“.

16 samples with catmull-rom smoothing

The result is – compared to cubic – not bad, but both have a major issue: they use the alpha channel. The pixels on the silhouette are partially transparent. That is not an option for our retro-style graphics.

The anti-aliasing in these images can be understood as a method for downscaling the image. The subpixels can be seen as parts of a high-resolution rendering.

So what we want is some other method for downscaling our graphics, a method which is preserving or even amplifying small details in our image.
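The crudest such method is nearest-neighbour sampling: render large, then copy exactly one source pixel per output pixel. It keeps hard edges and the original colors, though it can still skip details, just like rendering without oversampling. A minimal sketch:

```python
def downscale_nearest(pixels, tw, th):
    """Nearest-neighbour downscale of a 2D pixel grid (list of rows):
    each output pixel copies exactly one source pixel, so hard edges and
    the exact source colors survive and no in-between colors appear."""
    sh, sw = len(pixels), len(pixels[0])
    return [[pixels[y * sh // th][x * sw // tw] for x in range(tw)]
            for y in range(th)]
```

Which source pixel survives is arbitrary, though – a one-pixel detail like the bag’s handle can still vanish entirely, which is exactly the problem described above. A detail-preserving method has to be smarter about which pixel it keeps.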

Reducing the palette

Besides downscaling the image, the second requirement for achieving retro-style graphics is to reduce the number of colors in the image. Early graphics cards could only display 2, 4 or 16 colors. A low number of colors is a typical feature of pixel art.

Color reduction is not the most important step in our case, because the reference image is composed of 44 individual colors, which is already quite a lot. Reducing the rendering without anti-aliasing to 44 colors results in the following sprite:

palette reduced to 44 colors

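For reference, one common way to reduce a palette to a fixed number of colors is median-cut quantization. This is an illustrative pure-Python sketch, not necessarily the algorithm used to produce the sprite above:

```python
def channel_range(box, ch):
    return max(c[ch] for c in box) - min(c[ch] for c in box)

def median_cut(colors, k):
    """Reduce a list of (r, g, b) tuples to at most k representative colors
    by repeatedly splitting the box with the widest channel range at its median."""
    boxes = [list(colors)]
    while len(boxes) < k:
        # Pick the box with the largest spread in any channel, split at the median.
        box = max(boxes, key=lambda b: max(channel_range(b, ch) for ch in range(3)))
        widest = max(range(3), key=lambda ch: channel_range(box, ch))
        if channel_range(box, widest) == 0:
            break  # every remaining box holds a single color
        box.sort(key=lambda c: c[widest])
        mid = len(box) // 2
        boxes.remove(box)
        boxes += [box[:mid], box[mid:]]
    # Average each box to obtain the final palette entries.
    return [tuple(sum(c[ch] for c in box) // len(box) for ch in range(3))
            for box in boxes]
```

To actually recolor the image, each pixel would then be mapped to its nearest palette entry.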
Timothy Gerstner’s Pix

Timothy Gerstner wrote a master’s thesis, „Pixelated Abstraction“, dealing with the very requirement I described above. On his webpage, you can find his thesis, two papers concerned with the same approach, as well as executable code (which is of delightfully high quality!).

Gerstner’s approach handles not only the downscaling but also the palette reduction.

Using his code, I managed to create the following sprite:

this version was created by Timothy Gerstner’s algorithm

Though the result does not resemble the reference very strongly, I have to admit: it looks hand-crafted. But since Gerstner’s algorithm has a couple of tweakable parameters and is designed for semi-automatic execution, I might just have used an unfavourable set of parameters.

I have branched Gerstner’s code to add a command-line interface as well as to fix some minor platform-compatibility bugs in the code. This is currently WIP.

Other approaches

  • Content-Adaptive Image Downscaling by Kopf et al. Looks promising; pseudocode for the whole algorithm is provided, but no working implementation is known to me. It cites Gerstner’s paper.
  • Pixelating Vector Line Art by Inglis et al. They concentrate on downscaling vector graphics. This could be interesting for downscaling the emphasized silhouettes often found in pixel art.

Tracing assembler code with minimally defined state

Given the following assembler code, which we have never seen before:

push 34h
call check_stacksize
push ebx
push ecx
push esi
push edi
push ebp
mov ebp, esp
sub esp, 1Ch

mov esi, eax
mov [ebp+dt], edx
cmp [eax+person.index], 0
jl short 1BBA5
cmp [eax+person.name], 0
jz short 1BB6B
Figure 1: a complicated routine

What does it do and how do we find out? The code snippet is quite simple, so we could simply trace it ourselves, following the flow of control through the jumps and calls, and deduce from that what its purpose is and how it works in detail. As you can see, we have some semantic information („person.index“, „person.name“) which is crucial for understanding what the code does.

But what do we do if the program gets more and more complicated, something like in Figure 1? It will be much, much harder to deduce the purpose and function of the code there. Why?

Let’s look at how we normally approach the question „what does this code do?“ (independently of whether it is written in assembler, Basic or any other language).

The first aspect is the purpose of a piece of code. Because we can assume that it was written by a human being, we can also assume that the structure of the program – even if the compiler did heavy optimization – still reflects the author’s approach to the problem. There will probably be a modularization, a separation of code by task and timing. Even in assembler code, statements following each other will – most of the time – have something to do with each other and work on the same task. Therefore we can identify chunks of code which can be understood as actions and tasks (open a file, read a specific object from a file, etc.). And, most importantly, there will be a sense of purpose. Code doesn’t just sit there for the sake of being; it has a purpose, and this purpose is connected to the purpose of the whole program. If this code looks into the properties of a „person object“, it will probably do something relevant with that data. By examining which properties the code reads and writes, what other actions are performed in its vicinity and which kinds of objects it combines, we can narrow down the set of possible purposes of that code. To sum this up: even for a piece of assembler code it is possible to split it into chunks and understand the whole piece in a more abstract sense, to assign a purpose to it.

The second aspect of the question „what does it do?“ is connected to the specific behaviour of the code for a given state of the system. For example, if we have an initialized person and we run our code on it, how exactly will the state of the person change? This second aspect is often much more complicated to answer than the first, because it normally requires a line-by-line tracing of the program flow. Why? Because in assembler code, the state of the program is largely represented by the state of the CPU, and the CPU has only a limited number of state variables. It is like reading a C listing in which the number of variables has been limited to six. There will be a lot of value-switching and temporary parking of values. And all this state we have to keep in mind during tracing. The longer and more complicated our code gets, the more state we have to keep in mind. A possible solution might be to break down the problem into smaller chunks by identifying partitions of the code which work independently of each other. But the very task of ruling out dependencies between two pieces of code often requires a deep understanding of the state of the CPU and the memory at a specific point of execution.

It seems as if we cannot avoid tracing code. Debuggers were invented for that. They offer the great advantage of keeping track of the state (CPU and memory) for us. We can concentrate on the semantically relevant points in the code and forget the rest.

But what if we don’t have a full, executable program at hand? Even if we had, we would have to worry about whether our piece of code is executed at all. We would have to provide a lot of state (hardware state, memory, operating system, execution environment) just to execute our little snippet! What if we just want to understand this little, isolated piece of code? A classical debugger is always restricted to following a single path of execution, given its state. But we might want to know what happens if we take the other road at a specific junction. Does the result of the execution even depend on a specific state variable?

Those questions cannot be answered by classical debugging techniques. What we require is

  1. Minimal state. We want to provide only those state variables which are important for the behaviour of our code. If the state of the graphics card is never queried, we don’t want to provide it.
  2. Alternative paths. We want to be able to overview all the possible alternative routes of execution through our code and their dependency on the „input variables“.

What we want might be symbolic execution, a technique developed to (automatically) prove the correctness of programs. And this is the idea I am currently working on. There are already frameworks for symbolic execution of binary programs (https://github.com/feliam/pysymemu). But I don’t want to apply it to a complete program but simply to an isolated piece of assembler code.
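To make the idea concrete, here is a toy sketch of symbolic execution over a tiny mov/cmp/jl mini-language (the instruction encoding and register names are made up for illustration; real frameworks like pysymemu use a constraint solver instead of expression strings). Registers that are never written stay symbolic, and every conditional jump forks the state, recording the branch condition as a path condition:

```python
def resolve(state, operand):
    # Registers that were never written stay symbolic: they resolve to their own name.
    return state.get(operand, operand)

def execute(program, state=None, pc=0, path=None):
    """Symbolically run `program`, forking at conditional jumps.
    Returns a list of (final_state, path_condition) pairs, one per path."""
    state = dict(state or {})
    path = list(path or [])
    results = []
    while pc < len(program):
        op, *args = program[pc]
        if op == "mov":
            dst, src = args
            state[dst] = resolve(state, src) if isinstance(src, str) else src
        elif op == "cmp":
            a, b = args
            state["_cmp"] = (resolve(state, a), resolve(state, b))  # toy flags register
        elif op == "jl":
            (target,) = args
            a, b = state["_cmp"]
            # Fork: explore the taken branch recursively, keep falling through here.
            results += execute(program, state, target, path + [f"{a} < {b}"])
            path = path + [f"{a} >= {b}"]
        pc += 1
    return results + [(state, path)]

# A toy program shaped like the snippet above (indices stand in for labels):
program = [
    ("mov", "esi", "eax"),   # esi := eax (eax stays a free symbol)
    ("cmp", "esi", 0),
    ("jl", 4),               # skip the flag write if esi < 0
    ("mov", "flag", 1),
    ("mov", "done", 1),
]
```

Running `execute(program)` yields two final states, one per path, each annotated with the condition on the symbolic input `eax` that leads there – exactly the „minimal state“ and „alternative paths“ view asked for above.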

Hitflip and Co.: Be Careful with Swapping Platforms

Ah yes, the brave new world of swapping. You read about it everywhere. Germany’s cultural avant-garde is discussing the „ownerless society“, the „sharing economy“ and whatever it’s all called in the feuilletons. After reading the latest brandeins over breakfast – its theme, of course, was also swapping – I thought to myself: I’ve had boxes full of old horror films standing around in my basement for years, slowly moldering away, so let’s try out this swapping business with them. Getting rid of FSK-18 films isn’t easy anyway; Ebay doesn’t allow sales of items without a youth rating. But swapping platforms like hitflip.de and tauschticket.de make it possible.

Wikipedia had nothing abysmally evil to report about Hitflip or Tauschticket. So, in my beginner’s naivety, I signed up at hitflip.de and immediately started listing some of my films. For each film, the site suggested a price – not in euros but in „Flips“, the virtual currency on hitflip.de. Probably for legal reasons, I thought. A few minutes after I had listed my first films, the first swap requests came in. Hitflip has a waiting-list feature: someone had apparently placed a request for one of my films days, weeks or months ago, specifying a maximum price and a minimum condition, and it now sprang into action. I liked that. I had been prepared to keep hoarding the DVDs for weeks, always on call, in case someone decided to take one. In no time, 10 of my DVDs were gone. That’s when I got my first shock. There was no reimbursement of shipping costs in any form, and I’m not allowed to simply send FSK-18 films by regular mail. I am legally obliged to make sure that no minor can open the package. That means registered mail with personal delivery only, which costs 5.20 euros at Deutsche Post. Argh. I had offered some of my films for as little as 3 Flips, following Hitflip’s suggestions. That now meant losing money on those deals. I decided to cancel some of the particularly ruinous swaps. Hitflip warned me that too many cancellations could be punished with worse waiting-list placements. Fine, punish me then. After comparing with the competitor tauschticket.de, I found Hitflip pretty expensive anyway: the „buyer“ pays a fee of 0.99 euros per film, and even more for higher Flip prices. That was enough for me.

I quickly deleted all my not-yet-swapped films from Hitflip. So let’s try Tauschticket. That didn’t work at first, because their system thoroughly choked on my 20-character password and reported database errors. It took me until the next evening to figure out that this was the cause.

To cut an annoying story short (maybe I’ll write a longer article about my experiences in the world of online swapping): hitflip.de looks to me like a sinking ship. For a film I had listed at a megalomaniacal 30 Flips (just to see whether anyone would buy it), I actually received 30 Flips within 24 hours. At first I thought: excuse me? Then a bad feeling crept up on me. There may still be plenty to swap there in terms of quantity, but nothing of quality. I wouldn’t even take the DVDs still on offer there as a gift – and thanks to the virtual currency, I’m even forced to pay 99 cents for one, because otherwise I can do nothing with the 100 Flips in my account. That is the trap of these swapping platforms: if you don’t watch very carefully that you actually get something you want for your virtual euros/Flips/billets/tickets/tokens, your activity on such a platform is one big losing deal. I don’t want to complain – I traded a few horror films for worthless Flips, plus the postage I had to shell out, so maybe I’m down 50 euros. That’s getting off lightly. Who knows what castles in the air of Flips some other users have piled up there? Presumably 30 Flips for my film, which is actually quite decent, was still too little; maybe I could have gotten 100 Flips for it. But what good would that do me? For an economist, the platform might be an interesting research space. Will prices now rise to the possible maximum, because people are willing to pay insane prices to trade their Flips for something halfway usable? Will they end up buying all the useless junk (random single CDs from the 90s) out of sheer necessity? And what becomes of my lonely 100 Flips when hitflip.de shuts down?

What makes Hitflip particularly ghostly is that the platform generally doesn’t show a list of all offered items but a list of all offerable or ever-offered items. If you go into the category Films → Comedy, for example, you find two screen pages of unsellable bargain-bin DVDs and then thousands of screen pages of existing comedies that nobody on Hitflip offers and probably nobody ever will again (unless they’re as stupid as me…). Next to each of these ghosts it simply says „no offer“ or „not offered“. You can, of course, put yourself on the waiting list. It’s like a ghost town full of abandoned houses, with „for rent“ or „for sale“ signs everywhere… creepy. Incidentally, it gives me a certain pleasure to imagine that there are users with lots of Flips and lots of free time who, out of sheer desperation, put themselves on every one of those waiting lists for items that aren’t offered.

My recommendation before joining such platforms is therefore:

First take a close look at what is actually still available. Can you find out how many items are being offered? And after joining: don’t list your whole damn collection right away just because it’s so conveniently easy! Start with one DVD, feel free to set the price distinctly too high, and wait. If someone snaps it up immediately, something is seriously wrong.

Reconstructing Ecstatica’s Level Geometry MK2

Amazing! As it turned out, what I expected to be the navigation and collision mesh is actually almost the complete level geometry, including walls, stairs, windows, etc. What’s missing is triangle-based geometry like roofs and windows, as well as all the static ellipse-based models (flowers, grass, etc.). Nonetheless, this is great news, because the whole game world can now be explored and looked at in 3D!

Extracted Level Geometry from Ecstatica

Reconstructing Ecstatica’s Level Geometry

Alone in the Dark 3

Ecstatica 1 takes place in a small village upon a rock, surrounded by a deep valley. The walkable area is restricted to this very rock, but it is still quite big and it is open – not just a bunch of rooms but an exterior area. If you take a closer look at Ecstatica’s backgrounds, you will be puzzled by the really high quality of the level geometry. Yes, there is real geometry behind the backgrounds. It even looks like not only the characters are made of ellipses but the walls, the floor and the plants as well. Those stone floors and walls look just like something you would use displacement mapping or metaballs for today. Compare that to titles like „Alone in the Dark“ (see the screenshot below), which were using painted environments. Alone in the Dark is quite a visually appealing title, but having actual geometry to render the backgrounds from is something different. Remember: when Ecstatica shipped, it was 1994! How the hell did those Britons do this? This was not a multi-million dollar production. On what hardware were they able to create and render this vast amount of geometric data?

Screenshot demonstrating the high quality of Ecstatica's level geometry

I recently worked on some code that reconstructs the level geometry based on the set of about 240 background images. I managed to mix this reconstructed geometry with the collision and navigation mesh. The current state already looks interesting. I probably won’t be able to explicitly reconstruct the complete geometry…

  • the available camera views are too sparsely distributed
  • the texture information I reconstruct quickly diminishes in quality the farther you get from the camera, due to the projection effect
  • it would require a LOT of post-processing in order to merge and clean up the generated mesh data
  • it’s actually too much geometry. Given my 8 gigs of RAM, I’m unable to represent all 340 backgrounds in one Blender session – even though I threw away around 99.5% of all generated polygons (using the „Decimate“ modifier).

How did they do it in 1994? My current assumptions are

  • They chopped the game world into chunks digestible for 1994 hardware
  • For individual chunks they used some CAD software to create coarse geometry (walls, floors, stairs, etc.)
  • They used a custom raytracer to shoot rays through the world, querying the relevant geometry chunk and then, as a final step, generating the ellipses on the fly using some form of texture mapping. I base this assumption on the fact that the stone walls show certain patterns. The basic principle of this last step is actually rather simple: instead of calculating the ray intersection with a plane from the coarse geometry, you put a set of ellipses there, each described only by its center and its radii. Because you’re systematically scanlining over the sensor area of your virtual camera, you should be able to keep in memory only those ellipses which are actually hit by rays.
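The core of that assumed last step – intersecting a camera ray with an ellipsoid – is cheap: scale space by the radii so the ellipsoid becomes a unit sphere, then solve a quadratic. A sketch (my own illustration, not Ecstatica’s actual code):

```python
import math

def ray_ellipsoid(origin, direction, center, radii):
    """Return the smallest t >= 0 where origin + t*direction hits the
    axis-aligned ellipsoid (center, radii), or None if the ray misses.
    Works by scaling space so the ellipsoid becomes a unit sphere."""
    o = [(origin[i] - center[i]) / radii[i] for i in range(3)]
    d = [direction[i] / radii[i] for i in range(3)]
    # Solve |o + t*d|^2 = 1 for t.
    a = sum(x * x for x in d)
    b = 2 * sum(o[i] * d[i] for i in range(3))
    c = sum(x * x for x in o) - 1.0
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearer intersection first
    if t < 0:
        t = (-b + math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else None
```

With something this small per primitive, a scanline renderer only ever needs the handful of ellipsoids the current rays can hit – consistent with the memory budget of 1994 hardware.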

Current state of my attempt to reconstruct Ecstatica's level geometry

Python script to calculate the Madelung constant of an infinite lattice

During my diploma thesis in physics, I had to calculate the Madelung constant for a lattice of ZnO supercells. For this purpose, I implemented a small Python program. The only prerequisite is SciPy. The script calculates the Madelung constant for an infinite lattice of point charges, neutralized by a counter-charged jellium background. All documentation is inside the file madelung.py. There’s another script, sc_mad.py (supercell Madelung constant), which demonstrates reading a POSCAR file describing an atomic lattice and calculating the Madelung constant for it.
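For a feeling of what such a calculation involves, here is a tiny self-contained sketch using Evjen’s method of direct summation for the classic rock-salt (NaCl) lattice – a simple illustration, not the jellium-based approach implemented in madelung.py:

```python
import math

def madelung_nacl(n):
    """Evjen-style direct sum for the rock-salt Madelung constant:
    sum (-1)^(i+j+k+1) * w / r over a cube of half-width n, where sites
    on the cube surface get fractional weights (1/2 per boundary coordinate)
    so that every partial cube is charge-neutral and the sum converges."""
    total = 0.0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                if i == j == k == 0:
                    continue  # skip the reference ion
                w = 1.0
                for c in (i, j, k):
                    if abs(c) == n:
                        w *= 0.5
                r = math.sqrt(i * i + j * j + k * k)
                total += (-1) ** (i + j + k + 1) * w / r
    return total
```

Already the smallest cube (n = 1) gives about 1.456, and for modest n the sum closes in on the known NaCl value of about 1.7476 – the boundary weighting is what makes this conditionally convergent series behave.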

Maybe someone can do something useful with this stuff. I would be happy about feedback on whether the implementation actually works for your specific case 🙂

https://github.com/fHachenberg/pymadelung

Approaching a breakthrough

Hi, it’s been a long time again since my last update. For some months I was lacking the motivation to continue work on my Ecstatica project. But during the last week of my Christmas vacation I started to work on it again, and finally all the frustrating research into the data structures employed to describe characters in Ecstatica is starting to show signs of success. I think I have a pretty good picture now of how Ecstatica does it, and therefore I was able to put this knowledge into a little Python script run inside Blender to generate Ecstatica characters.

As you can see, the main character is missing its nose. I currently assume that it is described by triangles, which are not generated yet. I’m working on that. After that, the next milestone will be to translate character animations into Blender animations.

Weekly Update

Hello out there, a lot of time has passed since my last update. I was very busy in real life but this weekend I finally found some time again to work on my research on Ecstatica.

I am now able to read the z-data in the view data files (in the subfolder „views“). The attached screenshot shows such a piece of data.

depthbuffer

Currently I’m working on a number of Blender plugins to import Ecstatica data files:

  • characters and their animations
  • the complete game world, puzzled together from all the camera views.

Ubuntu 11.10: Crackling noise in VLC audio dependent on CPU usage

When trying out VLC in Ubuntu 11.10 today, I had the problem that VLC’s audio output contained a crackling, sizzling noise. The higher the CPU usage (when loading a program, for instance), the more intense the noise.

After playing around with the config panels, I realized that PulseAudio was set to „Dolby Surround 4.0“ (in the system’s main audio config dialog). Changing this value to „Stereo Duplex“ solved the problem.

Weekly Update

Ecstatica

I found out this week that Ecstatica’s FANT files are actually stacks of FANT files. In „ecstatic.“, for example, around 2000 FANT files are present. The file „offsets.“ contains offsets which are used to access the large file at the correct position. In „ecstatic.“, the individual FANT files often contain only a single sound or a single list of scene events.
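Splitting such a stack is straightforward once the offset table is parsed. The sketch below assumes „offsets.“ is a flat table of little-endian 32-bit offsets into the container – the actual field width and endianness would have to be checked against the real data:

```python
import struct

def split_stack(container_path, offsets_path):
    """Split a stacked container file into its sub-files, using a companion
    offsets file. Assumption (to verify against the real FANT data): the
    offsets file is a flat array of little-endian 32-bit byte offsets."""
    with open(offsets_path, "rb") as f:
        raw = f.read()
    count = len(raw) // 4
    offsets = list(struct.unpack("<%dI" % count, raw[:count * 4]))
    with open(container_path, "rb") as f:
        data = f.read()
    offsets.append(len(data))  # sentinel: the last chunk runs to end of file
    return [data[offsets[i]:offsets[i + 1]] for i in range(len(offsets) - 1)]
```

Each returned chunk would then be fed to the existing single-FANT parser.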

So I had to expand my code to handle these lists of FANT files, and it does now. The XML export and import also seem to work now for the case of multiple FANT „objects“ within one file.

I had never tested my XML export/import code with sounds before, and a problem arose: originally I intended to include the binary wave data for each sound in a CDATA section within the XML file. But it turned out that this is not possible, because XML forbids certain characters even within CDATA sections. So what I do now is export the wave data into separate binary files and include a reference in the XML data. That way I was able to completely read in „ecstatic.“, export it to XML, reimport it from XML and export it to the FANT file format again. The resulting FANT file (correctly: stack of FANT files) is binary-identical to the original „ecstatic.“ file.

Procedural Graphics

Today I travelled across a number of sites about procedural terrain generation and procedural graphics in general.

Here are my links:

Other Stuff

I stumbled across this amazing Flash animation visualizing the different scales at which things in our universe exist. Take a look at the main site as well, because there’s a lot of other original Flash stuff available.