Monday, May 28, 2012

bubble vs decorated sort

There were some comments on the previous post about using a limited number of bubble sort runs, as opposed to a full sort of the whole list weighted by original position, which was appealing in a Keep-It-Simple-Stupid kind of way.

So I'm running some performance tests comparing limited bubblesorting of lists against a decorated sort weighted by initial position.  Here is the code tested:

# This is run on the data eight times by the testing code
def bubblepass(sortlist):
    # One bubble pass, but skip the position just after a swap, so
    # no item can travel more than one cell per pass
    skip = False
    for a in xrange(len(sortlist) - 1):
        if not skip:
            if sortlist[a] > sortlist[a+1]:
                sortlist[a], sortlist[a+1] = sortlist[a+1], sortlist[a]
                skip = True
        else:
            skip = False
    return sortlist

def decsortundec(sortlist):
    # Decorate each item with a key blending its value and its original
    # position (integer division under Python 2), sort on that key,
    # then strip the decoration
    templist = zip(sortlist, range(len(sortlist)))
    declist = [(item[0] + (item[1] / 4), item[0]) for item in templist]
    declist.sort()
    return [item[1] for item in declist]

In performance tests on lists of 3200 random small integers, the decorated sort runs 40-60% faster than the bubble sort, despite creating three entirely new lists along the way.
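For the curious, the comparison was along these lines (a reconstruction, not my exact test rig; Python 3 syntax, so range and // stand in for xrange and Python 2's integer /):

```python
import random
import timeit

def bubblepass(sortlist):
    # One limited bubble pass: skip the slot right after a swap
    skip = False
    for a in range(len(sortlist) - 1):
        if not skip:
            if sortlist[a] > sortlist[a + 1]:
                sortlist[a], sortlist[a + 1] = sortlist[a + 1], sortlist[a]
                skip = True
        else:
            skip = False
    return sortlist

def eight_passes(data):
    # Eight successive limited passes over one working copy
    work = list(data)
    for _ in range(8):
        bubblepass(work)
    return work

def decsortundec(sortlist):
    # Decorate with value + position // 4, sort, strip the decoration
    declist = [(v + i // 4, v) for i, v in enumerate(sortlist)]
    declist.sort()
    return [v for _, v in declist]

data = [random.randrange(20) for _ in range(3200)]
bubble_time = timeit.timeit(lambda: eight_passes(data), number=10)
dec_time = timeit.timeit(lambda: decsortundec(data), number=10)
print("8 bubble passes:", bubble_time, "decorated sort:", dec_time)
```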

decsortundec uses list comprehensions, which are a powerful, yet to me somewhat confusing, aspect of the language.  To my understanding they are a way of quickly creating a new list from a source list, and along the way you can modify each item of the original list and/or choose whether to include it or not.  Note that this code runs in Python 2.7; in Python 3, zip() and range() return iterators instead of lists, so their results have to be wrapped in list() where an actual list is needed, and the integer division in the decorating step would need // rather than /.  (There might be a better way to do this, in Python 2, using functional programming, but my brain hasn't been twisted into thinking in that particular Lispy way yet.)
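For instance (a trivial standalone example, not from the project), one comprehension can both modify each item and filter at the same time:

```python
nums = [17, 23, 94, 81, 3]
# Modify each item (double it) and filter (keep values over 20) in one pass
doubled_big = [n * 2 for n in nums if n > 20]
print(doubled_big)  # → [46, 188, 162]
```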

Notes:
1. bubblepass is not a pure bubble sort: on each trip through the list it skips the position just after a swap.  That wouldn't matter in a true bubble sort, where we're ultimately trying to sort all the items as quickly as possible, but we need it here because we specifically don't want individual items to travel far in a single pass.
2. It is possible that PyPy will optimize the bubble sort better, maybe far better, since it uses fairly elementary Python, while the decorated sort leans on list comprehensions and the built-in sort, which CPython largely implements in optimized C.
3. I'm new to writing list comprehensions (which decsortundec uses twice), so it's possible there's a more efficient way to write this.  Does anyone know of a better way to factor an item's original index into its sorting weight?
4. It might be possible to use some of that list comprehension stuff on the bubble sort, but I don't see how to implement it off the top of my head.
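One possibility for note 3: let sorted() do the decorating via its key argument (a sketch in Python 3 syntax, with // for the integer division; enumerate and key= are standard Python, but the value + position // 4 weighting is just the same scheme as above):

```python
def decsortundec_keyed(sortlist):
    # enumerate() supplies each item's original index, and the key
    # function blends that index into the sort weight, so no separate
    # decorated list has to be built and then stripped afterward
    return [item for _, item in
            sorted(enumerate(sortlist), key=lambda p: (p[1] + p[0] // 4, p[1]))]
```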

Thursday, May 24, 2012

Weird simulation idea

I think I've had something of a breakthrough on this project.  Think about this for a bit:

Ultimately, the simulation is just a kind of inefficient sorting operation.  Fluids travel from higher places to lower places.  Other things get in the way.  If things aren't falling, then they might migrate side to side.

Sorting is a hard problem, but it's one that a lot of effort has been poured into.  Python particularly has some good sorts written in optimized C.

We already use Python's sort functions in the code, to arrange the positions of different slices of liquid in a single cell.  My idea is to extend it beyond that cell -- to sort whole columns of fluid at once.

The biggest problem with this is that sorting works too well: heavy fluid will transmigrate from the top of a column to the bottom instantly, with no possibility of affecting anything along the way.  There needs to be some way to limit how far an individual slice of fluid can travel in a single pass.  There are no sorts that I'm aware of that support this kind of travel limiting, but it might be doable using a Python trick more frequently seen back in older versions of the language, called Decorate-Sort-Undecorate.  That is, you first modify the contents to encode additional information into them, weighting each item in the sort by some variable, in this case its initial position in the column.  The weights would have to be chosen so that the fluid weights overpower the positional weights over short distances but not long ones.  Then we sort the column, and finally remove the extra information (or in our case, overwrite it with new positional information the next time we do a sort).
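A toy demonstration of the weighting idea (my own sketch: densities are small integers, larger means heavier, and the position // 4 weight is just one possible tuning, not a settled choice):

```python
def limited_sort(column):
    # Decorate: blend each slice's density with its starting position,
    # so positional weight eventually overpowers a small density edge
    declist = [(density + pos // 4, density)
               for pos, density in enumerate(column)]
    declist.sort()
    # Undecorate: keep only the densities, now in their new order
    return [density for _, density in declist]

# A single heavy slice (density 2) at the top of a column of density-1 fluid
column = [2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
# The 2 sinks seven cells, not all the way to the bottom
print(limited_sort(column))  # → [1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1]
```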

Another problem is that we would still need to make another pass through each column to handle sideways motion.

How does that sound to you folk?

Tuesday, May 15, 2012

Vector tiles

I figure, caution to the wind.  I'm going to have a go at a completely vectorized graphical look for the game.  Not only will it give In Profundis a distinctive appearance, but it will also make it easier and faster to apply dynamic graphic effects, such as making liquids look more visually appealing and producing the "outline effect" for regions outside of the player's direct view.  It would also look somewhat closer to the pitch videos I posted to YouTube a year ago.

So, my current objective is to implement this system in pyglet.  After that, it's probably time to go in and scale back the random generation a bit, which after some thought I think is a bit too random.  The game world was more interesting when it was just a big blob of random shapes than the rather too strictly laid out shafts and corridors of the current system.

Monday, May 14, 2012

Pyglet Progress

There is a real sense of having gotten over a hump right now.  I have the beginnings of a pyglet tile engine worked up.  What is more, I have gotten the engine to render a 2D, hardware-accelerated, gradient-shaded polygon to my specifications.  It feels a little like I, a piddly little Python hacker, shouldn't have access to this kind of power.

Right now I'm having to fight hard not to turn the whole game into an Asteroids-style vector art extravaganza.  I might just do that yet.  I make no secret that I'm not happy with the look of the game generally.  The water and liquids look okay, but the stone looks like I did it up in 15 minutes in MSPaint (when actually it took me a bit longer than that, and I used Paint.NET).

Just imagine it: an exploration game with neon beams shining around the screen, representing walls and liquids.  Yeah, I think that kind of thing is awesome.  The perfect arcade game, in my book, would be something with a Vectorscan monitor and a Williams sound chip.

Sunday, May 13, 2012

Ah-ha!

I've got a basic tile grid working under pyglet!  Now we are so happy, we do the dance of joy.

Pygame, although easy to use, has always been a performance bottleneck for the game because it's built on SDL and does its drawing in software.  Pyglet is a lot more closely coupled with OpenGL.

Currently, a game frame is drawn in three steps:
- Background layer/gasses
- Fluids/stone/other objects
- Sprites

My idea is to separate these things out into sprite batches and vertex lists in pyglet:
- Background layer/gases: vertex list
- Fluids: vertex list
- Stone and other objects: sprite batch
- Player and other objects: sprite batch

Fluids particularly have been something I've worried about; I stopped using tiles for those in Pygame some time back due to the lack of flexibility.  Fortunately the polygonal shapes I've been using in Pygame look like they map to pyglet fairly nicely.

What is more, switching to pyglet will make it easier to use the JIT compiler of Psyco's successor, PyPy, to get around the speed limitation imposed by Python's interpreted nature.

Saturday, May 12, 2012

pyglet Tile Engine Progress

This is just to let you know that I'm still working on the pyglet tile engine, here and there, and that it's making my brain hurt.

Reading up on some of the more advanced features of Numpy makes my brain hurt in a different way.  Whence comes the mindpain?  This is legal:

>>> import numpy
>>> testarray = numpy.array([17,23,94,81,3,76,12,52,80,35,102,69,52])
>>> boolarray = testarray > 51

Now, boolarray is an array parallel to testarray, containing only booleans: each slot of boolarray is True if the condition held for the matching slot of testarray, and False otherwise.

What's more, we can now do this:
>>> testarray[boolarray]
array([ 94,  81,  76,  52,  80, 102,  69,  52])

That is, we can index an array with a parallel boolean array.  The resulting array object is actually a separate copy, not a view into the original (slicing is what produces views); but if we assign through a boolean index directly, as in testarray[boolarray] = 0, what gets changed is the original array.

Some similar things can be done in stock Python, I gather, using moderate-level Pythonic mojo like list comprehensions, but doing it with Numpy, if I'm understanding this correctly, has the advantage of being optimized.  Access to Numpy arrays benefits from all the elements being the same type of data; behind the scenes, indexing into these arrays is done using C-style pointer arithmetic instead of Python list redirections.  In particular, performing an action over every element of a Numpy array is supposedly quite fast compared to the same thing in plain Python... which might provide a clue as to why I'm researching this feature.
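The stock-Python equivalent of the boolean-mask trick looks something like this (a sketch; unlike Numpy's version, it builds real throwaway lists, and the filtered result is a copy with no way to write back through it):

```python
testlist = [17, 23, 94, 81, 3, 76, 12, 52, 80, 35, 102, 69, 52]

# A parallel list of booleans, like the Numpy boolarray
boolmask = [x > 51 for x in testlist]

# "Indexing" with the mask: keep only the items where the mask is True
selected = [x for x, keep in zip(testlist, boolmask) if keep]
print(selected)  # → [94, 81, 76, 52, 80, 102, 69, 52]
```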