This is my first post, and what better topic to start with!
Memory compression has been my pet project for the last two years. It adds a compression layer to the swap path – whatever the OS swaps out gets compressed and stored in memory itself. This is a huge win over swapping to the slow hard disks typically used as swap devices. The biggest challenges: what to compress? How much to compress? How to manage the compressed blocks? How to handle incompressible data? And the list goes on…
This is all basically shifting load onto the processor to avoid slow disks. In the multi-core era the idea only gains in relevance – de/compression can be almost free if we use all those cores smartly :)
Project home: http://code.google.com/p/compcache/