CS 176 (Winter 2011)

Project 2 Description

In this assignment you are asked to implement texture synthesis. Your program should:

- read in an exemplar (you can find interesting exemplars [and results] at http://www.cs.utah.edu/~michael/ts/examples.html and at http://research.microsoft.com/en-us/um/redmond/projects/ParaTexSyn/results.htm) and produce a larger synthesized texture as output (say 512x512)
- the program should support control of some parameters that are natural for your algorithm. One such *example* is the neighborhood size; others might be the number of passes or the choice of initialization. Don't go crazy, but you will likely discover that some parameters are natural candidates for user control at invocation time rather than compile time (a small command-line sketch follows this list)
- use Ashikhmin's idea of searching among shifted candidates only. Is it useful to throw in a few random search locations to "mix things up"? (try it...) A sketch of this coherence search appears after this list
- you will need to think about initialization and boundary issues with regard to both the exemplar and the output texture
- consider the use of a bias image (in multi-pass synthesis the output of the previous pass serves as the bias for the next pass). On the zeroth pass the bias image may be empty or a user-supplied partial image (see Ashikhmin's paper). In either case, the bias image is used to complete the L-shaped search neighborhood wherever output pixels have not been synthesized yet (also shown in the sketch after this list)
- consider different synthesis orders (scanline, subsampled in parallel, ...) and see whether they make a difference.
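
To make the parameter-control point concrete, here is a minimal sketch of run-time control in Python; the flag names and defaults are my own assumptions, not requirements of the assignment:

```python
# Minimal sketch of exposing synthesis parameters at invocation time
# (flag names and defaults are assumptions, not part of the assignment).
import argparse

parser = argparse.ArgumentParser(description="Texture synthesis (sketch)")
parser.add_argument("exemplar", help="path to the exemplar image")
parser.add_argument("--out-size", type=int, default=512,
                    help="width/height of the synthesized texture")
parser.add_argument("--neighborhood", type=int, default=5,
                    help="neighborhood size, e.g. 5 for a 5x5 window")
parser.add_argument("--passes", type=int, default=2,
                    help="number of synthesis passes")
parser.add_argument("--init", choices=["random", "bias"], default="random",
                    help="initialization strategy")
args = parser.parse_args()
```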
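
The following Python sketch illustrates one way to combine the L-shaped causal neighborhood, the bias-image fallback, and Ashikhmin's shifted (coherence) candidates for a single output pixel. It assumes scanline order, toroidal boundaries on both images, and an array src that records the exemplar coordinate each output pixel was copied from; none of these choices are mandated here.

```python
# Sketch of one synthesis step: L-shaped neighborhood completed from a bias
# image, plus Ashikhmin-style coherence candidates.  Layout and names are
# illustrative choices, not prescribed by the handout.
import numpy as np

def l_shaped_offsets(half):
    """Causal offsets for scanline order: rows above plus pixels to the left."""
    offs = [(dy, dx) for dy in range(-half, 0) for dx in range(-half, half + 1)]
    offs += [(0, dx) for dx in range(-half, 0)]
    return offs

def gather_neighborhood(out, bias, synthesized, y, x, offsets):
    """Collect colors around (y, x), falling back to the bias image wherever
    the output pixel has not been synthesized yet."""
    h, w = out.shape[:2]
    vals = []
    for dy, dx in offsets:
        yy, xx = (y + dy) % h, (x + dx) % w        # toroidal output boundary
        vals.append(out[yy, xx] if synthesized[yy, xx] else bias[yy, xx])
    return np.asarray(vals, dtype=np.float64)

def coherence_candidates(src, synthesized, y, x, offsets, ex_h, ex_w):
    """Ashikhmin's shifted candidates: for each synthesized neighbor q copied
    from exemplar location src[q], propose src[q] shifted back by the offset."""
    h, w = synthesized.shape
    cands = set()
    for dy, dx in offsets:
        yy, xx = (y + dy) % h, (x + dx) % w
        if synthesized[yy, xx]:
            sy, sx = src[yy, xx]
            cands.add(((sy - dy) % ex_h, (sx - dx) % ex_w))
    return cands

def best_candidate(exemplar, out, bias, src, synthesized, y, x, half):
    """Pick the candidate whose exemplar neighborhood best matches the
    bias-completed output neighborhood at (y, x)."""
    offsets = l_shaped_offsets(half)
    target = gather_neighborhood(out, bias, synthesized, y, x, offsets)
    ex_h, ex_w = exemplar.shape[:2]
    best, best_d = None, np.inf
    for cy, cx in coherence_candidates(src, synthesized, y, x, offsets, ex_h, ex_w):
        vals = [exemplar[(cy + dy) % ex_h, (cx + dx) % ex_w] for dy, dx in offsets]
        d = np.sum((np.asarray(vals, dtype=np.float64) - target) ** 2)
        if d < best_d:
            best, best_d = (cy, cx), d
    return best   # None only before any neighbor exists; seed that case randomly
```

A full synthesizer would loop this over the output pixels (possibly mixing in a few random candidates, as suggested above) and repeat for several passes, feeding the previous output back in as the bias.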

As a second part of the assignment, choose *one* of the following:
- do all texture generation in appearance space. You will need to transform the exemplar into appearance space; you may use the code of Sam Roweis to accomplish this. The idea is described in this paper: http://books.nips.cc/papers/files/nips10/0626.pdf, and the corresponding Matlab code can be downloaded at http://www.cs.toronto.edu/~roweis/code.html. You will also need to think about a file format for writing out the transformed, PCA-reduced image together with the associated basis vectors, to be used by your synthesis algorithm (a rough sketch follows this list).
- enhance your neighborhood search with the k-coherent neighbors idea. Implement this as a separate preprocess that annotates your input file with extra data at each pixel: the k nearest pixels under the neighborhood distance (this will depend on the neighborhood size...). Use brute force to find these. Keep in mind that you will want to run the preprocess for different neighborhood sizes, depending on which size turns out to be right for a given image in your synthesis algorithm (a brute-force sketch follows this list).
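
For the appearance-space option, here is a rough numpy sketch of what the transform could look like. The handout suggests Roweis's EM-PCA Matlab code; plain SVD is used here purely for illustration, and the neighborhood size and target dimensionality (8) are assumptions:

```python
# Sketch of building a PCA-reduced appearance-space exemplar.  SVD stands in
# for Roweis's EM-PCA code; window size and `dims` are assumptions.
import numpy as np

def appearance_space(exemplar, half=2, dims=8):
    """Stack the (2*half+1)^2 RGB neighborhood of every exemplar pixel,
    PCA-reduce it to `dims` channels, and return the reduced image plus the
    basis and mean needed later when comparing neighborhoods."""
    h, w, c = exemplar.shape
    size = 2 * half + 1
    feats = np.empty((h * w, size * size * c))
    for y in range(h):
        for x in range(w):
            patch = [exemplar[(y + dy) % h, (x + dx) % w]
                     for dy in range(-half, half + 1)
                     for dx in range(-half, half + 1)]
            feats[y * w + x] = np.concatenate(patch)
    mean = feats.mean(axis=0)
    # Principal directions via SVD of the centered feature matrix.
    _, _, vt = np.linalg.svd(feats - mean, full_matrices=False)
    basis = vt[:dims]                        # dims x (size*size*c)
    reduced = (feats - mean) @ basis.T       # project each pixel's neighborhood
    return reduced.reshape(h, w, dims), basis, mean
```

The reduced image together with the basis and mean is exactly the data your file format would need to carry so that the synthesizer can work in the reduced space.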
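
For the k-coherent-neighbors option, a brute-force preprocess could look like the following sketch; the value of k and the use of a full square window are assumptions:

```python
# Brute-force sketch of the k-coherent-neighbors preprocess: for every exemplar
# pixel, store the k pixels with the most similar neighborhood.  k and the
# square window are assumptions; only the brute-force search is prescribed.
import numpy as np

def k_coherent(exemplar, half=2, k=4):
    h, w, c = exemplar.shape
    size = 2 * half + 1
    # Flatten every pixel's neighborhood into one row (toroidal boundaries).
    feats = np.empty((h * w, size * size * c))
    for y in range(h):
        for x in range(w):
            patch = [exemplar[(y + dy) % h, (x + dx) % w]
                     for dy in range(-half, half + 1)
                     for dx in range(-half, half + 1)]
            feats[y * w + x] = np.concatenate(patch)
    nearest = np.empty((h * w, k, 2), dtype=np.int32)
    for i in range(h * w):
        d = np.sum((feats - feats[i]) ** 2, axis=1)
        d[i] = np.inf                          # exclude the pixel itself
        idx = np.argsort(d)[:k]                # k most similar neighborhoods
        nearest[i] = np.stack([idx // w, idx % w], axis=1)
    return nearest.reshape(h, w, k, 2)
```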

There will be a 50% bonus for anyone implementing both enhancements *together*.

We will check your results in class. Reproduce some examples you find in the papers, both ones where the method works well and ones where it does not. Remember which parameters you found to work well (likely a function of image content) so that you can tell us. See whether you can have some fun with biasing. As before, include a pointer to your code and executable.

You will find that there are lots of little decisions (think of the boundary handling...) for which you have to make judgment calls as to what seems reasonable (sometimes you'll find a suggestion in one of the papers). This is fine. Just document it. Be prepared to try a couple of things. I.e., make sure you have some small examples to run things on and try out stuff. Another upshot of this is that it will be a good idea to keep everything very modular and to get started at least *thinking* about your code pretty much right away.