Category: Programming

Installing mingw/msys with FreeGlut and Glew

I was trying to set up mingw/msys with freeglut and Glew, to be able to do some programming in OpenGL. None of the instructions I found seemed to be just right, so I’ve tried to document the steps I needed to get it to work.

Get the latest mingw-get-inst from SourceForge (tested with mingw-get-inst-20120426):
http://sourceforge.net/projects/mingw/files/Installer/mingw-get-inst/

Run it, and install all the packages.

Some more info here: http://mingw.org/wiki/Getting_Started

Download the source for freeglut here:
http://freeglut.sourceforge.net/index.php#download (tested with 2.8.1)

Open up an msys shell window. Unpack freeglut, and configure, build and install with the following commands:

./configure --with-x=no --prefix=/mingw
make
make install

Download the source for GLEW (tested with 1.9.0):

http://glew.sourceforge.net/

Unpack in the msys shell, and build and install with the following commands:

make
make GLEW_DEST=/mingw install

That should have you up and running.

Some tutorials and other resources will tell you to link against library names that aren't quite right in MinGW.

I built the first example from here: http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Introduction

with the following command:

g++ triangle.c -lglut -lglew32 -lopengl32 -o triangle

Hope this is of some use.

Using Processing 2.0 with Notepad++ or SublimeText as external editor

I have used Processing with an external editor for years, and I relied heavily on the “use external editor” option in the IDE. That option is now gone, replaced by a command line option instead.

At first the removal of that option annoyed me, but the new approach has grown on me since I managed to set it up properly.

To set it up in Notepad++ you have to install a plug-in called NppExec. That can be done with the plugin manager.

When that is done, open up the source file for a Processing sketch, and press F6.

As command enter:

C:\path\to\processing\processing-java.exe --run --force --sketch="$(CURRENT_DIRECTORY)" --output=c:\temp\p5

Just change the path to match your installation, and point the output to a temporary folder.

Works like a charm.

After posting this I was also tipped about Sublime Text on Twitter, so I decided to try it out.

I set up a new build system with the following lines:

{
"cmd": ["C:\\path\\to\\processing\\processing-java", "--sketch=$file_path", "--run", "--output=c:\\temp\\p5", "--force"]
}

This also seems to work very well. As for how that editor is, I guess I will find out.

Processing.js plugin for WordPress

I just installed the Processing JS WordPress plugin, and this is just a quick test to see if, and how, it works.

My first commissioned work

In the latter half of 2010, I was contacted by one of the professors at the department of music technology at NTNU to do a work for them, and I gladly accepted.

They wanted some sort of permanent installation that would make their hallways a bit more interesting, and they had previously seen one of my prints from the “Wasted time” project, and so I used that as a basis when I started thinking.

"Wasted time, 2009-03-27 11:49:01"

I also wanted to do something that was tightly connected with the department, and what they do, so sound would have to be, in some way, an element in the work.

As the work was to be permanent, and in place for a long time, I wanted to make something that would need little to no maintenance, with no risk of it failing in some way. It should also not be too obtrusive, since people will need to walk past it every day, and I didn't want it to end up being an annoyance to the people who use the premises. With that in mind, I decided early on that I wanted to make some sort of generative prints, and started checking out possibilities at a print shop. I chose to make prints on acrylic plates.

I went to the location and, after deciding where I wanted the finished plates to hang, recorded the ambient sound in the hallways, with microphones placed at the spots where the pictures would be. The sound was then cut down to some interesting segments and normalized. I then used the sound as data for drawing curves, circles and lines. This was done using the Processing language and the Minim library.
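To give an idea of the sound-to-geometry mapping, here is a minimal, hypothetical sketch in plain Java: it spreads normalized audio samples across the canvas width and maps each sample's amplitude to a y coordinate, producing the points of a curve. The class and method names are my own illustration, not code from the actual work, and the samples here are synthetic (in the work itself they came from Minim).

```java
import java.util.Arrays;

// Hypothetical helper: maps normalized audio samples (-1..1) to (x, y)
// points of a curve on a canvas of the given size. Illustration only;
// in the actual work the samples were read from a file via Minim.
public class SampleCurve {
    public static float[][] toPoints(float[] samples, float width, float height) {
        float[][] pts = new float[samples.length][2];
        for (int i = 0; i < samples.length; i++) {
            // x: spread samples evenly across the width
            pts[i][0] = width * i / (float) (samples.length - 1);
            // y: amplitude around the vertical center (positive samples go up)
            pts[i][1] = height * (0.5f - 0.5f * samples[i]);
        }
        return pts;
    }

    public static void main(String[] args) {
        float[] samples = {0f, 1f, -1f, 0f};
        float[][] pts = toPoints(samples, 300f, 100f);
        // e.g. pts[1] == {100.0, 0.0}
        System.out.println(Arrays.deepToString(pts));
    }
}
```

In a Processing sketch these points would then be connected with `curveVertex()` or `line()` calls to draw the final figure.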

For more information you can read my more detailed technical description of the work.

Contour tracing in Processing

I recently read the book Aaron's Code, and with that came the thought of writing some contour tracing functions. When, a few days later, I also got an idea for a project, I got to it. Searches on Google pointed me in the direction of the blob extraction algorithm that OpenCV uses, and so I found “A linear-time component labeling algorithm using contour tracing technique”.

Lately I have also started looking into using NetBeans as an IDE for programming with the Processing library, so I decided to implement the algorithm from the paper in Java, using Processing.

The code is provided below, as is. It can probably be optimized a bit, and/or rewritten to be better, but for now it works for me.

If you have any suggestions for changes, or if you use it, please leave a comment so I can check it out.

 /*
 * An implementation of
 * «A linear-time component labeling algorithm using contour tracing technique»
 * by Fu Chang, Chun-Jen Chen, and Chi-Jen Lu
 *
 * Written in Java for use with the Processing (www.processing.org) library.
 *
 * Code written by Greger Stolt Nilsen
 * (gregersn@gmail.com / http://gregerstoltnilsen.net)
 * Copyright 2010, Greger Stolt Nilsen
 *
 * Code is provided «as is», and no warranty is given. Use at your own risk.
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 *
 */
/*
 * Create a new instance with:
 * ComponentFinder finder = new ComponentFinder(this, image);
 * where "image" is the PImage (or PGraphics) to be traced.
 * use the find()-function, to get all the contours in the contours arraylist.
 * Each contour is by itself an ArrayList of PVectors.
 *
 * render_blobs() will return a PImage object that is a rendering of all the
 * blobs in the traced image.
 *
 * The image to be traced needs to consist of black (0x000000) objects on a
 * white background, so use the threshold filter to get to that.
 *
 * Also, unless the outer border of pixels is white, the finder will hang.
 */

package ComponentFinder;

import processing.core.*;
import java.util.ArrayList;

public class ComponentFinder
{
    PApplet parent;
    public ArrayList contours;
    int[] inout;
    int[] L;
    PImage I;
    int C;

    // An array with the 8 direction coordinates (x1, y1, x2, y2... x8, y8)
    int cd[] = {1, 0, 1, 1, 0, 1, -1, 1, -1, 0, -1, -1, 0, -1, 1, -1 };

    public ComponentFinder(PApplet p, PImage img)
    {
	parent = p;
	I = img;
	C = 1;
	L = new int[I.width*I.height];

    }

    int getNumContours()
    {
	if(contours != null)
	    return contours.size();
	else return 0;
    }

    public int find()
    {
		// Initialize the blob map to zero.
		for(int i = 0; i < I.width*I.height; i++)
		{
		    L[i] = 0;
		}

		// Make a new array list to store each contour.
		contours = new ArrayList();

		C = 1;
		
		int pos = I.width; // Start analyzing the image at the second line

		I.loadPixels();

		// Label the first line as blank pixels.
		for(int x = 0; x < I.width; x++)
		    L[x] = -1;

		for(int y = 1; y < I.height; y++)
		{
		    for(int x = 0; x < I.width; x++)
		    {
			if(L[pos] == 0) // Only process unlabeled pixels
			{
			    if(((I.pixels[pos]&0xffffff) != 0x000000))
			    {
				// White pixel
				L[pos] = -1;
			    }
			    else if(((I.pixels[pos]&0xffffff) == 0x000000))
			    {
				if((I.pixels[pos-I.width]&0xffffff) != 0x000000)
				{
				    // If the pixel above is a white pixel,
				    // then this is the start of an outer contour
				    L[pos] = C;
				    // Start tracing an outer contour
				    contours.add(contourTrace(new PVector(x, y), 0, C));
				    C++;
				}
				if(((I.pixels[pos+I.width]&0xffffff) != 0x000000) && (L[pos+I.width] == 0))
				{
				    // If the pixel below is a white pixel, then it is the start of an
				    // inner contour, and we get the id for the outer contour
				    // then start tracing the inner contour
					int l;
					if(L[pos] != 0) l = L[pos];
					else l = L[pos-1];

					// Label this pixel if it was still unlabeled.
					if(L[pos] == 0) L[pos] = l;
					contours.add(contourTrace(new PVector(x, y), 1, l));
				}
				if(L[pos] == 0) if(L[pos-1] > 0) L[pos] = L[pos-1];
			    }
			}
			pos++;
		}
	}
	I.updatePixels();
	return getNumContours();
    }

    // Traces a contour starting at the given pixel S,
    // r is set to 0 for outer contours, and 1 for inner contours.
    // c is the label to give the contour
    ArrayList contourTrace(PVector S, int r, int c)
    {
	int next = 8;
	ArrayList cont = new ArrayList(32);
	PVector current = new PVector();
	cont.add(S.get());
	if(r == 0) // Outer contour
	    next = Trace((PVector)cont.get(cont.size()-1), 7, c);
	else if(r == 1) // Inner contour
	    next = Trace((PVector)cont.get(cont.size()-1), 3, c);

	if(next == 8) return cont; // The pixel is isolated.

	PVector T = new PVector(S.x + cd[next<<1], S.y + cd[(next<<1)+1]);
	cont.add(T.get());

	next = Trace((PVector)cont.get(cont.size()-1), (next+6)%8, c);

	if(next == 8) return cont;

	while(true)
	{

	    current.x = (float)((PVector)cont.get(cont.size()-1)).x + cd[(next<<1)%16];
	    current.y = (float)((PVector)cont.get(cont.size()-1)).y + cd[((next<<1)+1)%16];
		next = Trace(current, (next+6)%8, c);

		// Check if we are back at start position
	    if((current.x == S.x) && (current.y == S.y) && (current.x + cd[next<<1] == T.x) && (current.y + cd[(next<<1)+1] == T.y))
	    {
		return cont;
	    }
	    else
	    {
		cont.add(current.get());
	    }
	}
    }

    int Trace(PVector P, int n, int c)
    {
		for(int i = 0; i < 8; i++)
		{
		    // if pixel at position n+i is black, return n+i
		    PVector ni = new PVector(P.x+cd[((i+n)%8)*2],
					    P.y+cd[((i+n)%8)*2+1]);


			if((I.pixels[(int)(ni.y)*I.width+(int)(ni.x)]&0xffffff) == 0x000000)
		    {

				L[(int)(ni.y)*I.width+(int)(ni.x)] = c;
				return (i+n)%8;
		    }
		    // if not, mark with a negative number
		    else
		    {
				if(L[(int)(ni.y)*I.width+(int)(ni.x)] == 0)
				    L[(int)(ni.y)*I.width+(int)(ni.x)] = -1;
		    }
		}
		return 8; // Should only happen for isolated pixels.
    }

    public PImage render_blobs()
    {
		PImage blobs = parent.createImage(I.width, I.height, PApplet.RGB);
		blobs.loadPixels();
		parent.colorMode(PApplet.HSB);
		for(int i = 0; i < L.length; i++)
		{
		    if(L[i] > 0)
				blobs.pixels[i] = parent.color(PApplet.map(L[i], 1, C, 0, 255), 255, 255);//|0xff000000;
		    if(L[i] == -1)
				blobs.pixels[i] = parent.color(0, 0, 255);
		    if(L[i] == -2)
				blobs.pixels[i] = parent.color(0, 0, 0);
		    if(L[i] == 0)
				blobs.pixels[i] = parent.color(0, 0, 128);
		}
		blobs.updatePixels();
		return blobs;

    }
}



Picasa face detection to Lightroom

I have written a Perl script to move data gathered from Picasa's face detection function into the XMP metadata of my NEF files, to be used in Lightroom.

Continue Reading