As I was browsing the web a few days ago, I started noticing that every time I scrolled down on the mouse wheel, Chrome would zoom the page to 90%.

At first I thought the Ctrl key was stuck, but after rebooting and testing a bit, I found out that was not the problem. The culprit was a utility I use called KatMouse.

I found this out after searching the web and coming across this forum post.

Michael Uttley writes at the Google product forums:

It looks like the zooming issue is related to mouse scrolling freeware. From seeing the comments below about “KatMouse”, I took a look at a tool I run called “WizMouse.” The purpose of these tools is to allow scrolling windows without focusing on them. Settings can be changed in these to fix the problem.

Resolutions:

KatMouse (per nsx82): Add the class Chrome_WidgetWin_1, and in the settings tick the option “Window has scrolling wheel support?” (try checked first)

WizMouse (per JoPa Mi): Uncheck “Enable mouse wheel for applications without mouse wheel support”

Last night, I was exporting a video I’ve been working on. Since I’ve lately become very interested in color correction in video, I was not pleased to see that the black level in the video was shifted brighter when I played it back in VLC. It looked like the black was clipped at broadcast levels; in short, black was rendered at (R, G, B) = (16, 16, 16) instead of the (0, 0, 0) I had been working towards.

Searching the web, I found lots of people having problems with gamma in mp4/h264 files exported from Premiere with QuickTime, but I didn’t use QuickTime, and a gamma shift wouldn’t move the black level anyway. I also tried a different codec, and so my suspicion started to shift towards VLC.

It turns out that VLC was partly the problem. In my search I found this blog entry by Ben Krasnow which pointed me in the right direction.

In the NVIDIA Control Panel you can go to “Adjust video color settings”; under “How do you make color adjustments” I chose “With NVIDIA settings”, clicked the “Advanced” tab, and set the dynamic range to Full.

Alternatively, in VLC I could go to Tools | Preferences | Video and disable “Use hardware YUV->RGB conversion”.

It seems VLC is not able to tell the NVIDIA drivers that it wants the full range, so the driver clamps the output to broadcast-safe levels.
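To see what this means for the numbers: broadcast-safe (limited-range) 8-bit video puts black at 16 and white at 235, while full range uses 0–255. A quick sketch of the expansion a player should be doing (my own illustration, not VLC’s actual code):

```python
def limited_to_full(v):
    """Expand a limited-range (16-235) 8-bit value to full range (0-255)."""
    # Broadcast-safe video spans 219 steps (16..235), so expanding to
    # full range removes the 16 offset and scales by 255/219.
    return max(0, min(255, round((v - 16) * 255 / 219)))

print(limited_to_full(16))   # broadcast black -> 0
print(limited_to_full(235))  # broadcast white -> 255
```

When the conversion is skipped, those (16, 16, 16) blacks are displayed as-is, which is exactly the washed-out look I was seeing.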

I made a small utility in Max4Live to display the frequency of the note played.
It also displays the frequencies four octaves above and below, in addition to the note name and the MIDI number.

Not a big deal, but can be useful in certain situations, to avoid having to do the math yourself.
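The math behind it is the standard equal-temperament formula: MIDI note 69 is A4 at 440 Hz, and each semitone multiplies the frequency by the twelfth root of two. A small sketch of the calculation (my own illustration, not the Max4Live patch itself):

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def midi_to_freq(note):
    """Frequency in Hz for a MIDI note number (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def note_name(note):
    """Note name with octave, e.g. 69 -> 'A4'."""
    return NOTE_NAMES[note % 12] + str(note // 12 - 1)

n = 69
print(note_name(n), midi_to_freq(n))   # A4 440.0
print(midi_to_freq(n + 12))            # one octave up: 880.0
print(midi_to_freq(n - 12))            # one octave down: 220.0
```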

I was trying to set up MinGW/MSYS with freeglut and GLEW, to be able to do some programming in OpenGL. None of the instructions I found seemed to be quite right, so I’ve tried to document the steps I needed to get it to work.

Get the latest mingw-get-inst from SourceForge (tested with mingw-get-inst-20120426):
http://sourceforge.net/projects/mingw/files/Installer/mingw-get-inst/

Run it, and install all the packages.

Open up an msys shell window. Unpack freeglut, and configure, build and install with the following commands:

./configure --with-x=no --prefix=/mingw
make
make install

Download the GLEW source (tested with 1.9.0):

http://glew.sourceforge.net/

Unpack in the msys shell, and build and install with the following commands:

make
make GLEW_DEST=/mingw install

That should have you up and running.

Some tutorials will tell you to link against library names that aren’t quite right for MinGW.

I built the first example from here: http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Introduction

with the following command:

g++ triangle.c -lglut -lglew32 -lopengl32 -o triangle

Hope this is of some use.

I have used Processing with an external editor for years, relying heavily on the “Use external editor” option in the IDE. That option is now gone, and has been replaced by a command line option instead.

At first the removal of that option annoyed me, but it has grown on me after I managed to set it up properly.

To set it up in Notepad++ you have to install a plug-in called NppExec. That can be done with the plugin manager.

When that is done, open up the source file for a Processing sketch, and press F6.

As command enter:

C:\path\to\processing\processing-java.exe --run --force --sketch="$(CURRENT_DIRECTORY)" --output=c:\temp\p5

Just change it to point to the correct path, and for output use some temporary folder you have set up. Works like a charm.

As I was also tipped about Sublime Text on Twitter after posting this, I decided to try it out. I set up a new build system with the following lines:

{ "cmd": ["C:\\path\\to\\processing\\processing-java", "--sketch=$file_path", "--run", "--output=c:\\temp\\p5", "--force"] }

Which also seems to work very well. As for how that editor is, I guess I will find out.

For The Gathering 2012 there is a music compo which requires you to only make music using the provided VSTi, called SaneStation.

Using it in Ableton Live while conforming to the rules of the compo might be a bit tricky, so I will do my best to describe how to set it up.

I presume you have been able to download and install the VSTi yourself, and that you see it in the VST-list in Ableton Live.

Start up a new project, and drag the SaneStation VSTi onto a MIDI track. Rename the track to SaneStation. This is not important, but I will presume you did, as it makes it easier to follow along here. This will be the main track for controlling the synth and editing patches. You should only have one instance of SaneStation, or it will probably crash, so keep just the one in there.

Now, add another MIDI-track. (Ctrl-shift-T)

Set “MIDI To” to point to “SaneStation”

In the dropdown box underneath, choose “1-sanestation”.

Add a new MIDI clip in the track, and set “Bank”, “Sub-Bank” and “Program” all to 1.

Then add some notes and hit play.

To tweak the sound, select the SaneStation track, and the VSTi interface should pop up.

Make a cool sound.

Add another MIDI track.

Set “MIDI To” to “SaneStation” and “2-sanestation”.

Add a MIDI-clip, and set “Bank” and “Sub-Bank” to 1, “Program” to 2. Add some notes, hit play.

To edit the sound for this channel, select the SaneStation track again, and in the Track View pane select “Instrument 1” in the SaneStation controller.

You can now tweak the sound in the VSTi GUI.

Repeat this for additional tracks.

Do all your composing and arranging and stuff like that, as usual.

When you are done, and ready to export you have to do the following.

In arrangement view

Make sure that every track starts at the same time; fill in with blank clips if necessary.

For each track, select all the clips in the track.

Right click, and choose “consolidate”

Right-click again on each track, and choose to export the MIDI clip.

Name them wisely.

Open the VSTi GUI for SaneStation, and export the sound bank to the same directory as all the MIDI clips.

You should now be able to put all the files together with the compile-utility that came with SaneStation. Refer to that manual/readme for how it is done.

There might be easier and/or better ways to do this in Live, but this was the thing I figured out could work, and it did in testing, so…

Let me know if something is hard to understand, or if there are any problems.

There is also some information about using VSTs with multiple channels in an article from Sound On Sound.

I just installed the Processing JS WordPress plugin, and this is just a quick test to see if, and how, it works.

I constantly come up with new ways I think I will use to keep track of the charge status of my camera batteries, but I tend to forget between each time I am on a shoot, so it is kinda silly. The other day I came up with this:

I used a marker, and added a plus sign to one side of each battery cover. When the battery is fresh, the cover is put on so that the plus faces the contacts. When it is flat, the other way around. Now it is easy to see whether any battery I fish out of my bag is freshly charged or used up.
I also added a piece of gaffers tape on the “negative” side, in case the plus gets rubbed off.

I am working on making a bass synth. It will be controlled by some old organ pedals, but currently it just works by adjusting pots.

In the schematics you can see the voltage regulator in the top left. I use 12 V DC in, from an old PC power supply, which I regulate down to 9 volts.

The 4093 contains four NAND gates with Schmitt triggers, and I use two of them. One controls the pitch of the sound, and the other controls the first one by turning it on and off, so you get a kind of arpeggio. You can also switch that on and off with SW1.

The 4040 is a frequency divider that is fed the output of the tone oscillator; its outputs are then fed into two rotary switches. That way you can mix two octaves together at the same time, getting a richer sound.
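Since each stage of the 4040 divides the frequency by two, every successive output sits exactly one octave below the previous one. A quick sketch of the arithmetic (illustrative only, not part of the circuit; the input frequency is made up):

```python
def divider_outputs(f_in, stages=12):
    """Frequencies at each output of a binary ripple counter like the 4040.

    Each stage halves the input frequency, i.e. drops the pitch one octave.
    """
    return [f_in / 2 ** (i + 1) for i in range(stages)]

outs = divider_outputs(1760.0)  # e.g. the oscillator running at A6
print(outs[0], outs[1])  # first two outputs: 880.0 440.0
# Picking two outputs with the rotary switches mixes two of these
# octaves into one richer tone.
```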

Most of this project was inspired by the book “Handmade electronic music”, by Nicolas Collins.

If you have any questions, or suggestions for improvements, please drop me a line.

In the last half of 2010, I was contacted by one of the professors at the Department of Music Technology at NTNU, to make a work for them, and I gladly accepted.

They wanted some sort of permanent installation that would make their hallways a bit more interesting, and they had previously seen one of my prints from the “Wasted time” project, and so I used that as a basis when I started thinking.

"Wasted time, 2009-03-27 11:49:01"

I also wanted to do something that was tightly connected with the department, and what they do, so sound would have to be, in some way, an element in the work.

As the work was to be permanent, and would be there for a long time, I wanted to make something that would need little to no maintenance, with no risk of it stopping to work in some way. It should also not be too obtrusive, since people would need to walk past it every day, and I didn’t want it to end up being an annoyance to the people who use the premises. With that in mind, I decided early on that I wanted to make some sort of generative prints, and started checking out possibilities at a print shop. I chose to make prints on acrylic plates.

I went to the location, and after deciding where I wanted the plates to hang when they were done, I recorded the ambient sound in the hallways, with microphones placed at the spots where the pictures would be. The sound was then cut to find some interesting segments, and then normalized. I then used the sound as data for drawing curves, circles and lines. This was done using the language Processing and the library minim.
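The basic idea of turning sound into drawing data can be sketched roughly like this (an illustrative sketch in Python, not the actual Processing/minim code; the canvas size and sample values are made up):

```python
def samples_to_points(samples, width, height):
    """Map audio samples in [-1, 1] to (x, y) points across a canvas.

    Time maps to x; amplitude maps to y, centered vertically.
    The resulting points can then be joined into a curve or polyline.
    """
    n = len(samples)
    points = []
    for i, s in enumerate(samples):
        x = i * width / max(n - 1, 1)
        y = height / 2 - s * height / 2  # +1 -> top edge, -1 -> bottom edge
        points.append((x, y))
    return points

pts = samples_to_points([0.0, 1.0, -1.0, 0.5], width=400, height=200)
print(pts)
```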

For more information you can read my more detailed technical description of the work.