
Developing With Docker (Day 2), caching debs using squid-deb-proxy

Today’s step is going to take a bit of a detour. One of the things that kills you when rebuilding a box is having to download dependencies from the internet every single time. With big files this can be quite slow, and it can keep you from working at all if you are offline.

So I’m going to try and cache as many of these files on my host box as possible so this isn’t an issue. Of course, if I add new dependencies while I’m offline, it won’t work unless I’ve somehow cached them before. But I should at least be able to rebuild the machine.

So if you pull down the repo, there will be a tag for Day 2.

Diff of the changes from day1 to day2 (git diff day1 day2):

diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..909475b
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1 @@
+30proxy
diff --git a/Dockerfile b/Dockerfile
index d643899..641429e 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -1,5 +1,9 @@
 from ubuntu:12.10
 
+# set your host up as the apt cache for speed
+
+add 30proxy /etc/apt/apt.conf.d/30proxy
+
 # update system and install dependencies
 
 run apt-get update
diff --git a/hostconfig.sh b/hostconfig.sh
new file mode 100755
index 0000000..45754e8
--- /dev/null
+++ b/hostconfig.sh
@@ -0,0 +1,3 @@
+sudo apt-get install squid-deb-proxy
+HOST_IP=`ifconfig docker0 |grep inet|head -1|sed 's/\:/ /'|awk '{print $3}'`
+echo "Acquire::http::Proxy \"http://$HOST_IP:8000\";" > 30proxy
\ No newline at end of file
diff --git a/shell.sh b/shell.sh
new file mode 100755
index 0000000..50a0f87
--- /dev/null
+++ b/shell.sh
@@ -0,0 +1,2 @@
+#! /bin/bash
+docker run -i -t u1210_nodebase /bin/bash
\ No newline at end of file

* Ignore 30proxy, which is a generated file, so we don’t check it in accidentally.
* In the Dockerfile, tell docker to copy the 30proxy file over to /etc/apt/apt.conf.d/30proxy, so that apt is configured to use the proxy.
* Create a hostconfig.sh script which you can (and should) run to install squid-deb-proxy on your host and correctly configure the container to point at the host for its apt cache (the generated 30proxy file looks like the example below).
* Add a shell.sh script which makes it easier to shell into this instance.
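
For reference, the generated 30proxy file ends up containing a single line like this (the IP is whatever your docker0 interface has; mine happened to be 172.16.42.1, which you’ll see again in the squid logs below):

Acquire::http::Proxy "http://172.16.42.1:8000";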

Testing

We can see that this is working by opening a second terminal window and running “tail -f /var/log/squid-deb-proxy/access.log”.

Then run build.sh in the original terminal. You should see lines like:
1375291602.249 2235 172.16.42.1 TCP_MISS/200 861368 GET http://archive.ubuntu.com/ubuntu/pool/universe/o/openssl098/libssl0.9.8_0.9.8o-7ubuntu3.1_amd64.deb - DIRECT/91.189.92.176 application/x-debian-package

Which means a cache miss, and a fetch from the real source (TCP_MISS). Or lines like this:
1375295650.284 4 172.16.42.137 TCP_MEM_HIT/200 202735 GET http://archive.ubuntu.com/ubuntu/pool/main/m/mpfr4/libmpfr4_3.1.0-3ubuntu3_amd64.deb - NONE/- application/x-debian-package

Which means squid served the file from memory (no external network access).

That’s about it. Tomorrow we’ll get back into building out the development container.

Install Spotify on Linux Mint 15

Spotify’s instructions for setting up on Ubuntu no longer work on Mint 15. One easy way to install it is to just download the deb directly from their apt repository. Just choose the correct version (32 or 64 bit) out of this directory:
http://repository.spotify.com/pool/non-free/s/spotify/
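
Something like this should do it (a sketch; the exact filename changes as they release new versions, so substitute whatever .deb is actually listed in that directory):

# pick the real filename out of the directory listing first
wget http://repository.spotify.com/pool/non-free/s/spotify/spotify-client_VERSION_amd64.deb
sudo dpkg -i spotify-client_VERSION_amd64.deb
sudo apt-get install -f  # pull in any dependencies dpkg complained about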

Developing With Docker (Day 1)

TL;DR: Over this series of articles I’m going to try and build up an environment for developing node.js apps in a re-usable container (vm) without polluting my host machine with various versions of node.js, mongodb, etc.

For a while I’ve been using vagrant and chef/puppet to develop in sandboxed environments without pulling dependency hell onto my host computer. In fact, my host computers (laptop and desktop) are now pretty vanilla Linux Mint systems with editors and git installed. I actually version what’s on these boxes using puppet.

However, I have a few problems with vagrant for development. I don’t know if docker will solve these but we’ll see.

  • Host/guest shared filesystems have lots of permissions issues on Linux, and even more on Windows. The mismatches between usernames and root/non-root cause real problems when developing code on your host but building / generating on the VM.
  • Guest VMs use up RAM. This isn’t an issue on desktops nowadays, with 24GB+ easily accessible, but it’s a big deal on laptops, which are still averaging 8GB.
  • It’s not easy to checkpoint VMs or rebuild from a midway point. I may be missing something here with vagrant, but it’d be nice to have a known-good base and restart from there.
  • You have to be online to get updates or build your box.

That said, I really like the reproducible system/machine configurations that vagrant and chef/puppet give you. I also really like being able to just check out a project, run “vagrant up”, and have a sandboxed environment ready to go (with some caveats).

Getting Started

You should install the latest docker. Follow the instructions on the site. I’m not going to update this to keep in sync with docker (which is moving quickly).

Dockerfile

Coming from vagrant I expected some form of smart configuration file. Docker doesn’t really do that. I was also expecting idempotent re-runs, like you get with puppet or chef. You don’t get that either. What you do get is a very simple command-by-command syntax which executes line by line. It’ll execute everything in the file every single time the build is run. So hopefully you’re a bashmaster.

Here’s the Dockerfile that I use to create a box with node.js installed on it:


from ubuntu:12.10

# update system and install dependencies

run apt-get update
run apt-get install -y python2.7 python build-essential wget

# get and build node

run cd /tmp && wget http://nodejs.org/dist/v0.10.15/node-v0.10.15.tar.gz
run cd /tmp && tar -xzvf /tmp/node-v0.10.15.tar.gz
run cd /tmp/node-v0.10.15 && ./configure --prefix=/usr && make && make install

So what does this do?


from ubuntu:12.10

This says start with the ubuntu:12.10 base image. This comes from the base docker boxes. For those familiar with vagrant, this is like vagrantbox.es.


run apt-get update

This runs “apt-get update” as root. In fact everything in the Dockerfile is run as root.


run apt-get install -y python2.7 python build-essential wget

This installs python 2.7 (and its symlinks), build-essential (make/gcc), and wget (used to pull down node).


run cd /tmp && wget http://nodejs.org/dist/v0.10.15/node-v0.10.15.tar.gz

Download node into /tmp


run cd /tmp && tar -xzvf /tmp/node-v0.10.15.tar.gz

Extract node into /tmp


run cd /tmp/node-v0.10.15 && ./configure --prefix=/usr && make && make install

Build and install node.

Building The Container

Now we need to build this machine. You can clone this repository from https://github.com/gaffo/docker_nodebase, or you can just save the Dockerfile into a new directory.

To build we run:

docker build -t u1210_node .

In the directory containing the Dockerfile. This will build a new container and, if it succeeds, tag it as u1210_node (so that we can use it later with a “from” directive).

The . is actually the path to the directory containing the Dockerfile, so you could run this from anywhere. Containers are referenced by ID/tag, not by path, so context doesn’t matter at the moment. This will change when we try to share files with the container.
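
For example, this should work from anywhere (the path is hypothetical, wherever you cloned the repo to):

docker build -t u1210_node ~/src/docker_nodebase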

Using The Image

Now that we’ve built the image successfully, we can get a shell into it with:

docker run -i -t u1210_node /bin/bash

This will launch an interactive terminal (-i) allocating a pseudo tty (-t) with bash as the program (/bin/bash) into the container with the tag u1210_node.

Try it. You can play around all you want.

Next time I’m going to try and work up a Dockerfile iteratively with all the cruft I need to do app development with node.

Bulk amending commit messages in git

At my job I’m using git-p4 to work locally with some rails code in git and push to perforce. It’s working okay, but one issue for me is that we require every commit to perforce to have a code review by someone, and we put the reviewer’s name at the bottom of each commit. For example:

Live changes to histograms
-commonized the histograms views & logic

CR: JamesM

When I’m working in the git repo, I don’t know who is going to code review it, so I end up having to add CR: JamesM to several commits. It can be done with rebase -i, but that takes several steps per commit. I could use git-notes, but that doesn’t follow the format that we like (it puts “Notes:” in). Because this is a local-only repo, changing the commit history is not a big deal. After some searching I found the way:

git filter-branch --msg-filter 'cat && echo "CR: REVIEWER"' p4/master~1..HEAD

This little beauty will append CR: REVIEWER to all of the commits from p4/master to the current head (i.e., all of the local commits).
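
To sanity-check the rewrite afterwards, something like this (plain git, nothing specific to filter-branch) will show the rewritten local commit messages:

git log p4/master..HEAD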

CloudFront Invalidation from Ruby

Since none of the examples I could find on the internet of how to invalidate a CloudFront asset from ruby were correct, I decided to post my solution:

require 'rubygems' # may not be needed
require 'openssl'
require 'digest/sha1'
require 'net/https'
require 'base64'

class CloudfrontInvalidator
	
	def initialize(aws_account, aws_secret, distribution)
		@aws_account = aws_account
		@aws_secret = aws_secret
		@distribution = distribution
	end
	
	def invalidate(path)
		# AWS signs REST requests with an HMAC-SHA1 of the date header, base64 encoded
		date = Time.now.strftime("%a, %d %b %Y %H:%M:%S %Z")
		digest = Base64.encode64(OpenSSL::HMAC.digest(OpenSSL::Digest::Digest.new('sha1'), @aws_secret, date)).strip
		uri = URI.parse("https://cloudfront.amazonaws.com/2010-08-01/distribution/#{@distribution}/invalidation")

		req = Net::HTTP::Post.new(uri.path)
		req.initialize_http_header({
		  'x-amz-date' => date,
		  'Content-Type' => 'text/xml',
		  'Authorization' => "AWS %s:%s" % [@aws_account, digest]
		})
		req.body = %|<InvalidationBatch><Path>#{path}</Path><CallerReference>SOMETHING_SPECIAL_#{Time.now.utc.to_i}</CallerReference></InvalidationBatch>|
		http = Net::HTTP.new(uri.host, uri.port)
		http.use_ssl = true
		http.verify_mode = OpenSSL::SSL::VERIFY_NONE # note: this skips SSL cert verification
		res = http.request(req)
		
		# it was successful if response code was a 201
		return res.code == '201'
	end
end

Then just run it with:

puts CloudfrontInvalidator.new('ACCOUNT', 'SECRET', 'DISTRIBUTION').invalidate('PATH_TO_FILE')

Simplicity Itself

I just wanted to share what it took to draw a simple yellow triangle on a black background in OpenGL|ES. I hope it will give my Ruby friends and Haskell friends an aneurysm.

To show this:
[image: Hello Triangle]

I had to do this:

//The headers
#include <SDL/SDL.h>
#include <SDL/SDL_opengles.h>

//Screen attributes
const int SCREEN_WIDTH = 480;
const int SCREEN_HEIGHT = 320;
const int SCREEN_BPP = 32;

SDL_Event event;
GLuint programObject;


GLuint LoadShader ( GLenum type, const char *shaderSrc )
{
   GLuint shader;
   GLint compiled;

   // Create the shader object
   shader = glCreateShader ( type );

   if ( shader == 0 )
   	return 0;

   // Load the shader source
   glShaderSource ( shader, 1, &shaderSrc, NULL );

   // Compile the shader
   glCompileShader ( shader );

   // Check the compile status
   glGetShaderiv ( shader, GL_COMPILE_STATUS, &compiled );

   return shader;

}

bool init_GL() {

	const char* vShaderStr = "attribute vec4 vPosition;    \n"
		"void main()                  \n"
		"{                            \n"
		"   gl_Position = vPosition;  \n"
		"}                            \n";

	const char* fShaderStr = "precision mediump float;\n"
		"void main()                                  \n"
		"{                                            \n"
		"  gl_FragColor = vec4 ( 1.0, 1.0, 0.0, 1.0 );\n"
		"}                                            \n";

	GLuint vertexShader;
	GLuint fragmentShader;
	GLint linked;

	// Load the vertex/fragment shaders
	vertexShader = LoadShader(GL_VERTEX_SHADER, vShaderStr);
	fragmentShader = LoadShader(GL_FRAGMENT_SHADER, fShaderStr);

	// Create the program object
	programObject = glCreateProgram();

	if (programObject == 0)
		return 0;

	glAttachShader(programObject, vertexShader);
	glAttachShader(programObject, fragmentShader);

	// Bind vPosition to attribute 0
	glBindAttribLocation(programObject, 0, "vPosition");

	// Link the program
	glLinkProgram(programObject);

	// Check the link status
	glGetProgramiv(programObject, GL_LINK_STATUS, &linked);


	glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
	return true;
}

bool init() {
	//Initialize SDL
	if (SDL_Init(SDL_INIT_EVERYTHING) < 0) {
		return false;
	}

	//Create Window
	if (SDL_SetVideoMode(SCREEN_WIDTH, SCREEN_HEIGHT, SCREEN_BPP, SDL_OPENGL)
			== NULL) {
		return false;
	}

	//Initialize OpenGL
	if (init_GL() == false) {
		return false;
	}

	//Set caption
	SDL_WM_SetCaption("OpenGL Test", NULL);

	return true;
}

void clean_up() {
	//Quit SDL
	SDL_Quit();
}

void Draw() {
	GLfloat vVertices[] = { 0.0f, 0.5f, 0.0f, -0.5f, -0.5f, 0.0f, 0.5f, -0.5f,
			0.0f };

	// Set the viewport
	glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);

	// Clear the color buffer
	glClear(GL_COLOR_BUFFER_BIT);

	// Use the program object
	glUseProgram(programObject);

	// Load the vertex data
	glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, vVertices);
	glEnableVertexAttribArray(0);

	glDrawArrays(GL_TRIANGLES, 0, 3);
}

int main(int argc, char *argv[]) {
	//Quit flag
	bool quit = false;

	//Initialize
	if (init() == false) {
		return 1;
	}

	//Wait for user exit
	while (quit == false) {

		//While there are events to handle
		while (SDL_PollEvent(&event)) {
			//Handle key presses

			if (event.type == SDL_QUIT) {
				quit = true;
			}
		}
		Draw();
		SDL_GL_SwapBuffers();
	}

	//Clean up
	clean_up();

	return 0;
}
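
In case you want to try it, a build line along these lines should work (a sketch: it assumes the SDL 1.2 and OpenGL ES 2.0 dev packages are installed, and the GLES library name may differ on your platform):

g++ main.cpp -o hello_triangle $(sdl-config --cflags --libs) -lGLESv2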

You will all be happy to know that one of the first things I did (after re-learning make) was dive into Google Test and get a testing framework working. My last several projects have been in Ruby and Java, and in my spare time I have been building Palm Pre apps in Javascript and messing around with Erlang.

So I thought it was funny that many of my friends have been moving to more and more abstract languages while I, oddly, have decided to go lower. I have been playing around with OpenGL|ES and C++ lately in my spare time, hopefully in preparation for the release of the Native SDK (PDK) for the Palm Pre.

Updating embedded JRuby gems with ant

Recently I have been using cuke4duke on a java project (which I’ll discuss in a later article). We run jruby from jruby-lib/jruby-complete.jar, and our gems are embedded in the project with GEM_HOME/GEM_PATH set to jruby-lib/gems. All of this is under source control; we don’t have jruby installed on the system at all, it exists only in this project. I’ve mainly been the one on the team (of four) maintaining the jruby stuff, as none of the other devs have jruby experience.

One of the problems I ran into recently was how to update the gems that are checked into source control. One of the other devs wanted to use multi-line strings in cucumber and found that they didn’t work with cuke4duke until cucumber 0.4.4; we had 0.4.2. So what I needed was an easy way for the other devs to update the gems without relying on jruby being “installed” on the machine. Since these are java guys, ant seemed the best solution.

Here is the ant task I used:

<path id="jruby.classpath">
	<fileset dir="jruby-lib">
		<include name="**/*.jar" />
		<exclude name="gems/*" />
	</fileset>
</path>
<target name="update.gems" description="update the installed gems">
	<!-- this updates the gems on the system -->
	<java classname="org.jruby.Main" fork="true" failonerror="true">
		<classpath refid="jruby.classpath" />
		<env key="GEM_PATH" value="jruby-lib/gems" />
		<env key="GEM_HOME" value="jruby-lib/gems" />
		<arg value="-S" />
		<arg value="gem" />
		<arg value="update" />
	</java>
	<!-- this removes any obsoleted / previous version of all gems -->
	<java classname="org.jruby.Main" fork="true" failonerror="true">
		<classpath refid="jruby.classpath" />
		<env key="GEM_PATH" value="jruby-lib/gems" />
		<env key="GEM_HOME" value="jruby-lib/gems" />
		<arg value="-S" />
		<arg value="gem" />
		<arg value="cleanup" />
	</java>
</target>
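
With that target in place, any dev on the team can update the embedded gems with a plain:

ant update.gems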

Testing WebOS Applications Made Easy with jasmine_webos gem

TDD for WebOS applications is still in its early stages, but the guys over at Pivotal Labs have made some great strides in the low-level tooling. The Jasmine javascript testing framework provides a DOM-less testing implementation which works well in the MVC environment of a WebOS application.

Pivotal is also hard at work on Pockets (not yet released), which provides on-emulator testing and integration of Jasmine into your WebOS application. However, as of now (2009/09/21), it has not been released owing to major changes in the debugging environment in the WebOS SDK.

To help with the transition, I have released my initial version of jasmine_webos, which facilitates testing your webos application with Jasmine. Jasmine_webos requires ruby and rubygems as well as the json and thin gems. To install jasmine_webos, simply do:

sudo gem sources -a http://gemcutter.org
sudo gem install jasmine_webos

Jasmine_webos provides a generator to create the directories it expects (spec/javascript/spec and spec/javascript/matchers) as well as an example spec so that you can make sure that it is working. This is accessed by running the following from the root of your WebOS Application:
jasmine_webos -g

You can then run the test server (which dynamically builds up the test suites) with:
jasmine_webos -s

You can then run your tests by pointing any capable browser at http://localhost:8888.

The jasmine_webos server will include any javascript files in your app directory, all matchers in your spec/javascript/matchers folder, and all tests in your spec/javascript/spec folder. Jasmine_webos keeps the jasmine files contained in the gem so that, as new features for jasmine are released, you can get easy access to them by updating the gem. This also keeps you from having to copy jasmine into each of your apps.

In the future I am looking at implementing:
* A config file for additional directories and requirements
* Celerity for command-line testing / integrated builds
* Including the Mojo framework libraries for fuller-stack testing

Please report any bugs to the github bugs page.

Announcing Mainline

Mainline is a rails plugin which exposes your rails app via webrick to allow
testing with browser automators such as Selenium or Watir. Mainline allows
your rails actions to run in the same transaction as your unit tests so you
can use fixtures, factories, or whatever.

Basically, your selenium tests can now run in the same transaction as your other tests, so you don’t have to worry about rolling back your fixtures or factories.

Grab it from Github
Bug Reports at Lighthouse
Docs at RDocul.us

Making your Plugin or Gem configurable

Recently I added a configuration mechanism to Webrat. It was surprisingly easy, and mainly copied from rails core. I would suggest adding something like this to any plugin that has more than a few features, or ones that users have asked to be able to turn off.

First off, you’re going to have to create the actual configuration object. There are a few good ways to do this: one is to use a config module, another is to create a configuration object that is accessible via a singleton method.

I’m going to go with the second one, a configuration object.

Toss this one in lib/configuration.rb (A simplification of Code | RDoc)

module Plugin
  
  # Configures Plugin.
  def self.configure(configuration = Plugin::Configuration.new)
    yield configuration if block_given?
    @@configuration = configuration
  end
      
  def self.configuration # :nodoc:
    @@configuration ||= Plugin::Configuration.new
  end

  # Plugin can be configured using the Plugin.configure method. For example:
  # 
  #   Plugin.configure do |config|
  #     config.show_whiny_errors = false
  #   end
  class Configuration
    
    # Should whiny error messages be shown?
    attr_writer :show_whiny_errors

    def initialize # :nodoc:
      # set your defaults in here
      self.show_whiny_errors = true
      # put as much as you want in here
    end
    
    # some syntactic sugar for you, the coder
    def show_whiny_errors? #:nodoc:
      @show_whiny_errors ? true : false
    end
   
  end
  
end

Okay, now we need to test the config object itself. This is why it’s nice to make an object just to house the config: it’s easy to test. What do we test? Defaults and accessors, for two!

(The following lifted from Code) (sorry this is in rspec, it’s not hard to do in test::unit)

require File.expand_path(File.dirname(__FILE__) + '/../../spec_helper')
 
describe Plugin::Configuration do
  # define matchers for testing
  predicate_matchers[:show_whiny_errors] = :show_whiny_errors?

  it 'should show whiny errors by default' do
    config = Plugin::Configuration.new
    config.should show_whiny_errors
  end
  
  it 'should be configurable with a block' do
    Plugin.configure do |config|
      config.show_whiny_errors = false
    end
    
    config = Plugin.configuration
    config.should_not show_whiny_errors
  end
  
end

Now we need to do some stuff to make it nicer for our other users to test. Put the following in your test_helper or spec_helper. It will allow you to reset your config after each test, which is nice to have to avoid messy test interactions.

(The following lifted from Code)

module Plugin
  @@previous_config = nil
 
  def self.cache_config_for_test
    @@previous_config = Plugin.configuration.clone
  end
 
  def self.reset_for_test
    @@configuration = @@previous_config if @@previous_config
  end
end

# configure your test runner / spec runner to always clear the config
Spec::Runner.configure do |config|
  
  config.before :each do
    Plugin.cache_config_for_test
  end
  
  config.after :each do
    Plugin.reset_for_test
  end
end

This last bit is somewhat harder to do in test::unit, as it is harder to hook into the setup (you only get one in the call chain). I’m willing to take some help on cleaning this one up for test::unit. Currently I have just been putting it in the setup / teardown for each test file, like the sketch below.
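
Per test file, that looks something like the following (a sketch; SomeObjectTest is a made-up example class):

class SomeObjectTest < Test::Unit::TestCase
  def setup
    Plugin.cache_config_for_test
  end

  def teardown
    Plugin.reset_for_test
  end
end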

Finally, you need to make use of this in your tests. Fortunately that is quite easy. Just:

describe SomeObject do
  it "shouldn't do it when whiny errors are off" do
    Plugin.configure do |config|
      config.show_whiny_errors = false
    end

    object.should_not_receive(:log)
    object.do_something_that_usually_complains
  end
end

Finally, anywhere in your plugin where you think something is whiny, just check the config before logging, like this:

log("you should really fix this") if Plugin.configuration.show_whiny_errors?