Yak hair surplus
http://yakhairsurplus.com
(Feed generated Fri, 23 Oct 2020 20:13:53 GMT)

Complement and partial in javascript

I started learning Clojure last week using the online book "Clojure for the Brave and True". (If you are doing the same and having trouble with getting the REPL working in Emacs then see this post on the Google group.)

Clojure is the first functional programming language that I have learnt. While I have found it fun so far I haven't yet had an "aha" moment about functional programming. When the author is excited about something, I quite often think, "Well, you could do that in Ruby." I'm not a Ruby fanboy, it's just that Ruby is the language that has come to mind most while I learn Clojure.

I've just read about the Clojure functions complement and partial. These felt like the first things that were really new to me. I decided to implement these in a language that I had more experience of in order to "pinch" myself. I think I have acclimatised so quickly to Clojure that I'm in danger of taking any awesomeness for granted.

I've written my implementations of complement and partial in javascript. It just seemed like a good idea at the time. I used the Rhino runtime as a kind of syntax checker before running my samples in a browser. The version of Rhino on my distro does not seem to recognise the javascript spread syntax, so I have not used that.


The complement function is pretty easy to implement. All that is required is to create a function that calls the supplied function and then applies a boolean not to the result. Here's my version:

function complement(fn) {
    return function() {
        return !fn.apply(null, arguments)
    }
}

We can use the complement function very simply, although I quickly realised that we can't always use it as simply as in Clojure. In Clojure, operators like > and + are conventional functions and can be passed as an argument just like any other function. Not so in javascript. I have defined a very simple function to take the place of the > operator.

function greater(a,b) {return a>b}

And can use it with complement to create a new function like this:

// less-than-or-equal
var lte = complement(greater);

Here is the created function in use:

>> lte(4,3)
← false
>> lte(4,4)
← true
>> lte(4,5)
← true


The partial function is only a little more complex than complement. We need to create a function that will call the supplied function with both the arguments supplied when partial creates it and the arguments supplied when the created function is itself called:

function partial(fn) {
    // Collect all the arguments other than "fn"
    var partial_args = [].slice.call(arguments, 1)

    return function() {
        var all_args = partial_args.concat([].slice.call(arguments))
        return fn.apply(null, all_args)
    }
}

And this is how partial is used:

// less-than-3
var lt3 = partial(greater, 3);

>> lt3(2)
← true
>> lt3(3)
← false
>> lt3(4)
← false

And, of course, we can combine partial and complement easily because they both accept and return functions:

// greater-than-or-equal-to-3
var gte3 = complement(partial(greater,3));

>> gte3(2)
← false
>> gte3(3)
← true
>> gte3(4)
← true
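On a runtime that does support the rest/spread syntax, both helpers shrink considerably. This is just an alternative sketch of the same two functions, assuming an ES2015+ environment (so it won't run under the old Rhino mentioned above):

```javascript
// Rest parameters gather arguments into a real array, so no
// [].slice.call(arguments) dance is needed.
function complement(fn) {
    return function(...args) {
        return !fn(...args);
    };
}

function partial(fn, ...partialArgs) {
    return function(...args) {
        return fn(...partialArgs, ...args);
    };
}

function greater(a, b) { return a > b; }

// Same combination as above: greater-than-or-equal-to-3.
var gte3 = complement(partial(greater, 3));
gte3(2); // → false
gte3(3); // → true
```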

http://yakhairsurplus.com/complement-and-partial-in-javascript/e46de744-4b4c-4a6e-a9f0-a923a9103ea9 (Mon, 26 Jun 2017 15:39:28 GMT)

Light Table: Edit file in split window

This is a handy but not necessarily obvious command in Light Table:

File: Open another view of the current file

It is especially useful when used with:

Tab: Move tab to new tabset

I failed to find "Open another view" through many minutes of using Google search for two reasons. First, I was looking for...

  • light table split window
  • light table split editor
  • light table edit same file in two editors
  • light table edit same file side by side

...and various other things that would probably convince a Light Table aficionado that I was not yet acculturated in that community.

The closest I got was...

  • light table edit same file in two tabs

...which showed me a github issue requesting that feature.

The second reason is that even if you knew in advance what the command was called, you would be unlikely to find it through Google. If you search for...

  • "light table" "open another view of the current file"

...you will get no documents returned (2015-07-17) and leaving off the quotes gets you a github issue that mentions the command (in the context of wanting it to work better) and nothing that looks like documentation of that command.

I think the reasons that I did not find the command easily through Light Table's very impressive command menu are (1) that I am impatient (see what typing "file" gets you; it's not a long list of commands to read) and (2) that the command name does not include the word "same".

Is this a rant? No. It's just an attempt to leave lots of relevant keywords on a page that starts with the required command.

http://yakhairsurplus.com/light-table-split-window-view-of-a-file/5292fbb9-a1b4-43cd-8d4c-77212c6911ca (Fri, 17 Jul 2015 11:02:33 GMT)

Google spreadsheet, jsapi Datatable, header row

Embedded in google's visualization jsapi is a simple way to grab data from a google spreadsheet. The data is captured in the form of a DataTable. Google's dev docs mention this in the context of using their charting/visualisation library but it is such a simple way to maintain and read tabular data that I use it for other things too.

The first time you do this you may (or may not) find that the data table has used the first row of your spreadsheet as column labels. If the first row is used like this then it will be absent from the rows of data available through the table interface.
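To make the difference concrete, here is a small illustration. The describeTable helper below uses three methods that do exist on the real google.visualization.DataTable (getNumberOfColumns, getColumnLabel, getNumberOfRows), but the table object itself is a hypothetical stub standing in for a query response, since the real one only exists in a browser with the jsapi loaded:

```javascript
// Summarise a DataTable-like object: its column labels and row count.
// Works on anything implementing the three DataTable methods used here.
function describeTable(table) {
    var labels = [];
    for (var i = 0; i < table.getNumberOfColumns(); i++) {
        labels.push(table.getColumnLabel(i));
    }
    return { labels: labels, rows: table.getNumberOfRows() };
}

// Hypothetical stub: a response where the spreadsheet's first row WAS
// recognised as a header row, so it supplies the column labels and is
// excluded from the data rows.
var headerRowAsLabels = {
    getNumberOfColumns: function() { return 2; },
    getColumnLabel: function(i) { return ["Date", "Event"][i]; },
    getNumberOfRows: function() { return 10; }
};

describeTable(headerRowAsLabels);
// → { labels: ["Date", "Event"], rows: 10 }
```

When the header row is not recognised, the same spreadsheet instead yields empty labels and one extra data row, with the header text appearing as row 0.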

The glitch...

I changed the type of data used in one of my spreadsheet columns from dates to strings. Once I did this, every column was then entirely composed of strings. At this point, my header-row ceased to be used as column labels and became the first row of data instead. Once I untangled this from the other changes I had made, I realised that having at least one non-string column in the spreadsheet was enough to convince the API to use the header-row (which is composed entirely of strings) as column labels again.


If you want more control in accessing a google spreadsheet, you might consider using the google sheets REST API instead.

http://yakhairsurplus.com/google-spreadsheet-jsapi-datatable-header-row/1e08fc95-311a-4327-a411-f129d064ff82 (Mon, 15 Jun 2015 14:42:08 GMT)

Silent failure to catch exceptions in PHP

Just a quick post to mention an annoying PHP gotcha and the solution. (I'm using PHP 5.5.)

If you write code that is not in the root namespace, and if you write a try-catch block, the catch block will never execute unless you write a use statement to bring the PHP Exception class in to scope. The reason is that an unqualified class name resolves relative to the current namespace, so catch ( Exception $e ) actually means catch ( MyApp\Exception $e ). (Fully qualifying the name, as in catch ( \Exception $e ), works too.)

In other words this code will never catch any exceptions and it will never generate an appropriate runtime error like "class Exception not found":

namespace MyApp;

try {
    // some stuff
} catch ( Exception $e ) {
    // why is no one visiting me?  :(
}

But this code will:

namespace MyApp;

use Exception;

try {
    // some stuff
} catch ( Exception $e ) {
    // It's like a revolving door in here!  :)
}
http://yakhairsurplus.com/silent-filure-to-catch-exceptions-in-php/73c73d53-c56b-4fa3-a066-0f11b402691c (Fri, 20 Feb 2015 15:27:03 GMT)

How I learn stuff

Ashley McNamara wrote a blog post in which she described an almost pointless coding course that she attended and paid a lot of money for. I had a similar experience a long time ago. I'm writing this post because Ashley asked me for my 'opinion on what the “right” way to go about learning software development is'.

I think my focus is on the introvert side of things. I'm sure someone else would talk about going to hack days and hanging out on IRC channels. Here's what I did...

My Background

In secondary school I didn't do any more than I was asked to do and the (B) grades rolled in so I figured I was doing okay. That attitude didn't stand me in such good stead on my bachelors degree, but I still managed to earn an honours degree in Chemistry and felt like I was ahead. I'd been brought up with a middle class belief in the power of qualifications and, though I only had a vague idea of how to turn them in to a living, having those qualifications was a reassuring safety blanket.

After a year on the dole, I lucked out and got a state-funded place on a taught masters degree in computer science. I was over the moon because this was what I'd wanted to do all along. My father, with great foresight, had sent me on a BASIC programming evening class when I was ten years old and I fell in love with the power of computer programming. (How I failed to get on to a CS bachelors degree is an amusing and embarrassing tale for another time.)

The masters course turned out to be a money spinner for a group of CS lecturers. They were supposedly teaching a course that would get people off the dole, but that would have required practical, vocational skills. What they actually taught were their individual specialisms. This ignored both the needs of the job market and whether a student could hope to get a useful amount of information in any individual module. As I was still a great believer in Qualifications, I didn't rock the boat. My older and much more practical course-mates were horrified and wished all that government money had been spent on a computer and a score of carefully chosen textbooks each.

Buy book, code examples, go build something

I still didn't have a job and, as I was waiting for one to magically appear, I decided to teach myself C++. There was a chance it might be more widely used than the Eiffel that I'd learnt on my masters course.

This fit a template for how I had and would learn different technologies: Buy a thick, approachable book that built chapter-on-chapter through code examples and just code every example, answer every question. My only refinements to this system (when I got more used to that internet thingamabob) were to make sure the book was well-reviewed and (for perverse languages like C++) find a supplementary book that concentrated on the gotchas.

I never completely finished these books. I just worked through them until I got bored. When I was younger I would give myself a hard time for doing this but I've since realised that I was bored because I'd already learnt what I needed in order to build the things I cared about. Working through more examples was keeping me from the enjoyment of building those things.

Here are the example/tutorial based books I used over the years:

When it doesn't work

This may sound like a convenient, repeatable system but there are times when this has not worked for me. For all the books mentioned above I succeeded in my aim. By the time I abandoned the book I had a much better grasp of the technology in question, and I put it in to practice immediately.

The major gotcha here is Discipline.

Working through a book this way takes discipline and I lack that discipline under certain circumstances:

  • I find tactically learning new technologies, e.g. to generally improve my programming skills or my resume, utterly futile. Without the mind-focussing effect of a target application or job I get nowhere.

  • I get very impatient with this style of learning if I'm already fairly familiar with the technology in question. I program in PHP and Ruby at the moment but I've learnt both of these "on the job" by modifying existing systems. I know there are holes in my knowledge but I plug them in an incremental way as I work. Trying to work through a thorough book on one of these languages is a recipe for huge frustration.

Is that it?

That's not the be-all and end-all of course. I'm sure I've learnt more while coding applications than while coding examples. I think the advantage of the book-as-programming-course approach is that my foundation in any given technology is as thorough and complete as the book is. But I have to use that knowledge immediately and repeatedly in order to lock it in. My C++ programming career lasted 9 years. I'm sure I could pick up any of the apps I wrote then and feel up to speed again in a day. Well, maybe a week. On the other hand, my C# / WPF career lasted 3 months and I'm sure that on coming face to face with the system I worked on then I would break out in a cold sweat.

http://yakhairsurplus.com/how-i-learn-stuff/c6c92fa9-0384-4a61-b1ac-f783bd11dc86 (Wed, 14 May 2014 22:09:23 GMT)

How to secure PhpMyAdmin on your local network

If you're developing web applications on a *AMP stack then you may well have PhpMyAdmin installed, even if you're not using it to manage your databases. Unlike your deployed websites, your local ones are not advertising their existence, but they may still contain sensitive data. When did you last take an SQL dump from a production website to debug on your development machine?

How secure is your local MySQL server?

Let's test how accessible that data is. First, get the IP address of your computer on the local network. I'm on linux so I did that by running the ifconfig program. Armed with that IP address, you can then try using it from another computer, but you don't have to.

I tried connecting to the local MySQL server as the root user through the command line:

$ mysql -uroot -p
Enter password: 
Welcome to the MySQL monitor...

Yep, that worked. Then I tried it through the "loopback" interface:

$ mysql -h127.0.0.1 -uroot -p
Enter password: 
Welcome to the MySQL monitor...

Yep, that worked too. Finally I tried it through my computer's IP address on the local network (192.168.0.1 in this example):

$ mysql -h192.168.0.1 -uroot -p
Enter password: 
ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.0.1' (111)

Nope. It looks like MySql is secured against remote login on my computer. Good.

How secure is your local PhpMyAdmin?

Unlike database servers, web servers are generally intended to be visible to the world. That means that remote login to your MySql server may still be possible. You are probably used to pointing your web browser at:

http://localhost/phpmyadmin


Or, if you're more of a numbers person:

http://127.0.0.1/phpmyadmin

But try again using your IP address on the local network and you'll probably get exactly the same result:

http://192.168.0.1/phpmyadmin

I'm on a large network shared with other business units so I wasn't keen on this behaviour. I might rarely use my laptop in a public network and I definitely wouldn't want my database management login screen to be accessible then.

I very quickly found a solution on the Ubuntu forums and modified the Apache configuration for PhpMyAdmin. On my Ubuntu based computer, that's:


Here I added the Order, Deny and Allow lines:

    <Directory /usr/share/phpmyadmin>
        Order Deny,Allow
        Deny from All
        Allow from 127.0.0.1
        Options Indexes FollowSymLinks
        DirectoryIndex index.php
    </Directory>

After reloading my Apache configuration, navigating to 192.168.0.1/phpmyadmin gives me a very satisfying 403 error.


Edit: I recently upgraded to Apache 2.4, which meant that I needed to change the config file. It now looks like this:

    <Directory /usr/share/phpmyadmin>
        Require ip 127.0.0.1
        Options Indexes FollowSymLinks
        DirectoryIndex index.php
    </Directory>
http://yakhairsurplus.com/how-to-secure-phpmyadmin-on-your-local-network/331c6722-a089-4cc0-85bc-5537d6cfdc64 (Tue, 15 Apr 2014 14:46:32 GMT)

New year, new blog

Okay, it's not a very new year, however I am feeling proud of my new blog. As both the people who saw the old blog will know, that one was running on Drupal. This wee beasty is running on Ghost.

For anyone who hasn't heard already, Ghost is a blogging platform written by a team that splintered from Wordpress. Ghost, however, is not a PHP CMS, it is a pure blogging platform and runs on node.js. Pretty cool, huh?

I decided to use Ghost out of pure laziness. I found out about it because Webfaction, my web host, added a one-click installer and when I saw the content-entry interface I decided I was on to a winner. Drupal is a very general-purpose tool which means that it needs a fair amount of configuring, even for a simple blog site, and I never finished that process. Ghost on the other hand was both pretty and very usable straight out of the box.

As easy as that?

Well, my lazy side was disappointed, but I still enjoyed the experience (and I got to play with node.js) so I reckon I'm ahead.

Ghost lacks a few "necessary" bloggy tools, e.g. site search, email subscription and maybe (I haven't looked) a tag cloud. As you can see at the bottom of this post, or on the home page sidebar, I've added email subscriptions using Google's Feedburner.

On Drupal, I would have written any new functionality in to a module. I would have exposed the UI element as a block, and I would have tidied it up in the theme. Ghost feels so small that I added the email subscription straight into the theme layer without any nagging feeling of impropriety. Obviously that will entail more work if I ever change my theme.

And the benefits?

I won't harp on about ghost. You can easily find out about why you might like it. I will mention that if you want to install it yourself then you can download releases from Github.

The things that appealed to me about Ghost were that it looked good, and came with markdown, a lovely text editor and a very simple post-management interface. Blogging is one of those areas where I feel that the product is well understood. I don't want lots of choice (i.e. configuration.) I want a highly usable tool with sensible defaults. I got it.

What about the yak?

Ah yes! The Yak was drawn for me by Sharon Cumming. Sharon's an artist who has only just started to sell her work and doesn't have an online presence yet. If you'd like to commission her then let me know and I'll put you in touch.

http://yakhairsurplus.com/new-year-new-blog/006992c9-a4a3-4cf9-a396-0ab868689155 (Thu, 20 Feb 2014 13:48:38 GMT)

Phpmyadmin installation bug on Linux Mint 13 (Maya)

Just a quickie: If you install phpmyadmin with apache on Linux Mint Maya then you may get a 404 when you navigate to localhost/phpmyadmin. I did.

The solution is to add a symlink from apache's configuration directory to the phpmyadmin apache conf file, like this:

cd /etc/apache2/conf.d/
sudo ln -s /etc/phpmyadmin/apache.conf
http://yakhairsurplus.com/phpmyadmin-installation-bug-linux-mint-13-maya/2319d268-81f6-4c2e-9efc-294f7f05c487 (Wed, 23 Oct 2013 11:00:00 GMT)

Abstract classes in Ruby - losing the security blanket

I recently built a Sinatra app and used an object-relational mapping library (ORM) to persist my models. Since deploying the app, I have become concerned that the use of public methods added by the ORM to my model classes has bled in to my controllers. I've therefore created a dependency on this specific ORM throughout my code.

My initial reflex on my next project was to create an abstract class to represent a storage system and a descendant concrete class to use whatever ORM I chose today. My model classes would serve as input and output for concrete storage class and they would have no knowledge at all about ORMs nor about my own storage system.

Having decided that this was my preferred approach, I searched online for `ruby abstract class` and rapidly came to realise that this was not actually a feature of Ruby. I'm not the only person who has looked for this information. Stack Overflow has an impressive amount of activity on this topic.

What is an abstract class for?

Abstract classes are a tool to aid developers to "program to an interface", as the Gang of Four would put it. By eliminating any mention of concrete classes in client code, programmers are able to "swap out" the concrete class being used without rewriting that client code. To many programmers this is self-evidently A Good Thing. In many languages (my experience is in C-family languages) the compiler will not permit an abstract class to be instantiated. This provides a check at compile time that the programmer has implemented all the inherited abstract methods in their concrete class. The abstract class therefore serves as a contract to provide a certain set of methods.

Why doesn't Ruby have abstract classes?

The implication in my description of the purpose of an abstract class is that some aspect of the client code refers to the type of the objects that the client code is handling. Not so in Ruby where there are no variable type declarations. If a programmer decides to swap out their "concrete" class for any other class then this will not affect any client code so long as the replacement class implements public methods with compatible signatures to those of the original class. This is the reality of a duck-typed language.

Many of the stack exchange answers on this topic point out that Ruby is extraordinarily dynamic. Methods can be added to individual objects at run time, meaning that you can't rule out a given object's ability to respond to a given message until the point at which that message is sent to the object. An argument can be made that if a method's implementation can be delayed until just before the method is called then a mechanism for determining the completeness of an object's interface before this point is redundant. Of course, many of us rarely use such a high level of dynamism.

Would implementing abstract classes in Ruby be useful?

You might be able to implement something to warn you at object instantiation if the new object fails to implement an interface defined by a superclass. (If you actually call a non-existent method then Ruby will raise an exception.) The pertinent question is whether an abstract class would help you to program to an interface. A contract to honour a given interface sounds great but in the absence of variable type declarations I don't see what role this contract has. I'm still trying to get comfortable with this fact. Almost all my programming career has been spent working with explicitly typed languages and I'm finding it hard to let go of the security blanket of abstract types (which is why I wrote this post.)
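This blog's earlier code examples were in javascript, which has the same gap, so here is a rough javascript sketch of that idea: a hand-rolled check, run at construction time, that an object responds to every method in a declared interface. The names (assertImplements, STORE_INTERFACE) are made up for illustration:

```javascript
// Throw at "instantiation" time if obj lacks any of the named methods.
function assertImplements(obj, methodNames) {
    var missing = methodNames.filter(function(name) {
        return typeof obj[name] !== "function";
    });
    if (missing.length > 0) {
        throw new Error("Does not implement: " + missing.join(", "));
    }
    return obj;
}

// The "abstract" side is just a list of required method names...
var STORE_INTERFACE = ["save", "find"];

// ...and any object with those methods passes; no shared ancestor is
// needed, which is the duck-typing point made above.
var memoryStore = assertImplements({
    save: function(model) { return model; },
    find: function(id) { return null; }
}, STORE_INTERFACE);
```

As with the Ruby discussion above, this buys an earlier failure but not a compile-time guarantee, and tests arguably cover the same ground.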

So what should I use instead of abstract classes?

How about tests? It's often desirable for the public interface of a replacement class to behave the same way as that of the original class, e.g. for persistence classes talking to different data stores. Tests in this scenario should be directly transferable to the new class except, for example, if you are using mocks to verify messages sent by the objects under test. Sometimes we do want different public behaviour, e.g. when interfacing to strategy classes that encapsulate different algorithms but (in order to be a direct swap-out) the replacement class will still need to respond to the same method signatures, meaning that the existing tests will at least serve as a rough scaffold.

Edit: Having experimented with the approach I proposed to take with my new app, I decided I would be recoding too much of the ORM functionality. Instead I found I was able to find all the methods included in my models from the ORM module and make them private. Now my model classes can use all the functionality of the included ORM but my controllers are safe from evolving unwanted dependencies.

Edit to the edit: I hadn't fully understood one aspect of Ruby method access control: Class methods are unable to call protected (or private) methods on instances of the class. This defeats my attempt to hide the ORM functionality because it is implemented in a mixture of class and instance methods. Now I have to choose whether to implement my original idea or just cease to worry about this problem altogether. (I have to admit that no one else in the Ruby development world seems to be worrying about it.)

http://yakhairsurplus.com/abstract-classes-ruby-losing-security-blanket/ecd56795-3eda-4bcf-859b-2e225f233de5 (Sun, 06 Oct 2013 11:00:00 GMT)

How to build a "private" website in Drupal

Drupal lets you build certain kinds of website very quickly. That's why it's in my toolset. Recently I was asked to do a site that was clearly a candidate for Organic Groups so I duly fired up drush and started building.

The website in question was a members-only website. Non-members would only be able to see a single page. The non-member-accessible page would have login and register forms and some other things. I'd built another site with similar requirements before. The previous site had no need to look pretty (it was functionally an intranet) so I just left it showing "Access denied" and the login form to all anonymous visitors.

This time, that would not be acceptable. The site needed to look professional to potential users and that meant a proper home page for anonymous users. I was surprised at how unintuitive this setup was.

My first attempt at a solution was to look at access control modules. Drupal ninjas please correct me if I've got this wrong, but this generally required giving anon users the core permission to see published content and then removing access to specific content types (or sets of content defined in other ways) through a contributed module. If any module in Drupal grants you a permission then no other can take it away but, under the hood, these contributed modules are defining new permissions (or permission-like things) and denying access if either the core system or their own system would dictate that.

This situation surprised me and mildly complicated the administration of the site. Unless you can make new content types private by default, this situation needs careful attention because new content types will be public, and nodes of those types will be accessible to anonymous users until settings are changed. How often will this happen? Well, probably not often, but it still made me uncomfortable.

I posted in the Drupal forums about this issue, asking for solutions. I was somewhat reassured to realise that there was no "standard" solution to this and that my google-fu was not completely broken. WordFallz drew my attention to the Private module which would have been a pretty good solution. It adds some very simple UI to the "Publishing options" section of nodes and content-types, allowing the default privacy to be set. (Access to private content is an extra permission.) I'd already implemented a solution by the time I saw this, otherwise I would have defined an "Anon home page" content type to be public while everything else was private. I like the Private module a lot and have already used it on another site.

The solution I actually used was simple and complicated:

  1. I defined a custom page in code (using hook_menu() ) to be the logged-out home page.
  2. The logged-in home page was a regular node.
  3. I set the site default front page to be my custom logged-out home page.
  4. I used the Front page module to select the logged-in home page for authenticated users. Note: The Front page project's identifier is front (while the module name is front_page, *sigh*), not to be confused with the Frontpage module!

This is pretty simple because there are, after all, only four steps to do this. It's more complicated than it needs to be however because the logged-out home page is defined in code and you need to jump through a hoop or two if (like me) you need to have some user-editable content on that page.

Edit: Using a module that unconditionally redirects a user after login has a problem: The reset password workflow is broken. There is an unresolved issue about this for the Front page module. It surprised me a bit that (1) it took this long for me to get a report from a user about this problem and (2) that a module that does not allow for this could get installed on 19,000 sites.

http://yakhairsurplus.com/how-build-private-website-drupal/08dafd1f-a6fe-4d1c-bac6-a1591737ee95 (Mon, 23 Sep 2013 11:00:00 GMT)

Growing object-oriented software in JRuby - Part 1

As part of my constant quest to overcome the feeling of, "There has to be a better way to do this," I've been reading Growing Object-Oriented Software, Guided by Tests.

One of the things that the authors endorse, and which now seems so obvious that I can't believe I've never done it, is to start development with an end-to-end test. They define end-to-end as including the whole deployment pipeline and all the services that the application is expected to use. This forces the developers to integrate with a lot of different systems. The upside is that this shakes out a whole bunch of problems very early on. The downside is that the developers may experience a lot of pain before they've shipped a single feature. Still, that's better than experiencing all that pain at the end of a delivery cycle with a looming deadline.

With that in mind, I started trying to implement their non-trivial worked example while working through the book. Their code is in Java. I've worked in a bunch of C family languages and I don't enjoy it very much (they're very verbose) so I thought I'd try implementing the example in Ruby instead.

The example is a Desktop application. I've never developed a desktop application in Ruby and I wondered how to drive the application during automated tests. Ruby is well served by Capybara for web application testing but I wasn't aware of a desktop GUI equivalent. I started to research this, realised that the majority of tools might be based on functionality missing from my desktop environment, and decided pretty quickly to back-pedal.

At this point, I was tempted to resort to using Java after all. Any dislike of Java on my part is not a huge deal and in the past I've written some trivial Java programs. On reflection though, I decided that I was going through the "front-loaded pain" associated with constructing my end-to-end test and gave myself a break. Instead of using Java I had a look at JRuby. JRuby runs on the JVM and should give me access to the same GUI driver, WindowLicker, that the authors were using. Hey, why not?

This turned out to be non-trivial partly because of some poor error messages on JRuby's part, and partly because of my ingrained habit of letting tests and/or compilers tell me what's wrong rather than activating my under-used initiative...

WindowLicker requires another library called Hamcrest. Apparently Hamcrest is useful to Java programmers for implementing "matchers". The examples I've seen would be pretty trivial in Ruby so I haven't had enough stimulus to properly try grokking this library yet. Anyway, I hadn't downloaded Hamcrest when I first tried to use WindowLicker and rather than a useful error message ("Hey buddy, you're missing a dependency!") I got this...

NameError: missing class or uppercase package name (`com.objogate.wl.swing.driver.JFrameDriver')
get_proxy_or_package_under_package at org/jruby/javasupport/JavaUtilities.java:54
                  method_missing at file:/home/justin/.rvm/rubies/jruby-1.7.3/lib/jruby.jar!/jruby/java/java_package_module_template.rb:14
                          (root) at /home/justin/code/goos/app/application_runner.rb:7
                         require at org/jruby/RubyKernel.java:1027
                          (root) at file:/home/justin/.rvm/rubies/jruby-1.7.3/lib/jruby.jar!/jruby/kernel19/kernel.rb:1
                require_relative at file:/home/justin/.rvm/rubies/jruby-1.7.3/lib/jruby.jar!/jruby/kernel19/kernel.rb:19
                            load at org/jruby/RubyKernel.java:1046
                          (root) at /home/justin/code/goos/tests/end-to-end.spec:1
                            each at org/jruby/RubyArray.java:1613
                          (root) at /home/justin/.rvm/gems/jruby-1.7.3@goos/gems/rspec-core-2.14.5/lib/rspec/core/configuration.rb:1
                 load_spec_files at /home/justin/.rvm/gems/jruby-1.7.3@goos/gems/rspec-core-2.14.5/lib/rspec/core/configuration.rb:896
                 load_spec_files at /home/justin/.rvm/gems/jruby-1.7.3@goos/gems/rspec-core-2.14.5/lib/rspec/core/configuration.rb:896
                             run at /home/justin/.rvm/gems/jruby-1.7.3@goos/gems/rspec-core-2.14.5/lib/rspec/core/command_line.rb:22

Now that I've solved that, I'll get on with the rest of my end-to-end test...

http://yakhairsurplus.com/growing-object-oriented-software-jruby-part-1/2aca3ec5-10e6-4c14-91f3-b06fe2543f94Mon, 09 Sep 2013 11:00:00 GMT
<![CDATA[Ubuntu / Xubuntu / Mint 12.04 LTS, and then MATE]]>I love Ubuntu. It was the first Linux distro that I really stuck at. That's partly because my use of Ubuntu coincided with my departure from programming for Windows systems, and partly because I found the first versions of Ubuntu that I used (9.04, 10.04 LTS) to be very intuitive with sensible defaults.

I was a bit disheartened when Ubuntu's default desktop environment switched from Gnome to Unity. I'd really settled in to Gnome and my desktop worked almost exactly as I wanted. (My previous use of Windows had set my expectations to be high in certain areas.) Ah well, I'm not one to shy away from a new interface so I thought I'd give it a go anyway when 10.04 became unsupported.

One big niggle hit me right away and that was support for my hardware. I use two laptops (work and home) and both are plugged in to an external monitor. One of those monitors is a CRT I keep for photo-editing. Unity has no refresh-rate selection in its display dialog which led me to manually edit a config file. (That's ~/.config/monitors.xml in case you need it.) That worked okay but sometimes the setting got lost when waking from standby, which forced me to log out and in again.

There was a time when I would have just put up with this behaviour or searched the net for a more robust solution (I love Ubuntu's forum and other Linux forums) but, dammit, I had such an easy time with 10.04 that I guess I was spoilt, so I went looking for a better OOTB 12.04 experience instead. Bear in mind that I was installing these distros and desktops on the same hardware that I'd been using with 10.04 for 3 years. I tried:

  • Gnome Classic on Ubuntu (couldn't even get the desktop to use two whole monitors)
  • Xubuntu (ditto)
  • Linux Mint with its default desktop Cinnamon (the display showed "tearing")
  • MATE on Ubuntu (worked but a bit ugly and not quite OOTB as I had to add the repo myself. Yeah, I'm lazy, so sue me.)
  • and finally Linux Mint MATE edition (Woohoo!)

You may (quite sanely) question why I was prepared to install all these distros rather than seek a robust solution on Unity. Did it really take less effort? Well, yes, because it was a very low stress approach. I know from installing Ubuntu on other systems that (A) it's pretty quick and (B) it's very simple. I couldn't know how long I might trawl the internet for a different solution. Also, most of the install is unattended so it meant I could do other things in parallel. Like I said, very low stress.

As you can guess from the list, the MATE edition of Linux Mint is now the OS on both my laptops. It plays nicely with both my screen setups, and MATE is so similar to Gnome 2 from which it was forked that my knowledge transfers seamlessly. No relearning. I love it.

http://yakhairsurplus.com/ubunutu-xubuntu-mint-1204-lts-and-then-mate/5d60f6f1-db98-4c9a-b3d2-87fc0b012a18Tue, 03 Sep 2013 11:00:00 GMT
<![CDATA[Post content in current Organic group]]>This week, I started using the Organic groups Drupal module for the first time in Drupal 7. This article is about how to provide a link to post in the currently viewed Organic group. The link could be added in one of several ways, e.g. in a context-sensitive block, but I prefer to add another tab along with View and Edit for the group. This is how I did it.

To start off with I added a menu item for the tab. (I often wish that menu-items and their paths were distinct objects in the Drupal API. I find that the current system can make discussing menu items ... uh, paths ... a little awkward.) Here's the implementation of hook_menu:

function my_module_menu() {
    $items['node/%node/post'] = array(
        'type' => MENU_LOCAL_TASK,
        'title' => 'Post',
        'description' => 'Post a my_content_type for this group.',
        'access callback' => '_my_module_access_post_page',
        'access arguments' => array(1),
        'page callback' => '_my_module_post_page',
        'page arguments' => array(1),
    );
    return $items;
}

Obviously, I don't want this link to appear on every node, so I need to restrict it to nodes of my_group_type. Additionally, I only want group members to see the link. Here's the access callback:

function _my_module_access_post_page( $node ) {
    return ($node->type == 'my_group_type') && og_is_member( 'node', $node->nid );
}

I could have done a one-line page callback in order to redirect to /node/add/my_content_type. I chose not to do this because I wanted to customise the form and I find it easy to use the current path to control this. Instead, I've "embedded" the node form. Here's the page callback:

function _my_module_post_page( $node ) {
    module_load_include( 'inc', 'node', 'node.pages' );
    $form = node_add( 'my_content_type' );
    return $form;
}
Now comes the node form customisation. I needed to cope with several circumstances:

  1. When created through a group, set the group audience field automatically and hide it from the user.
  2. When edited by an admin, allow the group audience to be edited, and
  3. when edited by anyone else, do not allow the group audience to be edited.

This way, most users will never manually set the group audience of a piece of content. That's one less concept that they have to grok in order to use the site. Here's the form customisation:

/**
 * Implements hook_form_FORM_ID_alter().
 *
 * In the my_content_type node form, strictly control group choice.
 */
function my_module_form_my_content_type_node_form_alter( &$form, &$form_state ) {
    if ( arg(0) == 'node' && is_numeric( arg(1) ) && arg(2) == 'post' ) {
        // Posting a my_content_type from within a specific group.
        // Automatically select the relevant group audience.
        $form['#validate'][] = '_my_module_force_group';
        $form['og_group_ref']['#access'] = FALSE;
        $form['preset-group'] = array(
            '#type' => 'value',
            '#value' => arg(1),
        );
    }
    elseif ( arg(0) == 'node' && is_numeric( arg(1) ) && arg(2) == 'edit' ) {
        // Editing an existing my_content_type.
        // Only allow powerful admins to change group audience.
        // Note: 'bypass node access' is the machine name of the permission
        // labelled "Bypass content access control" in the admin UI.
        if ( ! user_access( 'bypass node access' ) ) {
            $form['og_group_ref']['#access'] = FALSE;
        }
    }
}
Finally, as you may see in the form customisation code, I've added a custom validation function to the form. Technically speaking it would more properly be a submit function as I'm not validating anything. Instead, I'm setting the group audience from the group node ID which I saved during the form customisation. By making it a validation function I did not need to think about whether the new node had already been saved when my function ran. I assume that those clever Drupal folks have already dealt with this but, as I'm not saving anything myself, I figured I'd save time by not finding out. Here's the custom validation function:

function _my_module_force_group( $form, &$form_state ) {
    $form_state['values']['og_group_ref']['und'][0]['target_id'] =
        $form_state['values']['preset-group'];
    return TRUE;
}

And that's it. Group members can now post within their groups with minimum hassle.

http://yakhairsurplus.com/post-content-current-organic-group/1d3b990a-9626-42db-ad96-cdb75e9e0c03Fri, 28 Jun 2013 11:00:00 GMT
<![CDATA[Getting Shotgun to listen on multiple domains]]>Just a very quick one this week.

I wrote a 301 redirect inside my Sinatra app in order to send visitors from the www subdomain to the root domain, but how could I test this locally? I'm sure there are lots of ways to do this but I had a go at doing it using Shotgun as that's what I use in local development.
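The redirect itself is simple. Here's a sketch of how it might look (this is my illustration, not the author's actual code; the host-rewriting is pulled out into a plain Ruby helper, with the Sinatra wiring shown in comments):

```ruby
# Rewrite e.g. http://www.example.com/path to http://example.com/path.
def strip_www(url)
  url.sub(%r{\A(https?://)www\.}, '\1')
end

# In a Sinatra app this would be roughly:
#   before do
#     if request.host.start_with?('www.')
#       redirect strip_www(request.url), 301
#     end
#   end
```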

In my hosts file I have lots of domains directed at 127.0.0.1, as I work with lots of Apache virtual hosts. I added another one for www.localhost. I then navigated to www.localhost in my browser but got a message saying that a connection to the server could not be made.
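For reference, the relevant part of the hosts file would look something like this (a sketch; the real file will have many more entries):

```
127.0.0.1   localhost
127.0.0.1   www.localhost
```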

I thought this should have worked, since the default host for shotgun to listen on is 127.0.0.1 (like all those entries in the hosts file) but, apparently, that's not how it works. Instead I started shotgun listening on 0.0.0.0, like so:

shotgun -o 0.0.0.0

and then it did work. My app received the request and redirected the browser from www.localhost to localhost.

Addendum: I eliminated some complication by running shotgun on port 80 so that the URLs did not require an explicit port. I did this by shutting down Apache and then running Shotgun through rvmsudo.

http://yakhairsurplus.com/getting-shotgun-listen-multiple-domains/1eec0c2a-0915-45ce-ba6a-16566e9c1f9eThu, 06 Jun 2013 11:00:00 GMT
<![CDATA[GUI versus PUI]]>Despite failing to stick to my week-old first-thing-Friday-morning blogging routine, I have managed to sit down and start this. If you're reading this on Friday evening (and what else would anyone want to do?) then I'm probably patting myself on the back.

Today I started implementing a new Drupal site. I have some frustrations with Drupal ("Here have a poorly documented associative array. No, really, take one. There's plenty more where that came from.") but I'd be a fool if I denied that it's a very fast way of getting content managed sites done. The main aim at this stage was to get the basic elements together so that the designer could easily see the workflow in action and start thinking about how he could make it look pretty.

So I install some modules. They don't immediately solve my problem. That's not surprising. Module authors are not mind readers. So I try configuring the modules to do what I want. Soon I find I'm wading through admin screens with help from Google. Sometimes I'm really not sure what admin screen I should be looking at. Then it dawns on me that I'm trying to program through a GUI (graphical user interface) and whenever I have that realisation I feel a bit nauseous. I feel disconnected from what's happening in my website and have no clear idea of how long it will take me to resolve the problem.

After a couple of hours I stopped. I couldn't help mentally coding what I'd need in order to accomplish the same thing in Ruby and Sinatra. I've done a couple of webapps in Sinatra recently and it was a joy to use. I felt like a programmer. Mainly I felt like a programmer because I was in charge of the data. In the Ruby world, the majority of gems (the conventional package type, as modules are in Drupal) won't inject markup into your web page and won't touch a database. Gems are basically tools for the input, output, and transformation of data. By using them, you are using what I now think of as a PUI (programmer's user interface). Or if you insist on being conventional about it, an API (application programming interface).

I guess what it comes down to is that I prefer programming to configuring an existing, albeit highly modular and extensible, application. I think in future I'll look for more work that lets me stick with the PUI.

http://yakhairsurplus.com/gui-versus-pui/fee5cc41-e3f7-44da-8b2c-50f89e4d7be0Fri, 24 May 2013 11:00:00 GMT