Toxic Elephant

Don't bury it in your back yard!

No-one understands SemVer

Posted by matijs 25/07/2018 at 06h59

I started reading this, and came upon this line:

Many people claim to know how SemVer works, but have never read the specification.

And I thought: Yes! This is exactly the problem. Everyone talks about SemVer, but no-one reads the specification, so the discussions don’t make sense. Finally, someone is going to Make Things Clear!

And then I read this:

Note: Stop trying to justify your refactoring with the “public but internal” argument. If the language spec says it’s public, it’s public. Your intentions have nothing to do with it.

What!? This person complains about people not reading the specification, and then proceeds to contradict the very first article of the SemVer specification? Here it is (highlight mine):

Software using Semantic Versioning MUST declare a public API. This API could be declared in the code itself or exist strictly in documentation. However it is done, it should be precise and comprehensive.

Whether the language spec says it’s public has little to do with it.

Now, there’s a discussion going on on Hacker News about this article, and clearly I’m not the only one bothered by the quote above, but the commenters are focused on whether languages allow you to control what part of your API is exposed, rather than what the SemVer spec actually says.

No-one understands SemVer.


Importing GTG tasks into Taskwarrior

Posted by matijs 06/06/2018 at 12h47

I used to use Getting Things Gnome (GTG) to keep my TODO list. However, the project seems dead right in the middle of its Gtk+ 3.0 port, so I’ve been looking around for an alternative. After much consideration, I decided on Taskwarrior. I wanted to keep my old tasks and couldn’t find a nice way to export them from GTG, let alone import them into Taskwarrior. So in the end I decided to create my own exporter.

Getting Things Gnome keeps your tasks in some simple XML files in a known location, and HappyMapper is ideal for parsing them. I started out using its automatic mapping, but as my understanding of the GTG format deepened, I switched to explicitly mapping a Task’s attributes and elements.

On the other side, Taskwarrior can import simple JSON files that are super easy to create using JSON from the standard library. The script below will output this format to STDOUT. It’s up to you to use task import to process it further.

I implemented this as a spike, so there are no tests, but I like to think the design I ended up with is quite testable. I get annoyed whenever code becomes cluttered, or top-level instance variables start to appear. So I tend to quickly split off classes that have a distinct responsibility. I may yet convert this to a real gem and see how easy it is to bring everything under test.

Finally, before showing the code, I should warn you that it’s probably a good idea to back up your existing Taskwarrior data before playing with this.

Here’s the code:

#!/usr/bin/env ruby

require 'happymapper'
require 'json'

# Maps a single task from GTG's XML task file.
class Task
  include HappyMapper

  attribute :id, String
  attribute :status, String
  attribute :tags, String
  attribute :uuid, String

  element :title, String
  element :startdate, String
  element :duedate, String
  element :modified, DateTime
  element :donedate, String
  has_many :subtasks, String, tag: 'subtask'
  element :content, String
end

# Holds all tasks, provides lookup by id, and can find the root of a task's tree.
class TaskList
  def initialize(tasks)
    @tasks = tasks

    @tasks_hash = {}
    @tasks.each do |task|
      @tasks_hash[task.id] = task
    end
  end

  def each_task(&block)
    @tasks.each &block
  end

  def find(task_id)
    @tasks_hash[task_id]
  end

  def root_task(task)
    # Walk up the tree: a task's parent is any task that lists it as a subtask.
    parent = @tasks.find { |it| it.subtasks.include? task.id }
    parent ? root_task(parent) : task
  end
end

# Walks each task tree from its root, handing every task to the handler exactly once.
class TaskProcessor
  def initialize(task_list, handler)
    @task_list = task_list
    @handler = handler
    @processed = {}
  end

  def process
    @processed.clear
    @task_list.each_task do |task|
      next if @processed[task.id]
      root = @task_list.root_task(task)
      process_task root
    end

    @task_list.each_task do |task|
      raise "Task #{task.id} not processed" unless @processed[task.id]
    end
  end

  def self.process(task_list, handler)
    new(task_list, handler).process
  end

  private

  def process_task(task, level = 0)
    @handler.handle(task, level)
    @processed[task.id] = true
    process_subtasks task.subtasks, level + 1
  end

  def process_subtasks(subtask_ids, level)
    subtask_ids.each do |task_id|
      raise "Task #{task_id} already processed" if @processed[task_id]
      task = @task_list.find(task_id)
      process_task task, level
    end
  end
end

# Converts each task to a Taskwarrior-style JSON hash and prints it to STDOUT.
class TaskWarriorExporter
  def initialize(task_list)
    @task_list = task_list
  end

  def handle(task, level)
    status = case task.status
             when 'Dismiss'
               'deleted'
             when 'Done'
               'completed'
             when 'Active'
               'pending'
             else
               raise "Unknown: #{task.status}"
             end

    data = {
      description: task.title,
      status: status,
      uuid: task.uuid,
    }
    if task.duedate
      if task.duedate == 'soon'
        data[:priority] = 'H'
      else
        data[:due] = task.duedate
      end
    end
    data[:end] = task.donedate if task.donedate
    data[:scheduled] = task.startdate if task.startdate

    entry = guess_entry(task)
    data[:entry] = entry

    subtask_uuids = task.subtasks.map do |subtask_id|
      @task_list.find(subtask_id).uuid
    end
    if subtask_uuids.any?
      data[:depends] = subtask_uuids.join(',')
    end
    data[:tags] = task.tags unless task.tags.empty?
    if task.content
      data[:annotations] = [ { entry: entry, description: task.content } ]
    end
    puts data.to_json
  end

  private

  def guess_entry(task)
    dates = [task.duedate, task.donedate, task.startdate].compact.
      reject { |it| %w(someday soon).include? it }.
      sort
    dates.first || task.modified.to_s
  end
end

# projects.xml records where the actual task file lives; it is small enough
# that HappyMapper's automatic mapping is good enough here.
projects_file = File.expand_path '~/.local/share/gtg/projects.xml'
projects = HappyMapper.parse File.read projects_file
tasks_file = projects.backend.path
tasks = Task.parse File.read tasks_file
task_list = TaskList.new tasks

TaskProcessor.process(task_list, TaskWarriorExporter.new(task_list))


Current thoughts on smart contracts

Posted by matijs 31/07/2017 at 13h50

  • Writing a contract such that the law is powerless to reverse it is anti-democratic. Libertarians will probably love it, but in canceling out the ‘oppressive’ state it also cancels any protections offered by the state.
  • Trust is a fundamental basis of human interaction. Creating a trustless way of cooperating lets agents avoid accountability for actions performed outside the contract.
  • Instead of the lame excuse ‘the law allows me to be an asshole’, we’ll get ‘the smart contract allows me to be an asshole’.


Private Toolbox: An Anti-Pattern

Posted by matijs 10/04/2016 at 09h21

This is an anti-pattern that has bitten me several times.

Suppose you have an object hierarchy with a superclass Animal and several subclasses: Worm, Snake, Dog, Centipede. The superclass defines the abstract concept move, which is realized in the subclasses in different ways, i.e., by slithering or walking. Suppose that due to other considerations, it makes no sense to derive Worm and Snake from a SlitheringAnimal, nor Dog and Centipede from a WalkingAnimal. Yet, the implementations of Worm#move and Snake#move have a lot in common, as do those of Dog#move and Centipede#move.

One way to solve this is to provide methods walk and slither in the superclass that can be used by the subclasses that need them. Because it makes no sense for all animals to be able to walk and slither, these methods would need to be accessible only to subclasses (e.g., private in Ruby).

Thus, the superclass provides a toolbox of methods that can only be used by its subclasses to mix and match as they see fit: a Private Toolbox.
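
A minimal sketch of what this looks like, using the names from the example above (the method bodies are left empty):

class Animal
  def move
    raise NotImplementedError
  end

  private

  # The private toolbox: every subclass inherits both helpers,
  # even though each one needs only one of them.
  def walk
    # shared walking logic
  end

  def slither
    # shared slithering logic
  end
end

class Dog < Animal
  def move
    walk
  end
end

class Snake < Animal
  def move
    slither
  end
end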

This may seem an attractive course of action, but in my experience, this becomes a terrible mess in practice.

Let’s examine what is wrong with this in more detail. I see four concrete problems:

  • It is not always clear at the point of method definition what a method’s purpose is.
  • Each subclass carries with it the baggage of extra private methods that neither it nor its subclasses actually use.
  • The superclass’ interface is effectively extended to include its non-public methods.
  • New subclasses may need to share methods that are not available in the superclass.

The Animal superclass shouldn’t be responsible for the ability to walk and to slither. And if we need more modes of movement later, we may not always be able to add them to the superclass.

We could extract the modes of movement into separate helper classes, but in Ruby, it is more natural to create modules. Thus, there would be modules Walker and Slitherer, each included by the relevant subclasses of Animal. These modules could either define move directly, or define walk and slither. Because the methods added in the latter case would actually make sense for the including classes, there is less need to make them private: One could make an instance of Dog walk, either by calling move, or by calling walk directly.
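
Here is a minimal sketch of that version, with each module defining move directly:

module Walker
  def move
    # shared walking logic
  end
end

module Slitherer
  def move
    # shared slithering logic
  end
end

class Animal
end

class Dog < Animal
  include Walker
end

class Snake < Animal
  include Slitherer
end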

This solves all four of Private Toolbox’ problems:

  • The module names reveal the purpose of the defined methods.
  • Subclasses that do not need a particular module’s methods do not include it.
  • The implementor of Animal is free to change its private methods.
  • If a new mode of transportation is needed, no changes to Animal are needed. Instead, a new module can be created that provides the relevant functionality.


Minimally Intrusive SimpleCov Loading

Posted by matijs 02/04/2016 at 16h55

I always like extra developer tooling to be minimally intrusive, to avoid forcing it on others working with the same code. There are several aspects to this: Presence of extra gems in the bundle, presence and visibility of extra files in the repository, and presence of extra code in the project.

For this reason, I’ve been reluctant to introduce tools like guard or some of the Rails preloaders that came before Spring. On the other hand, no-one would be bothered by my occasional running of RuboCop, Reek or pronto.

In this light, I’ve always found SimpleCov a little too intrusive: It needs to be part of the bundle, and the normal way to set things up makes it rather prominently visible in your test or spec helper. Nothing too terrible, but I’d like to just come to a project, run something like simplecov rake spec, and have my coverage data.

I haven’t reached that blissful state of casual SimpleCov use yet, but I’m quite pleased with what we achieved for Reek.

Here’s what we did:

  • Add simplecov to the Gemfile
  • Add a .simplecov file with configuration:
    SimpleCov.start do
      track_files 'lib/**/*.rb'
      # version.rb is loaded too early to test
      add_filter 'lib/reek/version.rb'
    end

    SimpleCov.at_exit do
      SimpleCov.result.format!
      SimpleCov.minimum_coverage 98.9
      SimpleCov.minimum_coverage_by_file 81.4
    end
  • Add -rsimplecov to the ruby_opts for our spec task:
    RSpec::Core::RakeTask.new('spec') do |t|
      t.pattern = 'spec/reek/**/*_spec.rb'
      t.ruby_opts = ['-rsimplecov -Ilib -w']
    end

This has several nice features:

First, there are no changes to spec_helper.rb. That file can get pretty cluttered, so the less has to be in there, the better.

Second, it only calculates coverage when running the full suite with rake spec. This means running just one spec file while developing won’t clobber your coverage data, and it makes running single specs a little faster since it doesn’t need to update the coverage reports.

Third, it enforces a minimum coverage per file and for the whole suite. The second point helps a lot in making this practical: Otherwise, running individual specs would almost always fail due to low coverage.


Repo size

Posted by matijs 25/09/2015 at 08h11

I just realized that one important factor in attracting casual open source contributions is code/repo size. A huge repo is a barrier. So, it’s hugely important to either use off-the-shelf libraries, or split off parts of your code into their own components. These components need to live in their own repositories, so no monorepos.

Of course, a high-status, high-visibility project can get away with more. Rails, for example, has all its components in one repository and does not seem to be lacking in contributions. On the other hand, for a long time Gnome required the full source for everything to be checked out and built together, which demanded a serious commitment for even the most trivial bug fixes.

Why the sudden insight? A project I’m involved in has problems with wkhtmltopdf: The version that used to work crashes after a server upgrade, and the version that doesn’t crash has problems with fonts and images. A simple solution could be to just recompile the old version on the new server. However, because it essentially forks all of Qt, checking out the source requires 1GB of disk space, while building requires another 2.5GB (and a commensurate amount of time). That is not something to undertake lightly.


Try to avoid try

Posted by matijs 28/07/2015 at 10h52

Because of a pull request I was working on, I had cause to benchmark activesupport’s #try. Here’s the code:

require 'benchmark'
require 'active_support/core_ext/object/try'

# Bar responds to #foo with an empty method; Foo does not respond to #foo at all.
class Bar
  def foo; end
end

class Foo; end

bar = Bar.new
foo = Foo.new

n = 1000000
Benchmark.bmbm(15) do |x|
  x.report('straight') { n.times { bar.foo } }
  x.report('try - success') { n.times { bar.try(:foo) } }
  x.report('try - failure') { n.times { foo.try(:foo) } }
  x.report('try on nil') { n.times { nil.try(:foo) } }
end

Here is a sample run:

Rehearsal ---------------------------------------------------
straight          0.150000   0.000000   0.150000 (  0.147271)
try - success     0.760000   0.000000   0.760000 (  0.762529)
try - failure     0.410000   0.000000   0.410000 (  0.413914)
try on nil        0.210000   0.000000   0.210000 (  0.207706)
------------------------------------------ total: 1.530000sec

                      user     system      total        real
straight          0.140000   0.000000   0.140000 (  0.143235)
try - success     0.740000   0.000000   0.740000 (  0.742058)
try - failure     0.380000   0.000000   0.380000 (  0.379819)
try on nil        0.210000   0.000000   0.210000 (  0.207489)

Obviously, calling the method directly is much faster. I often see #try used defensively, without any reason warranted by the logic of the application. This makes the code harder to follow, and this benchmark shows that this kind of cargo-culting can actually harm the performance of the application in the long run.

Some more odd things stand out:

  • Successful #try is slower than a failed #try plus a straight call. This is because #try does some checks and then calls #try!, which does one of those checks all over again (see the sketch below).
  • Calling #try on nil is slower than the straight call to the nearly identical empty method on bar. I don’t really have an explanation for this, but it may have something to do with the fact that nil is an instance of a special built-in class that may have different logic for method lookup.
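
For reference, here is roughly how #try is structured in ActiveSupport; this is a simplified sketch from memory, not the actual source:

class Object
  def try(*args, &block)
    # The respond_to? check is what makes try 'safe' on objects
    # that don't implement the method.
    try!(*args, &block) if args.empty? || respond_to?(args.first)
  end

  def try!(*args, &block)
    # args.empty? gets checked again here.
    if args.empty? && block_given?
      yield self
    else
      public_send(*args, &block)
    end
  end
end

class NilClass
  def try(*args)
    nil
  end
end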

Bottom line: #try is pretty slow because it needs to do a lot of checking before actually calling the tried method. Try to avoid it if possible.


In Ruby, negation is a method

Posted by matijs 30/01/2014 at 06h16

These past few days, I’ve been busy updating RipperRubyParser to make it compatible with RubyParser 3. This morning, I discovered that one thing that changed from RubyParser 2 is the parsing of negations.

Before, !foo was parsed like this:

s(:not, s(:call, nil, :foo))

Now, !foo is parsed like this:

s(:call, s(:call, nil, :foo), :!)

That looks a lot like a method call. Could it be that in fact, it is a method call? Let’s see.
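
A quick way to check: since Ruby 1.9, ! can be defined like any other method, so overriding it should change what !foo evaluates to. A small sketch:

class Foo
  def !
    'definitely a method call'
  end
end

foo = Foo.new
puts !foo
# => definitely a method call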


Things: A classification

Posted by matijs 19/01/2014 at 11h33

  • Things needed every day
  • Things needed every week
  • Things needed only during a certain season
  • Things needed for administrative purposes
  • Things kept for sentimental reasons
  • Things kept for beauty


Some thoughts on Ruby's speed

Posted by matijs 02/03/2013 at 16h42

Yesterday, I read Alex Gaynor’s slides on dynamic language speed. It’s an interesting argument, but I’m not totally convinced.

At a high level, the argument seems to be as follows:

  • For a comparable algorithm, Ruby et al. do much more work behind the scenes than ‘fast’ languages such as C.
  • In particular, they do a lot of memory allocation (see the small illustration below this list).
  • Therefore, we should add tools to those languages that allow us to do memory allocation more efficiently.
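
Not from the slides, but a quick way to make the allocation point concrete on a recent Ruby, using GC.stat’s :total_allocated_objects counter:

# Count how many objects Ruby allocates while running a block.
def allocations
  before = GC.stat(:total_allocated_objects)
  yield
  GC.stat(:total_allocated_objects) - before
end

words = %w(foo bar baz quux)
puts allocations { words.map(&:upcase).join(', ') }
# Each upcased string, the intermediate array and the joined result are
# separate heap objects; equivalent C code could get by with a single buffer.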
