The best kittens, technology, and video games blog in the world.

Monday, October 31, 2016

Breadboard computer - experimenting with switch board


I got my first delivery of parts, and I wanted to test whether a breadboard is a viable alternative to soldering, starting with the switch board. The idea is very simple:

  • power lines on top of the breadboard are connected to +5V
  • power lines on bottom of the breadboard are connected to GND
  • there's a direct connection from GND to the bottom of every switch, using 8-wire cables
  • there's an individual 8-wire cable carrying the output from every line
  • there are individual resistors (I used 220 Ohm) connecting the top of every switch with the power line
Here's a quick electrical diagram of each of the eventual 7x8 output lines:
I can already see a weakness of this design - I have 56 resistors to connect (7 switch packs of 8 switches each), but there are only 50 connection points (10 groups of 5 each), which forces connecting some of them to rows under each other. As resistors have long leads, they end up touching each other, and testing shows that this already isn't working properly.

Possible solutions:

  • get resistor networks (like 8 resistors with a common terminal), so that I'd only need 7 connections to power lines, not 56 – that was actually my original idea, and what I used in my original (soldered) design, but strangely I couldn't find any on Amazon or other sites
  • cut resistor leads shorter – might still not be enough
  • use a second board for just the resistors, connecting them like DIP elements
  • use fewer switch packs per breadboard to reduce crowding - it will still be awkward due to the inconvenient 5-grouping on power lines, but presumably with only 5 packs, I could use two 5-groups per 8-switch pack
There's one more minor issue - on these breadboards the power lines are broken in half, so after connecting the halves I'd only have 93 out of 100 connection points left (1 to get power in, 3x2 to connect the line halves), plus some extra awkward cabling.


The plan right now is to get a working switch board (it doesn't necessarily have to be 56-wide, 40-wide would be fine) and a diode board. These were invaluable debugging tools during my first attempt.

Friday, October 28, 2016

Review of scryfall - mtg.wtf competitor

♥  My Girl ♥ by Trish Hamme from flickr (CC-BY)

After magiccards.info got borderline abandoned, I wrote mtg.wtf as a much better search engine for Magic cards.

There have been a few other attempts by other people, but they generally had extremely simple search functionality, so nothing worth writing about. A few days ago a new site came out - scryfall - which is at least seriously trying to support MCI-style syntax, so let's take a look.

Basic design choices

Let's compare MCI, MW, and SF:
  • All search engines treat every printing and every card part as a separate object
  • SF follows MW, and avoids polluting search results with non-English cards
  • SF unfortunately follows MCI and pollutes o: search with reminder text
  • SF includes nonstandard cards (like Planes) in all searches. MCI completely skips them. MW only searches for them if requested explicitly (by t:* or by specific search like t:plane).
  • SF decided to skip un-cards. MCI has them, and MW not only has them, it even has a lot of special queries (like for fractional cmc) which only makes sense for un-cards.
  • I have no idea why the hell SF included this. Is that some joke, or a phantom card?
I feel fairly ambivalent about including nonstandard cards by default, and perhaps SF's simple approach is better. As for everything else, they'd probably be better off matching MW.

Search engine issues

I tried to run searches from their syntax example page and compare them with what mtg.wtf returns.

Their results had a lot of bugs, such as:
  • wrong cmc for all meld cards
  • wrong cmc for some DFC cards
  • many cards wrongly tagged as is:commander, like Brisela, Voice of Nightmares, Kenzo the Hardhearted, or Ormendahl, Profane Prince.
They also made some really weird design choices, which I'd honestly just call bugs, as all of them make searching less useful:
  • color identity search being >=, while what you want pretty much every time is <= - you're generally filtering for cards fitting a specific commander, not the other way around
  • mana search being >= on symbols, so if you search m:4 it matches {4}{G} but not {5}. That's weird as hell, as the natural search would be equality, and failing that at least treating {4} as {1}{1}{1}{1}.
  • Weirdly they decided that cards with changeling should be returned for searches for all creature/tribal types. So if you're searching for monkeys, you get 1 monkey and 30 changelings.
  • cards like conspiracies are listed as "banned" instead of simply excluded from all formats
Some of their choices are probably fine either way, but MW generally follows MCI in such borderline cases:
  • is:timeshifted changed its meaning from MCI, and SF is:colorshifted does what MCI is:timeshifted would.
  • color search being "and" instead of "or" like MCI/MW. So c:rg searches for cards which are red and green, as well as any additional colors. This is easily achieved on MCI/MW with search for c:r c:g.
  • they treat 2015 frame update as different frame type, which MCI/MW doesn't
SF doesn't have any of MW's extensions. The first such limitation I ran into is that there's no way to search for split cards like Fire // Ice or t:creature // t:planeswalker.

They added a bunch of search aliases like color:red for c:r - I looked through the list, and I added some of them to MW as well, because there's pretty much no cost.

mtg.wtf's codebase is all open source, so I'd recommend they just take my test suite and use it to make their engine better.

Extra features

The most ambitious feature they have is that they want to instantly include cards from upcoming sets as they are revealed, not only after a set appears on Gatherer. This is sweet, and definitely seems like way too much work for me at mtg.wtf.

They have card price integration, which I still haven't added, as all card pricing information seems to be hidden behind private APIs.

They have an online API, which I guess I could add as well, but you can just download the code from GitHub and run it all locally, without even bothering with the API. I could add an API to MW quite easily too, but it would be useful to do some performance testing first, as APIs often end up getting an order of magnitude more requests than real users do.

SF has much better feedback for search query syntax errors than MW. That's definitely a good area of improvement. On the other hand their spelling correction is awful - compare SF with MW.

MW has been rather conservative with website design, and SF is trying something much more ambitious.

Friday, October 14, 2016

Design patterns for parsing binary files in Ruby

Sand Cat by Tanya Durrant from flickr (CC-ND)

A while ago I wrote about how to analyze binary file formats.

Video games have a ton of custom binary file formats, and to mod them I kept writing parsers.

All parsers were very similar in form, but they varied in too many details to make it possible to extract any viable gem from that.

So let's write a parser.

Just read the whole file

Even if the file is fairly big, it's probably still far easier to just read it all in one go and save yourself the pointless file I/O.

require "pathname"
class FooParser
  def initialize(path)
    @path = Pathname(path)
    @data = @path.open("rb", &:read)
    @ofs = 0
  end
end

We're saving the @path mostly to make exception messages we'll throw more meaningful.

@data is the file's entire contents - we read it in binary mode ("rb") to deal with binary vs Unicode and Windows newline silliness.

@ofs is the current offset, as we'll be processing the data in a mostly linear way.

Some common helper functions

These aren't necessary for all parsers, but it's fairly common to see code like this:

class FooParser
  def bytes_left
    @data.size - @ofs
  end

  def eof?
    @data.size == @ofs
  end

  def fail!(message)
    raise "#{message} at #{@path}:#{@ofs}"
  end
end

Such functions aren't really doing parsing work, they're mostly there for debugging and for sanity checks.

Get some data

Now let's get some data:

class FooParser
  def get(n)
    fail! "Trying to read past end of file" if bytes_left < n
    result = @data[@ofs, n]
    @ofs += n
    result
  end
end

Most files are byte-oriented, and going through a helper function to get the next few bytes and manage offset and end of file will save us complexity elsewhere.

This kind of function is also a great place to handle file formats which are unusual somehow. The vast majority of formats are byte-oriented and fit in memory just fine, but if your format is too big, or uses individual bits, or does anything else weird, you can generally encapsulate such logic in the get method - as long as @ofs uniquely describes the current location in the file.

Get integers

Most formats are full of numbers and strings, and it's useful to write get methods for all of them. Let's start with Integers as they're easiest.

I generally use concise byte count convention (so get_i4 means get signed integer with 4 bytes), but more verbose bit-based naming (like get_i32 or even get_int32) might be more to your liking.

For most of them there's a standard #unpack directive, so the getters are very simple - usually the only difference is the right unpack string based on whether your format is big- or little-endian:

class FooParser
  def get_u1
    get(1).unpack("C")[0]
  end

  def get_u2
    get(2).unpack("v")[0]   # little-endian; use "n" for big-endian
  end

  def get_u4
    get(4).unpack("V")[0]   # little-endian; use "N" for big-endian
  end

  def get_i4
    get(4).unpack("l<")[0]  # explicitly little-endian - plain "l" is native-endian
  end
end

Very rarely a file format uses a more unusual number type, like a 3-byte or variable-length integer. Their get_* methods will look different from format to format, but you should generally be able to encapsulate them.

Here's an example - for 24-bit numbers we can just pad them with an extra byte (0 for unsigned; 0 or 255 for signed, depending on the sign bit), and use the proper unpack string for 32-bit numbers (big-endian in this case):

class FooParser
  def get_u3
    ("\x00".b+get(3)).unpack("N")[0]
  end

  def get_i3
    s = get(3)
    prefix = (s[0].ord >= 128 ? "\xFF".b : "\x00".b)
    (prefix+s).unpack("l>")[0]
  end
end

Usually you can either add some padding, or just read them a byte at a time and do some math to get the right number.

Get floats

This is slightly more complicated, for two reasons. First, the most common float format in files is single precision, but Ruby uses double precision, and unpacking generates questionable extra precision:

[0.1].pack("f").unpack("f")[0] #=> 0.10000000149011612

Especially if you plan to present the result to users, you often don't want that - you want the "nicest looking" double precision number which still converts to the same single precision number.

Now I'm sure there's some more "precise" algorithm, but this prettification method worked well:
class Float
  def pretty_single
    result = round(6)
    return result if [self].pack("f") == [result].pack("f")
    self
  end
end

What we do here is round to 6 decimal places, check if the rounded result converts to the same single precision value as self, and if so return it, otherwise return self. It works with infinities, NaN, -0 and all other weirdness.

With that it's easy to write a single precision float getter:
class FooParser
  def get_flt
    get(4).unpack("f")[0].pretty_single
  end
end

The second problem is that Ruby's unpack still lacks half-precision floats, which are extremely popular in the GPU world - so data destined for the GPU often uses them, as do formats just trying to save space.

In such cases, just get_u2 and do the bit manipulation yourself.
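
Here's a minimal sketch of such a getter, assuming the standard IEEE 754 binary16 layout (1 sign bit, 5 exponent bits, 10 significand bits) stored little-endian - the get_f2 name is just made up to match the convention above:

class FooParser
  def get_f2
    h = get_u2
    sign = (h & 0x8000) == 0 ? 1.0 : -1.0
    exp  = (h >> 10) & 0x1F
    frac = h & 0x3FF
    if exp == 0       # zero and subnormals
      sign * frac * 2.0**-24
    elsif exp == 31   # infinities and NaN
      frac == 0 ? sign * Float::INFINITY : Float::NAN
    else              # normal numbers
      sign * (1 + frac / 1024.0) * 2.0**(exp - 15)
    end
  end
end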

Get strings

Strings are just as ubiquitous as numbers, but there are a few more formats, including:
  • character count (often u2), followed by 1-byte ASCII characters - get(get_u2)
  • character count (often u2), followed by 2-byte UTF-16 characters - get(get_u2*2).unpack("v*").pack("U*")
  • NUL-terminated ASCII / UTF-8 - result = ""; result << c while (c = get(1)) != "\x00"; result
  • Constant size ASCII field (here 256 1-byte characters), padded with NULs - get(256).sub(/\x00*\z/, "")
  • Constant size UTF-16 field (here 256 2-byte characters), padded with NULs - get(512).unpack("v*").pack("U*").sub(/\x00*\z/, "")
Just create an appropriate get_str method based on whichever encoding your format uses.

It's not uncommon for a single file format to have more than one string encoding - in such cases create multiple methods, or make get_str accept an argument to switch between them.
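
For example, assuming a format with u2-counted strings, the first two variants could look like this (the method names are just a convention, not anything standard):

class FooParser
  def get_str
    get(get_u2)                              # u2-counted ASCII
  end

  def get_str_utf16
    get(get_u2 * 2).unpack("v*").pack("U*")  # u2-counted UTF-16
  end
end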

Get individual data structures

The generic part of the parser is done, so now for every data structure in your format you need to write an appropriate parsing method, returning a Hash, Array, or proper object, depending on the structure:

class FooParser
  def get_vector3d
    x = get_flt
    y = get_flt
    z = get_flt
    {x: x, y: y, z: z}
  end
end

For more concise code you can often shortcut this thanks to ruby evaluating arguments left to right (like most languages nowadays, but there were dark times when undefined or backwards evaluation conventions existed too):

class FooParser
  def get_vector3d
    {x: get_flt, y: get_flt, z: get_flt}
  end
end

Complete the parser

And the last part is completing the parser. If the whole file consists of the same data structure repeated, just write some kind of get_everything method following the same pattern and you're done.

Even then I'd recommend adding a sanity check like fail! "Parsing done, but #{bytes_left} bytes left" unless eof? just to be sure the format is what we expected - extra garbage bytes generally indicate that the file is not quite in the format expected.

Of course things are rarely quite as simple. It's very common for file formats to have a structure like:
  • header starting with some magic
  • header then includes some global data and counts for data structures to follow
  • after that individual data structures follow, based on data in the header

Example parser

To avoid the complexity of real formats, let's try a fictional format with kittens and puppies:
  • magic marker "CUTE"
  • u4 number of kittens
  • u4 number of puppies
  • each kitten has star rating (float), number of lives left (u4) and photo url (str)
  • each puppy has star rating (float) and photo url (str)
class FooParser
  def get_kitten
    { rating: get_flt, lives_left: get_u4, url: get_str}
  end

  def get_puppy
    { rating: get_flt, url: get_str}
  end
  def parse!
    fail! "Wrong magic number" unless get(4) == "CUTE"
    kitten_count = get_u4
    puppy_count = get_u4
    result = {
      kittens: kitten_count.times.map{ get_kitten },
      puppies: puppy_count.times.map{ get_puppy },
    }
    fail! "Parsing done, but #{bytes_left} bytes left" unless eof?
    result
  end
end

Offset-based formats

OK, that's one easy file format, but what if instead of data structures following one another, the header contains a list of offsets, and the data structures live at the positions those offsets point to?

That's going to take some jumping around, so let's write a helper for that:

class FooParser
  def at_ofs(ofs)
    saved_ofs, @ofs = @ofs, ofs
    yield
  ensure
    @ofs = saved_ofs
  end
end

Then it's fairly straightforward. In formats like that you usually won't end up at the end of the file when you finish parsing, so the eof? sanity check is not applicable. Let's say our format is:
  • magic marker "CUTE"
  • u4 number of kittens
  • that many u4 offsets to kitten data structures (relative to the beginning of the file)
  • (rest of the file, presumably containing kitten data)

class FooParser
  def parse!
    fail! "Wrong magic number" unless get(4) == "CUTE"
    kitten_count = get_u4
    kittens = kitten_count.times.map do
      at_ofs(get_u4) { get_kitten }
    end
    {kittens: kittens}
  end
end

There are weirder data structures, like offsets relative to the current location, etc. You can generally keep all the offset mangling in helper functions and keep your parsing code high level.
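
For example, a hypothetical helper for offsets relative to the current position could just delegate to at_ofs - which base the format counts from (before or after reading the offset field) is something you'd need to check per format:

class FooParser
  def at_relative_ofs(delta, &block)
    at_ofs(@ofs + delta, &block)
  end
end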

Lookahead

This is more a debugging thing than something you'll need often in actual parsing, but sometimes in your pry session you'd like to see if what's following is a kitten, without affecting the current parser state. It's very easy:

class FooParser
  def lookahead
    saved_ofs = @ofs
    yield
  ensure
    @ofs = saved_ofs
  end
end

Then in your pry session just try lookahead{ get_kitten } and it will give you a kitten (or throw an exception).

Limitations

Writing Ruby-based parsers following the patterns described in this post has been far more convenient and flexible than anything else I tried, but there are limitations:
  • you need to have some clue about the format, or at least the data structures at its beginning - there are some debugging helpers, but no autodetection, and the only "partial parsing" you can conveniently do is for the beginning of the file - if you only know some data structures in the middle of the file, it will still be hard to find them and partially decode them without knowing the rest of the format
  • it's inherently one-way - you'll need to write a separate encoder, even though some clever DSL might have given you 90% of it for free. I found that the overhead of highly abstract DSLs is generally much higher than just writing it twice, and it's relatively easy to test that they match by round trip testing - just grab some samples, and check that decoding followed by encoding gets you the sample back.
  • its performance is OK, but not amazing - we're working in a very high level language, doing byte manipulation with a lot of layers of abstraction above it - it's actually quite surprising how fast it works, but it's still nowhere near as fast as a parser written in a very low level language
  • it's designed for full parsing in one go, not random access - this is mostly a performance issue, but if you want to just read (let alone change) a single kitten in a file with thousands, this pattern won't support you
  • if the format is crazy complex, you'll probably need a much more sophisticated parser - this pattern is great for simple to moderately complex formats

Wednesday, October 12, 2016

How I made mtg.wtf fast

red kitten 01 by mathias-erhart from flickr (CC-SA)
Website speed was one of the most common complaints about mtg.wtf, so I decided to do something about it.

It's not necessary reading, but if you want more background, you can read about the architecture of mtg.wtf in my previous post.

Verify the problem

The site seemed "fast enough" to me, but multiple users complained about its performance, so presumably there was something real about it.

I still wanted to make sure. It's important to verify that the problem you're trying to fix is real and on your side, so as not to waste time fixing problems which might be caused by someone's crappy mobile connection or ISP throttling.

I checked Rails logs, and indeed - the timings I saw there were pretty poor. There wasn't even much need for statistical analysis - very simple queries shouldn't routinely take ~1000ms.

Have a test suite

Before I started any performance work, I needed a test suite to validate that I wasn't breaking anything. Fortunately mtg.wtf was as TDD as it gets, and had very comprehensive tests - even if I'm retrospectively somewhat regretting using minitest for that.

If you don't have a good test suite, don't despair - for optimizations you just need to find a reasonably representative sample of inputs (you can use logs for that), and record the current version's outputs for them. If something breaks in the process, you'll get different results, so you'll instantly know.

It's not as good as a dedicated test suite, as your random sample probably lacks important edge cases a proper test suite would generally have, but you get 80% of the value for 20% of the effort, and you can add a few tests for edge cases as you go.

Generate a benchmark

This was easy enough - I downloaded logs, extracted a list of queries from them, took a random sample, and tested time per request.

The best thing about this set was that I could very easily generate new benchmarks for specific types of queries with just grep - so when optimizing, let's say, card type searches, I could look for t:, which gives a much more accurate result than just throwing everything at it (assuming my optimizations don't affect other query types, of course).

As a secondary benchmark I also recorded the time to run the test suite. I didn't expect it to be a super sensitive measurement, but it's nice to have extra validation for zero effort, and I was going to run the tests a lot anyway.

A small complication for any kind of benchmarking is that you want your computer to not be overly busy, or it will contaminate the results. It doesn't need to be completely idle, as computers have multiple cores and you'll generally be using only one, but I closed things like Chrome and used my other computer for such things for a while.

Profiler

This is optional, and profiling tools in ruby are fairly weak and awkward, but I ran ruby-prof anyway to get some idea about code hot spots.

It identified which parts of the code were taking the most time, but profiler output is mostly there to generate suggestions, not definitive answers.

Precompute data

Shockingly, the biggest hot spot was a very simple query type, ConditionExact, which corresponds to querying for an exact card - with queries like !goblin guide or !Far // Away.

The query was actually very slow, as it was going through the entire database, doing something like this:
  • go through the whole database, every card printing
  • take query, downcase, fix unicode, fix spacing
  • take card.name, downcase, fix unicode, fix spacing
  • if they're the same, it's a match
That was really slow, and also quite silly. Card names don't have spacing problems (like double spaces between words), and we're already fixing unicode silliness like the "Æ" ligature when we load the database. As for the query, we can precompute all of this when we parse it and create the Condition object.

It still needed the card name downcase step, as the card name is stored as something like Goblin Guide, but it needs to match goblin guide, GOBLIN GUIDE, and whichever other capitalization people use.

So the algorithm became:
  • go through the whole database, every card printing
  • take preprocessed query
  • take card.name, downcase
  • if they're the same, it's a match
That's still fairly inefficient, but it skips a huge amount of pointless string manipulation.

Similar preprocessing was later applied to a lot of other conditions.
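
As a minimal sketch of the idea (with illustrative names, not mtg.wtf's actual code) - all the expensive normalization happens once in the constructor, so match? only pays for the card name downcase:

class ConditionExact
  def initialize(query)
    # downcase, fix spacing etc. once, at parse time
    @query = query.downcase.gsub(/\s+/, " ").strip
  end

  def match?(card)
    card.name.downcase == @query
  end
end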

Separate common and uncommon cases

Going back to ConditionExact, it was actually two different queries:
  • exact card name - like !Goblin Guide
  • multipart card name - like !Far // Away
The multipart case was much more complex, as the search engine represents every part of a multipart card as a separate object, so it needed somewhat complex logic to handle this.

So I moved all the rare and complex code to ConditionExactMultipart, and I just had the common and simpler case left to optimize.

In a few other cases I set a flag if certain complex processing was needed. For example we support queries like o:"when ~ enters the battlefield", where "~" matches the name of the card - so for example it matches Zealous Conscripts, as it contains the text "When Zealous Conscripts enters the battlefield, ...".

Such queries require more complex logic than most o: (card text) queries, which just look for the same text every time and can do more preprocessing.

In this case we set a flag in the ConditionOracle constructor and then follow different logic based on that flag.

This optimization only makes sense when the simpler condition is also the common one - if most cases are complex, you're not really gaining much by splitting them.

Use indexes

Most queries use ConditionSimple, which just tests card printings one at a time with the match? method.

The alternative is for a condition to process the whole database and generate a Set of results in one go.

These two modes are combined by the most common ConditionAnd, which first gets all the Sets, takes their intersection, and then filters it with each ConditionSimple one at a time.

Direct result Set generation was already used for a few queries - like searching for cards printed in a specific edition, or a specific block. I added that to format search as well, as f:standard is a reasonably common phrase in queries which benefits a lot from this kind of early culling. Bigger formats like f:modern or f:commander didn't significantly benefit from this optimization, but they didn't seem to suffer either, so I applied it everywhere.

Of course the biggest beneficiary of direct access was ConditionExact, which got a downcased card name index, and can now return results almost instantly.
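
Such an index can be as simple as a Hash from downcased name to the Set of printings, built once - after that every exact name lookup is O(1). An illustrative sketch, not the actual code:

require "set"

def build_name_index(printings)
  index = Hash.new { |h, k| h[k] = Set.new }
  printings.each { |c| index[c.name.downcase] << c }
  index
end

# name_index["goblin guide"] #=> Set of all Goblin Guide printings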

I didn't add any more indexes, as ruby is very memory hungry and keeping memory use low is very important. Other queries were either much less common or wouldn't benefit anywhere near as much.

Optimize memory use

I wanted to take a quick go at reducing memory use, but unfortunately ruby memory profiling tools are still very poor, so I just did a few intuitive things like removing unnecessarily duplicated data.

I pondered going further, but the speed improvements I got for the random sample of queries were already huge. I didn't need anything complex like adding caching, getting a bigger box, trying JRuby, or moving the search engine to an external service.

Verify the solution

Even though the changes looked really good in benchmarks, it's important to verify them in production. So I checked the results, and indeed the vast majority of requests are fast now.

There are a few which are still somewhat slow - like someone trying to get the 52nd page of results for all cards in EDH - but it's a reasonably safe bet that most such weird requests are made by bots.

Overall, it's been a fairly straightforward and successful process.

Monday, October 10, 2016

Architecture of mtg.wtf search engine

There's nothing revolutionary here, it's a reasonably straightforward design. All the code discussed can be seen in its repository.

What are we searching?

We're searching "cards", but that's not specific enough, as there are two big complications:
  • Some cards have multiple printings
  • Some cards have multiple parts
So we need to decide if our searchable object is:
  • one part, one printing
  • one part, all printings
  • all parts, one printing
  • all parts, all printings
Far // Away is an example of a multipart card

The vast majority of cards have only one part, and a lot of the time a card's properties are the same between printings, so we could get away with other choices, or even inconsistent semantics - but I've chosen to operate on the most granular option: "a single part of a single printing of a card".

Indexer

Raw data comes mostly from mtgjson, but the first step is converting it into a more usable form - which is actually just more json. I keep both raw and processed json files in the repository, mildly annoying GitHub, but in this case it's the best practice, as tests can't run without the data, so the code is incomplete without those jsons.

Indexer does some transformations and validations, and groups card data into properties which should be the same between printings (like name, text, cost, color) and those which can vary (like artist, rarity, flavor text).

When I started the project mtgjson data was of high quality, but it has gotten a lot worse since it got new maintainers, so now I need to do a lot more fixing (and reporting those bugs upstream did nothing).

Every now and then I feel like I should just fork mtgjson and fix all those issues, but it's a pile of javascript and duct tape, so I mostly hope they get their act together again.

Because the indexer started as a very simple script and accumulated hacks over time (mostly due to deteriorating mtgjson quality), it's the ugliest part of the codebase. It has no dedicated tests, but as the whole test suite operates on real data, any bug in the indexer or mtgjson is likely to be picked up by tests down the line.

Loading data

The library loads data from the processed json files and transforms it into an in-memory representation as Ruby objects. There's no dedicated database or anything like that - we're doing everything directly.

There are currently 16686 cards and 31648 printings, so it's not a tiny dataset, but it's also not unreasonably huge, so this fairly naive approach works.

Query Parser

We parse queries with StringScanner - one of the least known parts of the ruby standard library - generating a Query object.
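
A minimal sketch of what StringScanner-based tokenization looks like (illustrative, not mtg.wtf's actual grammar):

require "strscan"

s = StringScanner.new('t:creature o:"enters the battlefield"')
tokens = []
until s.eos?
  s.skip(/\s+/)
  if s.scan(/(\w+):"([^"]*)"/)   # quoted operator argument
    tokens << [s[1], s[2]]
  elsif s.scan(/(\w+):(\S+)/)    # bare operator argument
    tokens << [s[1], s[2]]
  elsif s.scan(/\S+/)            # plain word
    tokens << [:word, s.matched]
  end
end
tokens #=> [["t", "creature"], ["o", "enters the battlefield"]]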

The Query object contains a tree of Condition objects corresponding to the query. In addition to searching, the Query object is also responsible for sorting results (by name, most recent, etc., with the sort: operator) and for time travel.

Yes, time travel - I find it occasionally useful to search for results as if I was searching at a particular date in the past - so cards not printed at that time are not in search results, and format legalities are changed accordingly. This isn't literally going into the past, as we're not trying to do anything like reverting Oracle text to old wordings (which wouldn't be all that helpful), but that's how you can search for what to play when you somehow get to play flashback Innistrad Standard.

The time: operator is on the query level, so you can't do cute things like asking for cards legal in both Ravnica Standard and Return to Ravnica Standard.

Condition

Condition objects must implement a #search(db) method which takes the database and returns a Set of matching card printings.

This probably seems like a weird interface, so most conditions (those returning true when asked #simple?) also implement a secondary interface - #match?(card_printing) returning true or false - avoiding allocations of intermediate Sets.

ConditionAnd, ConditionOr, and ConditionNot use one or the other API depending on the type of their subconditions.
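
A sketch of how that combination could look (names are illustrative, and assume db.printings returns the Set of all printings):

require "set"

class ConditionAnd
  def search(db)
    complex, simple = @conditions.partition { |c| !c.simple? }
    # intersect the Sets from complex conditions, or start from everything
    candidates = complex.map { |c| c.search(db) }.inject(:&) || db.printings
    # then filter the remaining candidates one printing at a time
    candidates.select { |card| simple.all? { |c| c.match?(card) } }.to_set
  end
end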

The reason for this design is that some subqueries ask about another part of the card, or another printing. For example you can check for cards printed in both M10 (Magic 2010) and M11 (Magic 2011) with e:m10 alt:e:m11. Doing a recursive search and checking all subconditions would be exponentially slow in the worst case, while this design is always linear in query size.

Searching for edition (e:) and block (b:) always returns a small number of results, and we already have to maintain set data, so we reuse it as an index and return small result Sets from it. Other queries don't use any indexing and just check cards one by one.

For most conditions either all printings match a query or none do, so a special case for conditions which don't deal with anything printing-specific could potentially speed up a lot of queries. On the other hand, while some cards have a crazy number of printings, the average is just 1.9 printings/card, so it's hard to tell if such savings would be worth the added complexity.

Of course more indexes and a more sophisticated query optimizer would be possible - or even using some dedicated search engine like solr. This didn't seem necessary due to the relatively small amount of data. Fortunately the search engine has very extensive tests, so if I ever feel like developing one, it would be reasonably easy to do so safely.

Spelling correction

Many Magic cards have names which are very easy to misspell.
How do I spell this exactly?
QueryParser deals with it - if a search returns zero results, it sets the fuzzy flag on Conditions and retries the whole search. A few conditions will autocorrect your query, and any such autocorrections are logged and displayed with the search results.

This two step process (as well as diacritics stripping and a very limited amount of stemming) deals with most misspelled searches with good usability, while keeping false positives to a very low level.

Results coalescence

At this point we get a sorted list of printings, which is not quite what we want to display - all printings of the same card need to be merged (which is just .group_by(&:name)) - and since we'll be displaying the card picture from a specific printing, we also want to choose the most appropriate one.

The algorithm for that:
  • choose only from matching printings
  • prefer those with a card picture (a small number of rare promos don't have pictures) - that's a frontend consideration the database has no idea about
  • otherwise use order returned by search engine, which is:
    • whichever order was explicitly requested if any
    • name in alphabetical order
    • Standard-legal sets first
    • most recent first
After that we paginate the results.

Future direction for mtg.wtf

The most common requests are:
  • improving website performance
  • adding card pricing information via some third party API
  • advanced search form
The advanced search form is a UI problem for which I don't have a solution yet, as all advanced search forms are atrocious, leave most possible options out, or both.

Third party APIs for pricing ended up more complex than I thought, but I'll probably get there at some point.

Performance complaints probably mean that current architecture could use some improvements after all. It seems fast enough to me, but possibly that's not good enough.

Sunday, October 09, 2016

Solving sudoku with ruby and Z3


Solving Sudoku is a bit like FizzBuzz for constraint solvers, but since Z3 is very poorly known, and Z3 for ruby has few users other than me, I want to write a series of posts explaining various Z3 techniques, starting from very simple problems.

Parsing Sudoku

Before we get to Z3, let's read test data. A nice format for sudoku would be a text file like this:

_ 6 _ 5 _ 9 _ 4 _
9 2 _ _ _ _ _ 7 6
_ _ 1 _ _ _ 9 _ _
7 _ _ 6 _ 3 _ _ 9
_ _ _ _ _ _ _ _ _
3 _ _ 4 _ 1 _ _ 7
_ _ 6 _ _ _ 7 _ _
2 4 _ _ _ _ _ 6 5
_ 9 _ 1 _ 8 _ 3 _

Where _s represent empty cells, and numbers represent pre-filled cells.

Fairly straightforward code like this can parse it into a 9x9 Array of Arrays:

File.read(path).strip.split("\n").map do |line|
  line.split.map{|c| c == "_" ? nil : c.to_i}
end

Getting us a data structure like this:

[[nil, 6, nil, 5, nil, 9, nil, 4, nil],
 [9, 2, nil, nil, nil, nil, nil, 7, 6],
 [nil, nil, 1, nil, nil, nil, 9, nil, nil],
 [7, nil, nil, 6, nil, 3, nil, nil, 9],
 [nil, nil, nil, nil, nil, nil, nil, nil, nil],
 [3, nil, nil, 4, nil, 1, nil, nil, 7],
 [nil, nil, 6, nil, nil, nil, 7, nil, nil],
 [2, 4, nil, nil, nil, nil, nil, 6, 5],
 [nil, 9, nil, 1, nil, 8, nil, 3, nil]]

Z3 workflow

First, we need to get the solver:

solver = Z3::Solver.new

After that we feed it a bunch of formulas with solver.assert formula.
How would we describe the sudoku problem in plain English?
  • There are 9x9 integer variables
  • Each of them is between 1 and 9
  • They correspond to prefilled data, unless that data is nil
  • Each row contains distinct values
  • Each column contains distinct values
  • Each 3x3 square contains distinct values
Once we tell Z3 about all that, we check if our formulas are solvable with solver.check == :sat, and if so, get the model with solver.model - I feel these two steps should probably be refactored into one, but let's leave it for now.

The model can then be accessed with model[z3_variable] syntax to get our answers. So let's get to it!

Creating variables

There's nothing special about Z3 variables - they're sort of like ruby Symbols with associated types. So we could build our formulas with names like Z3.Int("cell[4,8]") - such a name is just for our own use, and doesn't mean cells form an array or anything.

However, since we'll be referring to the same 9x9 variables all the time it's probably useful to save them to an Array of Arrays.

cells = (0..8).map do |j|
  (0..8).map do |i|
    Z3.Int("cell[#{i},#{j}]")
  end
end

Setting possible values

All variables are between 1 and 9, which is very easy to tell the solver about:

cells.flatten.each do |v|
  solver.assert v >= 1
  solver.assert v <= 9
end

Setting prefilled variables

Telling the solver that a Z3 Int variable is equal to some ruby Integer is completely straightforward:

cells.each_with_index do |row, i|
  row.each_with_index do |var, j|
    solver.assert var == data[i][j] if data[i][j]
  end
end

All rows contain distinct values

If we relied on basic operations, we might have to give Z3 a lot of inequalities per row; fortunately there's the Z3.Distinct(a,b,c,...) formula, which handles this very common case.

It makes it really easy:

cells.each do |row|
  solver.assert Z3.Distinct(*row)
end

All columns contain distinct values

We can use Array#transpose to flip our multidimensional array, and do the same thing we did for rows:

cells.transpose.each do |column|
  solver.assert Z3.Distinct(*column)
end

All squares contain distinct values

This is a slightly more complex rearrangement, but it's all pure ruby, with Z3 getting a very similar looking formula in the end:

cells.each_slice(3) do |rows|
  rows.transpose.each_slice(3) do |square|
    solver.assert Z3.Distinct(*square.flatten)
  end
end

By the way, you should really take a look at the Enumerable API - it contains a lot of useful methods which will save you a ton of time.

Get the model and print it

And we're basically done:
raise "Failed to solve" unless solver.check == :sat
model = solver.model
cells.each do |row|
  puts row.map{|v| model[v]}.join(" ")
end

Getting the answer we want:

8 6 3 5 7 9 2 4 1
9 2 5 3 1 4 8 7 6
4 7 1 8 2 6 9 5 3
7 1 4 6 8 3 5 2 9
6 8 9 7 5 2 3 1 4
3 5 2 4 9 1 6 8 7
1 3 6 2 4 5 7 9 8
2 4 8 9 3 7 1 6 5
5 9 7 1 6 8 4 3 2

Avoid temptation to optimize

Everything we did here was so ridiculously straightforward - we skipped even the most obvious shortcuts.

For example, why the hell did we assert that cells are between 1 and 9 even for cells whose value we know perfectly well? And why did we even create variables for them if we already know their values? If we coded any kind of manual sudoku solver, we'd probably start with those shortcuts.

It turns out Z3 really doesn't care about such optimizations, and will solve your problem just as efficiently either way - but by "optimizing", your code becomes more complicated, and there's a good chance you'll make an error trying to "optimize".

That's not to say Z3 is made of magic, and that there are no cases where you can help it - but that's much more likely to be by restating a problem in a different way than by some microoptimizations.

Some pitfalls

One small pitfall to remember is that Z3 values - including those returned by solved models - override all operators, including ==, so you can't check if, for example, model[Z3.Int("x")] == 5 - that will just create a Z3 Bool expression.

This is necessary behaviour for some more complex use cases, but for the simplest cases you'll generally want to #to_s whatever you get out of the model and go on from there.
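
A quick illustration of the pitfall:

x = model[Z3.Int("x")]
x == 5            # a Z3 Bool expression, not true or false!
x.to_s == "5"     # string comparison works
x.to_s.to_i == 5  # or go through an Integer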

API is subject to change

Some details will probably change in future versions to simplify things - especially everything from solver.check onwards, which will probably receive a simpler API for the most common case - but the existing one will still be supported, as it's necessary for some more complex situations.

Integer variables constrained to a specific range are another commonly used pattern which could use some shortcuts.

Z3 is crazy powerful

This was a very simple example. You could probably write a sudoku solver yourself - even if it'd take a lot more time, and would probably perform a lot worse than Z3.

This manual approach doesn't scale - with a constraint solver, solving much bigger and more complex problems is literally as fast as writing the constraints, while with a manual approach the constraint list is barely a starting point, and naive approaches get exponentially slow almost right away.

Due to unfortunate accidents of programming history constraint solvers got relegated to an obscure niche, but they really ought to be a tool in every good programmer's toolkit, and once you learn them you'll find applications for them all the time.

Saturday, October 08, 2016

Z3 Constraint Solver library for Ruby

Weazel by Lcrward from flickr (CC-ND)

Wouldn't it be awesome if you could just describe a problem to the computer, like let's say a sudoku, and it would find the answer for you?

That was the premise of "logic programming" in the 1970s and 1980s - which was also one of many cases where Japan tried to do something different from everybody else, and failed - but this is not a post about history.

Logic programming suffered a lot worse than other paradigms. Lisp, Smalltalk, Perl, and friends left a rich legacy of features for future programming languages to mine, but Prolog variants were all one big dead end.

Which is a bit of a shame, as some problems are really best coded by giving the computer a list of constraints and telling it to have a go at it - and it's very difficult to write a decent custom algorithm for them.

Technically you could use constraint solver libraries even without logic programming, but they were mostly only available in obscure Lisp / Prolog dialects, hard to compile on modern systems, barely documented, and/or extremely limited.

This somewhat changed with Z3 by Microsoft Research, which even got a rudimentary Python interface.

But while Python is tolerable, I got tired of all the self. nonsense, and I wanted a nice Ruby DSL, so I wrote Ruby bindings. You can install them as the z3 gem, but first you need the z3 library itself (brew install z3 or whatever works on your system).

If you want a quick look, check the examples folder. If you want a longer explanation, let's keep going.

Basic model

The basic Z3 workflow is:
  • create solver with Z3::Solver.new
  • create a bunch of formulas, generally starting with variables like Z3.Int("a"), Z3.Bool("b") etc. - formulas are independent of any solver, they're basically symbol trees.
  • assert some facts about those variables, like solver.assert Z3.Int("a") == Z3.Int("b") + Z3.Int("c")
  • ask solver to check if your set of formulas is satisfiable with solver.check. If it returns :sat, you're good to go.
  • get Z3::Model with solver.model
  • ask model for value of various variables with queries like model[Z3.Int("a")] etc.
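
Here's a minimal end-to-end example following those steps:

require "z3"

solver = Z3::Solver.new
a = Z3.Int("a")
b = Z3.Int("b")
solver.assert a + b == 10
solver.assert a - b == 4
if solver.check == :sat
  model = solver.model
  puts model[a] #=> 7
  puts model[b] #=> 3
end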

Sorts

Let's go deeper, one step at a time. Values in Z3 all belong to "Sorts" which are sort of like types.

To create a Sort object, just instantiate its class, like Z3::BoolSort.new; then you can create a relevant object like Z3::BoolSort.new.var("my_var") or Z3::IntSort.new.from_const(5). This seems like unnecessary indirection, and it sort of is for the most common sorts, but there are fancier ones where it's necessary.

Some of the sorts are:
  • Z3::BoolSort.new - boolean variables
  • Z3::IntSort.new - integers, unlimited precision
  • Z3::RealSort.new - real numbers, unlimited precision 
  • Z3::BitvecSort.new(size) - bit vector sorts, of size bits
  • Z3::FloatSort.new(esize, ssize) - floating point sort of esize exponent bits and ssize significand bits
  • Z3::RoundingModeSort.new - floating point rounding mode sort
  • Z3::SetSort.new(element_sort) - set sort of elements of element_sort sort
  • Z3::ArraySort.new(key_sort, value_sort) - associative array sort with keys of key_sort, and values of value_sort
  • there are some more sorts in Z3 which are not yet implemented
For the vast majority of uses you want the Bool and Int sorts; for physics style calculations you want the Real sort (generally don't use Floats for them). These have decently complete and nice APIs.

Bitvec and Float sorts should work reasonably well, but their APIs are somewhat awkward - for example a lot of Bitvec operations come in separate signed and unsigned variants, so you need to do awkward things like Z3.Bitvec("a", 32).signed_gt(100) instead of the more obvious but ambiguous Z3.Bitvec("a", 32) > 100.

Float APIs are even worse, as most operations need a rounding mode argument passed, and printing out results uses precise but unintuitive notation like 1.25B+5.

Sets, Arrays, and fancier stuff are basically unfinished.

The obvious missing sort is any kind of finite domain integer, like Int[1..9] - in Z3 you need to create an Int variable, and then tell the solver that it's within a certain range, like solver.assert (Z3.Int("a") >= 1) & (Z3.Int("a") <= 9).
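
If you do this a lot, a tiny helper is easy to write (illustrative, not part of the gem):

def int_in_range(solver, name, range)
  v = Z3.Int(name)
  solver.assert (v >= range.first) & (v <= range.last)
  v
end

a = int_in_range(solver, "a", 1..9)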

ASTs

You want to give Z3 a bunch of formulas, and these generally start with variables like sort.var("name"). As Z3::BoolSort.new.var("name") is fairly long, shortcuts like Z3.Bool("name") are provided.

From that point, you can construct your formulas with fairly straightforward ruby - a + b == c means exactly what you'd expect if a is Z3.Int("a") and so on.

There are of course some complications:
  • ruby won't let us override !, &&, and || - so we use ~, &, and | for booleans - and these have different parsing priorities, so you might need to add some parentheses
  • you can't directly check if a value is equal to something else, as foo == 0 will create a z3 Bool expression, not give you true/false. It's a great case where some extra operators would be useful, but for now just .to_s whatever you want to extract or check.
  • some operations don't have easy operator equivalents, so other syntax is provided. For some common examples - to assert that a bunch of values are all different, you can use Z3.Distinct(a, b, c, ...), for z3 ternary use (bool_expression).ite(if_true, if_false).
  • we'll automatically convert ruby values like true or 42 to z3 expressions, but if you try to mix sorts without converting them appropriately you'll generally get an exception
  • ASTs are interned, so every Z3.Int("a") + 2 == Z3.Int("b") is going to be the same underlying object.

Library layers

Z3 is a huge library, and it really doesn't map to anything resembling a sensible ruby API, so there are many layers here:
  • We use ffi to create Z3::VeryLowLevel interface of raw C calls. You should never use it directly.
  • After that there's Z3::LowLevel interface which basically deals with context management, array arguments, and mapping Ruby object arguments (but not return values) to FFI pointers. You should also never use it directly.
  • Then there are legitimate Ruby objects. A lot of them are wrappers around C pointers, so using SomeClass.new(c_pointer) to create them directly is not really supported. These pointers can be accessed by attributes starting with underscore (like _ast, _sort etc.), but it's all for internal use only.
  • The main reason for this anal system is that if you mess anything up, you'll get a crash. For some things Z3 library raises error which we then turn into Z3::Exception, but other things just crash your program with segmentation fault. I added a lot of checks for operations which might end with segmentation fault so you get Z3::Exception instead, but I'm sure I missed some things.
  • The reference counting memory management z3 uses is not exactly compatible with ruby, so if you use it in a very long running process like a Rails server, it might cause problems. It's usually going to be no worse than Symbol interning.
  • The default interface doesn't give any guarantees for running time. The underlying z3 library has ways to restrict solver check running time, but they're not exposed in any convenient way yet.

No semantic versioning

Z3 is a huge library (I only described the basics), and the gem is not only incomplete, but many of its APIs are fairly poor.

Until I release 1.0, any version can freely break any API, so if you want to rely on a stable version, just depend on an exact one - or help me complete it faster.

The regular flow with Bool, Int, Real and Solver shouldn't break too often, but Bitvec, Float etc. feel awkward and could use a nicer API.

Pull requests welcome.

Lessons from making mtg.wtf mobile-friendly

When you are weary,  feeling small by Michael Taggart Photography from flickr (CC-NC)
mtg.wtf is currently the best search engine for Magic: the Gathering cards. It's reasonably successful and popular, so let's go through its history quickly.

It started as a wishlist of improvements I'd like to see for magiccards.info.

Then once magiccards.info stopped being updated, I decided to write a replacement. I started by writing a bunch of tests for how I'd like it to work, mostly using data from mtgjson.

After I had tests, I wrote a search engine as Ruby library with command line interface. I thought about making it a standalone gem at this point, but I'm not sure how useful people would find it, considering json data is out there already.

After that I added a rudimentary Rails frontend with minimal styling and put it online.

At this point it was text only, so I had to find card scans. WotC only publishes somewhat low quality scans, which are usable enough, but I wanted something better. For about 2/3 of cards I found higher quality scans, but it was a fairly slow and manual process. That still leaves some rare promos for which there's just no source of any kind - it wasn't a huge deal, as few people search specifically for them; the search engine just needed to display a card with a good picture instead of the most recent version (which might be a weirdo promo).

Then I added some CSS on-hover animations. Most cards have a regular layout, but there are some cards which are double-faced, have a horizontal layout ("split" cards), or have content on top and bottom ("flip" cards). Later they even added pairs of cards which merge ("meld") into one big card with halves printed on their back faces. All those cases required, for decent usability, either a completely different display layout or some animation support - and CSS animations turned out to be pretty easy to add.

Bootstrap

At this point I was getting feedback that it didn't work too well on mobiles.

Well, it's not too hard - we just need to add a header telling mobile browsers to not be stupid:

  <meta content='width=device-width, initial-scale=1' name='viewport'>

Then I installed bootstrap, and replaced some existing layout code with the bootstrap 12-column grid. It was a surprisingly easy retrofit - other than bootstrap's use of .card colliding with my use of .card, I don't remember any surprises. Markup looked a bit worse due to extra <div class="row">, <div class="col-md-8"> etc., but it's not too bad thanks to HAML.

Making it responsive

I have a bunch of mobile devices around, and testing on one or two devices is not too much trouble, but I wanted to make sure the site would work well on everything, so I used Device Mode in Chrome Developer Tools to check a wide range of sizes, portrait and landscape, added a bunch of responsive bootstrap tags, and pushed the changes to the server.

  • For big screens the site used desktop layout - 4 columns for the card picture, 6 for card information, 2 for extra information
  • For medium screens the layout was 6 columns for the pic and 6 for information, with extras hidden
  • For small screens the layout was a full row for the card pic, and a full row for card information underneath, with extras hidden
That was the most usable configuration in Chrome Device Mode.

Animation problems

There were two immediate problems. First, Mobile Safari users reported that CSS animations were broken - it turned out they required some prefixed properties for backface-visibility which nobody told the Rails CSS autoprefixer about.

The second problem was that on very small screen sizes (where the card was already full screen width) there was simply no space to do some of the animations - I ended up doing messy screen size queries in my CSS code, matching Bootstrap size thresholds, to turn them off.

And it was all wrong because "pixel" is a lie

I was quite happy with it, so I expected my users to be as well, but the feedback I got was overwhelmingly negative. Users found the site unusable on mobiles, even though I had crammed in as much information as could reasonably fit in Chrome Device Mode.

The thing that got the most backlash was placing card text under the card pic. But if I put a tiny card picture next to the text, nobody could see anything, right? Oh wait, those mobile "pixel" numbers are total bullshit. That 100 "pixel" width is actually a lot more device pixels, so even though the card looked like a completely unreadable thumbnail in Chrome Device Mode, it looked just fine when I tried to use it on an actual phone.

And even if the card picture was too small, so what? It's just as easy to pinch to zoom as it is to scroll down a bit - something that doesn't apply to desktops (where Cmd+ / Cmd- can zoom, but they don't focus on a specific area, so they're much less useful).

This also means that the 6 pic / 6 info layout on bigger devices was excessive - so I switched all non-desktop devices to 5 pic / 7 info, as that's what looked best on the actual mobiles and tablets I tried it on, even though it looks horrible in Chrome Device Mode.

So now the site looked good on desktop (with a big browser window), and on mobile devices with 2x or higher pixel density (of any size).

That doesn't cover mobile devices with 1x pixel density (which fortunately basically nobody uses any more), or a desktop browser with a small window (which happens, but worst case users can resize the window).

That leads to two big lessons:
  • Chrome Device Mode is horribly unrepresentative as it uses 1x not 2x pixel density
  • Bootstrap device width targeting is fairly unrepresentative as it ignores pixel density
Getting rid of the whole-width card pic layout let me get rid of most CSS animation workarounds. There are still some for gallery view, like Nils Hamm's cards from Dragon's Maze, where animations need to be aware of which column the card is in.

Soft Keyboard

Now it looked good, but I ran into one more problem. On desktop we want the search box to start focused, via the HTML5 autofocus attribute, so the user can start typing a query without clicking anything.

Now, most good browsers understand opensearch annotations, so if you ever visited mtg.wtf, they'll remember it, and you can just type mtg<tab>your query in the URL bar to get straight to search results - but not everybody uses it this way.

But we don't want to autofocus on mobiles - at least on some mobile browsers a huge soft keyboard then appears, covering half of the results and making everything miserable. Other browsers sensibly require a click to show the soft keyboard even with the autofocus attribute.

Unfortunately I never found any way to query whether a device has a real keyboard or not. So the backup logic - autofocus on the home page (which has no results, so a soft keyboard covering empty space is no big deal), but not on other pages (which have results) - ended up being a workable compromise.

It would be nice if someone figured out which browser/device combinations have this issue and created some device-specific autofocus library, as this can't be the only site affected by the problem.

Overall, it wasn't too bad


Even with all the surprises, it all went better than expected.

But imagine that the world had been different - once upon a time mobiles tried to create their own separate web, with completely separate protocols and formats. How insane would it be if they had succeeded?