Announcing microjson

If you’ve ever wanted a JSON parser that can unpack directly to fixed-extent C storage (look, ma, no malloc!) I’ve got the code for you.

The microjson parser is tiny (fewer than 700 LOC), fast, and very sparing of memory. It is suitable for use in small-memory embedded environments and in deployments where malloc() is forbidden in order to prevent memory-leak issues.

This project is a spin-out of code used heavily in GPSD; thus, the code has been tested on dozens of different platforms in hundreds of millions of deployments.

It has two restrictions relative to standard JSON: the special JSON “null” value is not handled, and object array elements must be homogeneous in type.

A programmer’s guide to building parsers with microjson is included in the distribution.
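
To give a flavor of the approach: you declare a template binding JSON attributes to your variables, then hand it to the parser. Here is a minimal sketch – the attribute and type names follow the distribution’s header as I recall them, so treat it as illustrative and check the programmer’s guide for the authoritative API:

    #include <stdio.h>
    #include <stdbool.h>
    #include "mjson.h"   /* header name as shipped in the microjson distribution */

    static int count;
    static bool flag;
    static char name[64];

    /* The template: each row binds a JSON attribute to fixed-extent storage. */
    static const struct json_attr_t attrs[] = {
        {"count", t_integer, .addr.integer = &count},
        {"flag",  t_boolean, .addr.boolean = &flag},
        {"name",  t_string,  .addr.string = name, .len = sizeof(name)},
        {NULL},
    };

    int main(void)
    {
        int status = json_read_object("{\"count\": 3, \"flag\": true, \"name\": \"demo\"}",
                                      attrs, NULL);
        if (status != 0)
            fprintf(stderr, "%s\n", json_error_string(status));
        else
            printf("count=%d flag=%d name=%s\n", count, flag, name);
        return 0;
    }

Note that nothing in that sequence allocates; everything lands in the fixed storage the template points at.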

30 comments

  1. How microjson is incorporated into GPSD, or alternatively how microjson got extracted from GPSD? Submodules? git-subtree? Something else?

    1. >How microjson is incorporated into GPSD, or alternatively how microjson got extracted from GPSD? Submodules? git-subtree? Something else?

      Something else, I’m afraid. The GPSD version has some features the microjson version doesn’t, notably a built-in ability to interpret ISO 8601 timestamps. Conversely, microjson accepts a couple of array element types for generality that the GPSD version doesn’t use.

  2. If you send an email to douglas you-know-the-drill crockford.com, he will probably add it to the json.org page when he gets time. My RSON parser is listed there.

    1. >If you send an email to douglas you-know-the-drill crockford.com, he will probably add it to the json.org page when he gets time.

      Done, thanks for the reminder.

  3. Paul J: jsmn takes JSON and gives you a sequence of higher-level “tokens” — things like strings, numbers, arrays — and their positions in the input text. This is a generic and flexible approach, but would lead to a lot of unpleasant boilerplate in the applications microjson is designed for, where you have a schema, known in advance, that maps neatly onto simple variables and arrays and structs.

    If you’re not sure what this means, then have a look at the example code for each. They feel different to use.
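
    To make the contrast concrete, here is roughly what the token-oriented style looks like with jsmn – a sketch of walking a flat object, where the caller still owes all the type checking and conversion that microjson’s templates do for you:

        #include <stdio.h>
        #include <string.h>
        #include "jsmn.h"

        int main(void)
        {
            const char *js = "{\"count\": 3, \"name\": \"demo\"}";
            jsmn_parser parser;
            jsmntok_t tokens[16];

            jsmn_init(&parser);
            int n = jsmn_parse(&parser, js, strlen(js), tokens, 16);

            /* Token 0 is the object itself; key/value tokens follow in pairs.
               Interpreting each span (string? number? in bounds?) is the
               caller's job -- this is where the boilerplate accumulates. */
            for (int i = 1; i + 1 < n; i += 2)
                printf("%.*s -> %.*s\n",
                       tokens[i].end - tokens[i].start, js + tokens[i].start,
                       tokens[i + 1].end - tokens[i + 1].start, js + tokens[i + 1].start);
            return 0;
        }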

    1. >I actually went one step further, so the user can define the mapping to storage in an ini-file, and then the mapping code is generated by a Perl-script.

      Heh. In GPSD, the template structures used to make microjson parse the JSON corresponding to Marine AIS packets are generated by a Python script. Which is, in turn, generated from the tables describing the protocol in its documentation.

      I’m turning over ideas in my head for a more general front end, some kind of declarative markup.

  4. > Something else, I’m afraid. The GPSD version has some features the microjson version doesn’t, notably a built-in ability to interpret ISO 8601 timestamps. Conversely, microjson accepts a couple of array element types for generality that the GPSD version doesn’t use.

    Why no common code and ifdefs (e.g. for the ISO 8601 timestamps), or some better mechanism for conditional compilation, if there is one…

    > Heh. In GPSD, the template structures used to make microjson parse the JSON corresponding to Marine AIS packets are generated by a Python script. Which is, in turn, generated from the tables describing the protocol in its documentation.

    BNF + generic parser (e.g. Marpa… though it is table-based, so it wouldn’t work on embedded targets; on the other hand it has much better error reporting than yacc et al. or most recursive-descent parsers).

    BTW. how well does microjson report errors in JSON and schema?

    1. >Why no common code and ifdefs (e.g. for the ISO 8601 timestamps), or some better mechanism for conditional compilation, if there is one…

      I looked into the possibility. The code is small enough that the hassle cost seemed larger than tossing diffs back and forth – though the features the GPSD version doesn’t use are present there but conditioned out.

      >BTW. how well does microjson report errors in JSON and schema?

      I think the parse error reporting is pretty good. You get 22 different error codes giving fine-grained info on how the parse failed.

      The template structs are in C; the only error reporting you get on them is from the compiler. I haven’t put more effort into that yet because I’m thinking about ways to programmatically generate them with provable correctness.

  5. The manual says integers and floats are parsed using strtol, strtoul, and strtod. To me this means that the parser may either fail on valid data, or fail to fail on invalid data, if the current C runtime locale differs from “C” (e.g. if the locale defines any kind of digit grouping or a decimal separator other than 0x2E). How do you propose handling this?

    1. >How do you propose handling this?

      GPSD’s version uses a locale-independent safe atof() function to get around this exact problem. I chose to lighten the microjson code by using native atof() and issuing this warning:

      Note that float parsing uses +atof(3)+ and is thus locale-sensitive – this
      affects whether period or comma is used as a decimal point. If in any
      doubt, set the C numeric locale explicitly to match your data source.
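
      In practice that means something like this before handing text to the parser (a sketch; LC_NUMERIC is the only category that matters here):

          #include <locale.h>

          int main(void)
          {
              /* Pin the numeric category to "C" so JSON floats like 3.14
                 parse identically no matter what locale the user runs in. */
              setlocale(LC_NUMERIC, "C");
              /* ... now call json_read_object() etc. ... */
              return 0;
          }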

  6. > If in any doubt, set the C numeric locale explicitly to match your data source.

    Why should there be any doubt? Interchange formats must be locale-independent, otherwise they are useless for interchange.

    The JSON spec only allows the hyphen-minus sign, the decimal period and no digit grouping.

    Many applications need to set the runtime locale to the user’s locale by calling setlocale(LC_ALL, ""), so as to match the user’s expectations in the UI. However, for data exchange, they still need to use the C locale. Dynamically setting the runtime locale is error-prone and may require interthread synchronization.

    By relying on the runtime-provided locale-dependent parsing functions, you make your library easy to use incorrectly and hard to use correctly.

    Consider including the locale-independent atof with the library, so that the library is correct by default, and users can strip it down if they are sure that they use the C locale.

    (The problem would be mitigated if setlocale(3) were thread-local, or if there were locale-explicit versions of strto* functions.)
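
    (As it happens, locale-explicit versions do exist as extensions: newlocale(3) is POSIX.1-2008, and strtod_l() is available on glibc, macOS, and the BSDs, though not everywhere. A sketch of a correct-by-default wrapper – the json_strtod name is mine:)

        #define _GNU_SOURCE   /* for strtod_l() on glibc */
        #include <locale.h>
        #include <stdlib.h>
        #include <stdio.h>

        /* Parse a float in the C locale regardless of the runtime locale.
           The lazy initialization here is unsynchronized; a real library
           would create the locale object once at startup. */
        static double json_strtod(const char *s, char **end)
        {
            static locale_t c_numeric = (locale_t)0;
            if (c_numeric == (locale_t)0)
                c_numeric = newlocale(LC_NUMERIC_MASK, "C", (locale_t)0);
            return strtod_l(s, end, c_numeric);
        }

        int main(void)
        {
            setlocale(LC_ALL, "de_DE.UTF-8");  /* a comma-decimal locale, if installed */
            printf("%g\n", json_strtod("3.14", NULL));  /* prints 3.14 either way */
            return 0;
        }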

    1. >By relying on the runtime-provided locale-dependent parsing functions, you make your library easy to use incorrectly and hard to use correctly.

      But shipping safe-atof makes it substantially more heavyweight.

      You have some significant points here, but this is not code for n00bs. I think the right philosophy here is to be minimalist and document how to address the problem.

  7. > But shipping safe-atof makes it substantially more heavyweight.

    I call inconsistency.

    In the G+ thread, you argue that CPU cycles are cheap and spending them in order to reduce bugs due to opaque representation is worthwhile. At the same time, here you argue that reducing bugs due to locale sensitivity is not worth shipping safe-atof.

  8. Heh. In GPSD, the template structures used to make microjson parse the JSON corresponding to Marine AIS packets are generated by a Python script. Which is, in turn, generated from the tables describing the protocol in its documentation.

    That’s cool. I’ve done something like that before: dumping a tabular spec for a fixed-width record scheme from Word to text, parsing that with Perl into a more amenable description of fields, widths, types, repetition, etc., and writing a Perl module to (a) parse that kind of data, (b) generate a Java parser, and (c) generate a C++ parser.

  9. Eh, error reporting is O.K., and I should probably not expect better from *micro* anything. Though I’d rather the error response include where it failed, why it failed, and optionally what was expected.

    Also WTF with no manpages, or technical documentation? For example what is the last argument of json_read_object() for?

    1. >Also WTF with no manpages, or technical documentation? For example what is the last argument of json_read_object() for?

      Ouch. OK, the programmer’s tutorial didn’t cover that. After parsing, the pointer to the rest of the text is deposited there. That’s so you can do multiple parses out of the same buffer.

      I’ll fix…
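
      In the meantime, usage looks something like this – a sketch of pulling several objects out of one buffer by chaining the end pointer (template conventions as in the tutorial examples):

          #include <stdio.h>
          #include "mjson.h"

          static int value;
          static const struct json_attr_t attrs[] = {
              {"value", t_integer, .addr.integer = &value},
              {NULL},
          };

          int main(void)
          {
              const char *next = "{\"value\": 1}{\"value\": 2}";

              while (*next != '\0') {
                  /* On success, next is advanced past the object just parsed. */
                  int status = json_read_object(next, attrs, &next);
                  if (status != 0)
                      break;   /* consult status / json_error_string() here */
                  printf("parsed value=%d\n", value);
              }
              return 0;
          }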

  10. >> […] no manpages, or technical documentation? For example what is the last argument of json_read_object() for?

    > Ouch. OK, the programmer’s tutorial didn’t cover that. After parsing, the pointer to the rest of the text is deposited there. That’s so you can do multiple parses out of the same buffer.

    Ah, so it is similar to how the strtol/strtoll/strtoul/strtod family of functions works. And it allows for better error recovery – you know where the wrong input is (it would be nice if this were in the tutorial, cf. the example in strtol(3)).

    BTW. how well does microjson handle backslash escaping, encoded UTF-8 (e.g. “\uD83D\uDE02”) and NUL byte (encoded as “\u0000”) in keys and values?

    1. >BTW. how well does microjson handle backslash escaping, encoded UTF-8 (e.g. “\uD83D\uDE02”) and NUL byte (encoded as “\u0000”) in keys and values?

      It does what you’d expect with C strings underneath. Most backslash escapes are interpreted properly, \u literals above 0xFF are truncated, and “\u0000” will drop a NUL into the string that client code will likely interpret as a terminator.

  11. > \u literals above 0xFF are truncated

    That’s probably what one should expect from a *mini*-library… converting Unicode characters to UTF-8 (or, better, a chosen encoding), taking into account stuff like surrogate pairs (UTF-16 style) for characters outside the BMP, would be too much. For example, converting “\uD83D\uDE02” to the U+1F602 character ‘😂’, UTF-8 encoded as “\360\237\230\202”, four bytes: F0 9F 98 82.
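
    For the record, the conversion is small but fiddly. A self-contained sketch of exactly that example – combining the surrogate pair into a code point and emitting UTF-8:

        #include <stdio.h>

        /* Encode a Unicode code point as UTF-8; returns the byte count. */
        static int utf8_encode(unsigned long cp, unsigned char *out)
        {
            if (cp < 0x80) {
                out[0] = (unsigned char)cp;
                return 1;
            } else if (cp < 0x800) {
                out[0] = (unsigned char)(0xC0 | (cp >> 6));
                out[1] = (unsigned char)(0x80 | (cp & 0x3F));
                return 2;
            } else if (cp < 0x10000) {
                out[0] = (unsigned char)(0xE0 | (cp >> 12));
                out[1] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
                out[2] = (unsigned char)(0x80 | (cp & 0x3F));
                return 3;
            } else {
                out[0] = (unsigned char)(0xF0 | (cp >> 18));
                out[1] = (unsigned char)(0x80 | ((cp >> 12) & 0x3F));
                out[2] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
                out[3] = (unsigned char)(0x80 | (cp & 0x3F));
                return 4;
            }
        }

        int main(void)
        {
            unsigned long hi = 0xD83D, lo = 0xDE02;
            /* Combine the UTF-16 surrogate pair into a single code point. */
            unsigned long cp = 0x10000 + ((hi - 0xD800) << 10) + (lo - 0xDC00);
            unsigned char buf[4];
            int n = utf8_encode(cp, buf);

            printf("U+%04lX ->", cp);
            for (int i = 0; i < n; i++)
                printf(" %02X", buf[i]);
            putchar('\n');   /* prints: U+1F602 -> F0 9F 98 82 */
            return 0;
        }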

    1. >converting Unicode characters to UTF-8 […] would be too much.

      Yes. In the original application context there was no reason to expect anything but printable ASCII.

  12. >Yes. In the original application context there was no reason to expect anything but printable ASCII.

    One of the key design features of UTF-8 (and its derivatives, including the Modified UTF-8 that encodes a literal NUL as 0xC0 0x80, in violation of strict UTF-8, so that the NUL is not interpreted as a string terminator) is that it can be fed to a program that expects printable “8-bit” ASCII (counting the extended 0x80–0xFF characters as “printable”) and be handled without any fumbling.

    If you need UTF-8, write the appropriate conversion filters and feed the UTF-8 to microjson, with another filter handling what it outputs. The mechanics of how microjson operates would not need to be changed.

  13. > One of the key design features of UTF-8 (and its derivatives, including the Modified UTF-8 that encodes a literal NUL as 0xC0 0x80, in violation of strict UTF-8, so that the NUL is not interpreted as a string terminator) is that it can be fed to a program that expects printable “8-bit” ASCII (counting the extended 0x80–0xFF characters as “printable”) and be handled without any fumbling.

    I think the problem is with proper handling of the \uXXXX encoding in JSON… which can include UTF-16-style surrogate pairs for characters outside the BMP.
