If you’ve ever wanted a JSON parser that can unpack directly to fixed-extent C storage (look, ma, no malloc!) I’ve got the code for you.
The microjson parser is tiny (fewer than 700 LOC), fast, and very sparing of memory. It is suitable for use in small-memory embedded environments and in deployments where malloc() is forbidden in order to prevent memory-leak issues.
This project is a spin-out of code used heavily in GPSD; thus, the code has been tested on dozens of different platforms in hundreds of millions of deployments.
It has two restrictions relative to standard JSON: the special JSON “null” value is not handled, and object array elements must be homogeneous in type.
A programmer’s guide to building parsers with microjson is included in the distribution.
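As a taste, here is roughly what use looks like, in the tutorial’s style (a minimal sketch; the header name and template-field names here are from memory and worth checking against the distribution):

    #include <stdbool.h>
    #include "mjson.h"

    /* Fixed-extent storage the parser fills in -- no malloc anywhere. */
    static bool flag1, flag2;
    static int count;

    /* The template: one row per expected attribute, each bound to a
     * C variable of matching type, with a NULL sentinel at the end. */
    static const struct json_attr_t json_attrs[] = {
        {"count", t_integer, .addr.integer = &count},
        {"flag1", t_boolean, .addr.boolean = &flag1},
        {"flag2", t_boolean, .addr.boolean = &flag2},
        {NULL},
    };

    int parse(const char *buf)
    {
        /* Returns 0 on success, a nonzero error code on failure. */
        return json_read_object(buf, json_attrs, NULL);
    }

Feed it {"count": 42, "flag1": true, "flag2": false} and the values land directly in the static variables.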
How is microjson incorporated into GPSD, or alternatively, how did microjson get extracted from GPSD? Submodules? git-subtree? Something else?
>How is microjson incorporated into GPSD, or alternatively, how did microjson get extracted from GPSD? Submodules? git-subtree? Something else?
Something else, I’m afraid. The GPSD version has some features the microjson version doesn’t, notably a built-in ability to interpret ISO 8601 timestamps. Conversely, microjson accepts a couple of array element types for generality that the GPSD version doesn’t use.
How does this differ from jsmn (http://zserge.com/jsmn.html)? I believe that also does not require malloc.
>How does this differ from jsmn (http://zserge.com/jsmn.html)? I believe that also does not require malloc.
Hm. It appears jsmn is *only* a parser. microjson goes further and unmarshals the parsed values into C storage as it parses.
If you send an email to douglas you-know-the-drill crockford.com, he will probably add it to the json.org page when he gets time. My RSON parser is listed there.
>If you send an email to douglas you-know-the-drill crockford.com, he will probably add it to the json.org page when he gets time.
Done, thanks for the reminder.
Paul J: jsmn takes JSON and gives you a sequence of higher-level “tokens” — things like strings, numbers, arrays — and their positions in the input text. This is a generic and flexible approach, but would lead to a lot of unpleasant boilerplate in the applications microjson is designed for, where you have a schema, known in advance, that maps neatly onto simple variables and arrays and structs.
If you’re not sure what this means, then have a look at the example code for each. They feel different to use.
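Very roughly, the jsmn side of a parse looks like this (a hedged sketch; the token-field names are from jsmn’s header and worth double-checking):

    #include <stdio.h>
    #include <string.h>
    #include "jsmn.h"

    /* jsmn hands back positioned tokens; mapping keys to C variables
     * is then your problem -- the boilerplate microjson's templates
     * absorb for you. */
    void dump_tokens(const char *js)
    {
        jsmn_parser p;
        jsmntok_t tok[32];
        jsmn_init(&p);
        int n = jsmn_parse(&p, js, strlen(js), tok, 32);
        if (n < 0) {
            printf("jsmn error %d\n", n);
            return;
        }
        for (int i = 0; i < n; i++)
            printf("token %d: type %d, text \"%.*s\"\n", i, tok[i].type,
                   tok[i].end - tok[i].start, js + tok[i].start);
    }

After that you still have to walk the tokens, strcmp the key text, and convert each value yourself.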
It’s funny, I wrote a reader/dumper library for INI files with exactly the same philosophy. :-) I actually went one step further, so the user can define the mapping to storage in an INI file, and then the mapping code is generated by a Perl script.
https://github.com/subogero/uCini
https://github.com/subogero/uCini-test
>I actually went one step further, so the user can define the mapping to storage in an INI file, and then the mapping code is generated by a Perl script.
Heh. In GPSD, the template structures used to make microjson parse the JSON corresponding to Marine AIS packets are generated by a Python script. Which is, in turn, generated from the tables describing the protocol in its documentation.
I’m turning over ideas in my head for a more general front end, some kind of declarative markup.
Seriously awesome code here — worthy of suckless.org.
The resource page contains a typo: “sexurity”.
> Something else, I’m afraid. The GPSD version has some features the microjson version doesn’t, notably a built-in ability to interpret ISO 8601 timestamps. Conversely, microjson accepts a couple of array element types for generality that the GPSD version doesn’t use.
Why not common code and ifdefs (e.g. for the ISO 8601 timestamps), or some better mechanism for conditional compilation, if there is one…
> Heh. In GPSD, the template structures used to make microjson parse the JSON corresponding to Marine AIS packets are generated by a Python script. Which is, in turn, generated from the tables describing the protocol in its documentation.
BNF + generic parser (e.g. Marpa… though it is table-based, so it wouldn’t work on embedded; on the other hand it has much better error reporting than yacc et al. or most recursive-descent parsers).
BTW. how well does microjson report errors in JSON and schema?
>Why not common code and ifdefs (e.g. for the ISO 8601 timestamps), or some better mechanism for conditional compilation, if there is one…
I looked into the possibility. The code is small enough that the hassle cost seemed larger than tossing diffs back and forth, though the features the GPSD version doesn’t use are present there but conditioned out.
>BTW. how well does microjson report errors in JSON and schema?
I think the parse error reporting is pretty good. You get 22 different error codes giving fine-grained info on how the parse failed.
The template structs are in C, the only error reporting you get on them is from the compiler. I haven’t put more effort into that yet because I’m thinking about ways to program generate them with provable correctness.
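For the parse errors, the usual checking pattern is just this (a sketch; json_error_string() is assumed to have come over from the GPSD code, and if not, a switch on the status codes does the same job):

    #include <stdio.h>
    #include "mjson.h"

    /* Report a failed parse; status is the return of json_read_object(). */
    void complain(int status)
    {
        if (status != 0)
            (void)fprintf(stderr, "microjson: %s\n",
                          json_error_string(status));
    }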
The manual says integers and floats are parsed using strtol, strtoul, and strtod. To me this means that the parser may either fail on valid data, or fail to fail on invalid data, if the current C runtime locale differs from “C” (e.g. if the locale defines any kind of digit grouping or a decimal separator other than 0x2E). How do you propose handling this?
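To make the failure mode concrete, a sketch (it assumes a comma-decimal locale such as de_DE is installed on the system):

    #include <locale.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* de_DE uses ',' as the decimal separator, so strtod() stops
         * dead at the '.' in perfectly valid JSON input. */
        if (setlocale(LC_NUMERIC, "de_DE.UTF-8") == NULL)
            (void)setlocale(LC_NUMERIC, "de_DE");
        char *end;
        double d = strtod("3.14", &end);
        printf("got %g, stopped at \"%s\"\n", d, end);
        /* Typically prints: got 3, stopped at ".14" */
        return 0;
    }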
>How do you propose handling this?
GPSD’s version uses a locale-independent safe atof() function to get around this exact problem. I chose to lighten the microjson code by using native atof() and issuing this warning: “If in any doubt, set the C numeric locale explicitly to match your data source.”
If you don’t already have a cheap, malloc-free way of formatting the streams to your console while developing/debugging, here’s one that I wrote a couple of years ago:
https://github.com/bsouther/jsonf
> If in any doubt, set the C numeric locale explicitly to match your data source.
Why should there be any doubt? Interchange formats must be locale-independent, otherwise they are useless for interchange.
The JSON spec allows only the hyphen-minus sign and the decimal period, with no digit grouping.
Many applications need to set the runtime locale to the user’s locale by calling setlocale(LC_ALL, ""), so as to match the user’s expectations in the UI. However, for data exchange, they still need to use the C locale. Dynamically setting the runtime locale is error-prone and may require interthread synchronization.
By relying on the runtime-provided locale-dependent parsing functions, you make your library easy to use incorrectly and hard to use correctly.
Consider including the locale-independent atof with the library, so that the library is correct by default, and users can strip it down if they are sure that they use the C locale.
(The problem would be mitigated if setlocale(3) were thread-local, or if there were locale-explicit versions of strto* functions.)
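(Where POSIX.1-2008 is available, uselocale(3) is exactly such a thread-local switch; a sketch of pinning LC_NUMERIC around one parse:)

    #define _POSIX_C_SOURCE 200809L
    #include <locale.h>
    #include <stdlib.h>

    /* Parse in the C numeric locale, thread-locally, without touching
     * the process-global locale other threads see. Error handling of
     * newlocale() is elided for brevity. */
    double parse_number_c_locale(const char *s, char **end)
    {
        locale_t c = newlocale(LC_NUMERIC_MASK, "C", (locale_t)0);
        locale_t prev = uselocale(c);
        double d = strtod(s, end);
        (void)uselocale(prev);
        freelocale(c);
        return d;
    }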
>By relying on the runtime-provided locale-dependent parsing functions, you make your library easy to use incorrectly and hard to use correctly.
But shipping safe-atof makes it substantially more heavyweight.
You have some significant points here, but this is not code for n00bs. I think the right philosophy here is to be minimalist and document how to address the problem.
> But shipping safe-atof makes it substantially more heavyweight.
I call inconsistency.
In the G+ thread, you argue that CPU cycles are cheap and spending them in order to reduce bugs due to opaque representation is worthwhile. At the same time, here you argue that reducing bugs due to locale sensitivity is not worth shipping safe-atof.
That’s cool. I’ve done something like that before: dumping a tabular spec for a fixed-width record scheme from Word to text, parsing that with Perl into a more amenable description of fields, widths, types, repetition, etc., and writing a Perl module to (a) parse that kind of data, and also (b) generate a Java parser, and (c) generate a C++ parser.
Eh, error reporting is O.K., and I should probably not expect better from *micro* anything. Though I’d rather the error response include where it failed, why it failed, and optionally what was expected.
Also WTF with no manpages, or technical documentation? For example what is the last argument of json_read_object() for?
>Also WTF with no manpages, or technical documentation? For example what is the last argument of json_read_object() for?
Ouch. OK, the programmer’s tutorial didn’t cover that. After parsing, the pointer to the rest of the text is deposited there. That’s so you can do multiple parses out of the same buffer.
I’ll fix…
>> […] no manpages, or technical documentation? For example what is the last argument of json_read_object() for?
> Ouch. OK, the programmer’s tutorial didn’t cover that. After parsing, the pointer to the rest of the text is deposited there. That’s so you can do multiple parses out of the same buffer.
Ah, so it is similar to how the strtol/strtoll/strtoul/strtod family of functions works. And it allows for better error recovery – you know where the wrong input is (it would be nice if it were in the tutorial, cf. the example in strtol(3)).
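Something in the spirit of the strtol(3) example, say (a sketch; it assumes the third argument is updated as described above):

    #include "mjson.h"

    /* Pull successive objects out of one buffer until the text runs
     * out or goes wrong; on failure, cursor marks roughly where. */
    void parse_stream(const char *buffer, const struct json_attr_t *attrs)
    {
        const char *cursor = buffer;
        while (json_read_object(cursor, attrs, &cursor) == 0) {
            /* ...use the freshly unpacked values here... */
        }
    }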
BTW. how well does microjson handle backslash escaping, encoded UTF-8 (e.g. “\uD83D\uDE02”) and NUL byte (encoded as “\u0000”) in keys and values?
>BTW. how well does microjson handle backslash escaping, encoded UTF-8 (e.g. “\uD83D\uDE02”) and NUL byte (encoded as “\u0000”) in keys and values?
It does what you’d expect with C strings underneath. Most backslash escapes are interpreted properly, \u literals above 0xFF are truncated, and “\u0000” will drop a NUL in the string that client code will likely interpret as a terminator.
> \u literals above 0xFF are truncated
That’s probably what one should expect from a *mini*-library… converting Unicode characters to UTF-8 (or, better, a chosen encoding), taking into account stuff like surrogate pairs (UTF-16-like) for characters outside the BMP, would be too much. For example, converting “\uD83D\uDE02” to the U+1F602 character ‘😂’, UTF-8 encoded as “\360\237\230\202”, four bytes: F0 9F 98 82.
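The arithmetic, for the record (a standalone sketch, not something microjson does):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Combine the JSON surrogate pair "\uD83D\uDE02" into one
         * code point, then UTF-8 encode it as a four-byte sequence. */
        uint32_t hi = 0xD83D, lo = 0xDE02;
        uint32_t cp = 0x10000 + ((hi - 0xD800) << 10) + (lo - 0xDC00);
        unsigned char u8[4] = {
            (unsigned char)(0xF0 | (cp >> 18)),
            (unsigned char)(0x80 | ((cp >> 12) & 0x3F)),
            (unsigned char)(0x80 | ((cp >> 6) & 0x3F)),
            (unsigned char)(0x80 | (cp & 0x3F)),
        };
        printf("U+%X -> %02X %02X %02X %02X\n", (unsigned)cp,
               (unsigned)u8[0], (unsigned)u8[1],
               (unsigned)u8[2], (unsigned)u8[3]);
        /* Prints: U+1F602 -> F0 9F 98 82 */
        return 0;
    }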
>converting Unicode characters to UTF-8 […] would be too much.
Yes. In the original application context there was no reason to expect anything but printable ASCII.
>Yes. In the original application context there was no reason to expect anything but printable ASCII.
One of the key design features of UTF-8 (and its derivatives, including the Modified UTF-8 that encodes a literal null as 0xC080, in violation of the rules of strict UTF-8, in order to avoid the null being interpreted as a string terminator) is that it can be fed to a program that expects printable “8-bit” ASCII (including the extended 0x80-FF chars in the definition of “printable”) and handle it without any fumbling.
If you need UTF-8, write the appropriate conversion filters and feed the UTF-8 to ?json, with another filter handling what it outputs. The mechanics of how ?json operates would not need to be changed.
Speaking of not handling non-ASCII, I guess “?” doesn’t work, but μ probably does.
> One of the key design features of UTF-8 (and its derivatives, including the Modified UTF-8 that encodes a literal null as 0xC080, in violation of the rules of strict UTF-8, in order to avoid the null being interpreted as a string terminator) is that it can be fed to a program that expects printable “8-bit” ASCII (including the extended 0x80-FF chars in the definition of “printable”) and handle it without any fumbling.
I think the problem is with proper handling of the \uXXXX encoding in JSON… which can include UTF-16-like surrogate pairs for characters outside the BMP.
NITEMS() is #define’d but unused; “make CC=clang” noticed “(int)fputs(…)”.