Someone explain the purpose of JSON to me. Like I get it's a convenient way to store object metadata, but it's also kind of a hassle to navigate at times?
I like it don't get me wrong, it just seems kinda silly if you don't know the keys for the data and have to sift through everything.
But that argument can be applied to all forms of data formats. Given a binary blob, if you don't know the format/encoding, then good luck extracting any data from it.
Given an XML document, you'd navigate the DOM tree if you didn't know which nodes to look for.
JSON, at least, is human readable as bowen points out, and also relatively easy to parse by most/all languages. It's also expressive enough that it can be self documenting in a lot of cases - a human can probably figure out what well named fields mean without needing to consult a reference manual.
XML might come close, but it's a pain to parse and generate compliant documents for.
JSON is used a lot because it's relatively easy to parse, and it's especially easy for JavaScript to parse it, since its name is literally "JavaScript Object Notation." Since it's so easy for JavaScript to use it, it's kind of become a standard for web services to return it, and a ton of serializers have been written for other languages like C# to work with it.
It's also kind of easy to read, and it's nice to have a standard instead of everyone using proprietary data formats.
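A quick sketch of that point, using an invented "store" record: the field names alone tell you most of what you need to know, and nearly every language ships a parser (in JavaScript it's one call).

```javascript
// A hypothetical record; the well-named fields are largely self-documenting.
const text = `{
  "name": "Ballard Hardware",
  "distanceKm": 1.2,
  "inventory": [
    { "sku": "HMR-16", "quantity": 3 }
  ]
}`;

// Parsing is built in; no schema or reference manual needed to get started.
const store = JSON.parse(text);
console.log(store.inventory[0].quantity); // 3
```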
admanb:
We're actually having an issue at work right now where a ton of our web services are written in WCF and don't return data in JSON... except we've been heavily moving toward JavaScript in recent months.
So now all the developers want to be using JavaScript, but we wrote all of our older APIs in a SOAP, non-RESTful way and we're all annoyed at everything all the time. And none of the managers want to kick off the project of rewriting our entire API because that will take a long time.
If we're not in a performance critical loop, I also appreciate JSON's ability to implicitly handle compatibility between versions. What's that? New field? Just add it in.
Existing parsers that don't know about the new field will simply ignore it.
Compared to a binary format where I'm constantly checking the size of the structure to make sure I've got all of it, or only some of it, flags to see what's valid, maybe tags if it's a stream of some kind - oh, what's that? Data alignment and implicit data padding is different between 32-bit and 64-bit versions because someone forgot about this? Yaaay.
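That forward-compatibility point can be shown in a few lines (the field names here are made up for illustration):

```javascript
// Version 1 of a payload...
const v1 = JSON.parse('{"id": 7, "name": "widget"}');

// ...and version 2, which added a "color" field. A consumer written
// against v1 keeps working: it simply never looks at "color".
const v2 = JSON.parse('{"id": 7, "name": "widget", "color": "red"}');

function describe(item) {
  // Written before "color" existed; unaware of any later additions.
  return `${item.id}: ${item.name}`;
}

console.log(describe(v1)); // "7: widget"
console.log(describe(v2)); // "7: widget" -- the new field is ignored
```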
I once wrote a CSS to PList converter just so I wouldn't have to write a ton of XML by hand and could just make a TextMate-style theme with a simple CSS file.
Why CSS isn't an actual theme format is a mystery. TextMate et al. use the same style of "class" selectors to pick the scope a rule applies to.
JSON needs comments so badly, though -- not for interchange between apps, that's fine, but for config files written in json, it is a big pain that we can't annotate them for later.
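For instance, strict JSON rejects comments at parse time, and the usual (ugly) workaround in config files is a field the application agrees to ignore. A small sketch:

```javascript
// Strict JSON has no comment syntax, so this throws a SyntaxError:
let failed = false;
try {
  JSON.parse('{ "retries": 3 /* why 3? see ticket 1234 */ }');
} catch (e) {
  failed = true;
}
console.log(failed); // true

// Common workaround: smuggle the annotation in as a field that the
// application promises never to read.
const config = JSON.parse('{ "_comment": "why 3? see ticket 1234", "retries": 3 }');
console.log(config.retries); // 3
```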
But of course the serialisation libraries for other languages don't actually work exactly the same way as JavaScript's own, so you end up writing a bunch of glue code to handle your particular case anyway.
If you're going to create a logging/progress update mechanism, it would be a really good idea to have a queue or something, because progress updates can arrive rather fast. Bonus marks if, because you're worried about memory consumption, you put a reasonable max size on the queue and provide feedback on when messages are dropped.
Don't just say "Only one update can be processed at once, and I won't tell you when it's finished being processed/added to the log", and then "A new progress update will overwrite the update that's being processed, and there'll be no feedback indicating that this has occurred."
Edit: oh my god. If it's going to be accessed by multiple threads, make it thread safe please. Oh god.
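A minimal sketch of what's being asked for above: a bounded queue that, instead of silently overwriting in-flight updates, counts drops and tells the caller when an update didn't make it in. (All names here are invented for illustration; JavaScript is single-threaded, so in a multithreaded language you'd also guard this with a lock.)

```javascript
class ProgressQueue {
  constructor(maxSize) {
    this.maxSize = maxSize; // cap memory consumption
    this.items = [];
    this.dropped = 0;       // feedback: how many updates were lost
  }
  push(update) {
    if (this.items.length >= this.maxSize) {
      this.dropped += 1;
      return false; // the caller knows this update was dropped
    }
    this.items.push(update);
    return true;
  }
  pop() {
    return this.items.shift();
  }
}

const q = new ProgressQueue(2);
q.push("10%");
q.push("20%");
q.push("30%");             // queue is full, so this one is dropped
console.log(q.dropped);    // 1
console.log(q.pop());      // "10%"
```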
Yeah jesus I got home and decided to do some more digging
I can't imagine growing up in an age where data wasn't in JSON now
JSON is just so...beautiful...
gavindel:
Little does this thread know that I use my unlimited insider access to the big MS to selectively fail your installs -- especially the ones that were supposed to be easy overnight batch jobs! MWU HA HA HA HA.
JSON is just so...beautiful...
It CAN be. Very often it's about as readable as that XML up there a few posts, because "we can just put EVERYTHING in!!!" is a thing many developers say to themselves. (Not that I think that XML is all that bad...)
My only problem with JSON is that it's strict about trailing commas in lists.
other than that, I'd be happy writing REST services that return JSON all day erry day
"Hey I need the three closest stores and their inventory"
"Okay here"
"Great, I'm going to take these and-"
"Buddy, I don't give a fuck what you're doing with that data."
admanb:
Simple XML is fine, it's garbage like bowen linked above that's a problem. Also, while I'm at it: fuuuuuuuck XML namespaces. Either I'm a goddamn moron (certainly an option) or namespaces are the biggest pain in the dick when it comes to dealing with XML in C#.
I've not yet really gotten to play much with JSON. It hasn't come up at work, and I've not done any personal projects with it yet.
Does C# have a native JSON parser yet? (to Google I go)
"Oh god, why does this work" is somehow more frustrating than "Oh god, why doesn't this work".
Kakodaimonos:
I'll explain what's going on. @ecco the dolphin can correct me if I'm not quite right in my explanation. These are all coming out of a packed binary structure (where everything is just a sequence of bytes). So we pull the bytes out into a struct, and then we need to pull the actual data out. (This data is also in network byte ordering, and since we're on a Windows OS, we need to reverse the byte ordering.)
I'll start with an unsigned short.
So it'd be something like this:
public ushort count { get { return (ushort)((count2 << 8) | count1); } }
A short is 2 bytes long. So let's say we get two byte fields in the structure, count2 and count1, that together represent an unsigned short.
So a ushort is 2 bytes long. Or 16 bits long. And each byte is 8 bits long.
So what we can do is take the high-order byte, count2, push it into an unsigned short, and move it up to the most significant 8 bits. That's what the left shift by 8 is doing: we're taking a byte (8 bits long), putting it into a ushort, and moving it left by 8 bits. So if count2 is 11111111, then count is now 11111111 00000000.
So now we need the low byte. We take the second byte, count1, and add it in using the bitwise OR '|'. So if count1 is 00110011, then count is now 11111111 00110011.
The ulong code is just the same thing with a 64-bit value and 8 bytes. And the issue was that if we didn't cast the intermediate results to an unsigned 64-bit value before shifting, we'd overflow and end up with incorrect values.
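The original is C#, but the same shift-and-OR can be sketched in JavaScript, where count2 is the high-order byte and count1 the low-order byte of a big-endian (network order) unsigned short:

```javascript
// Reassemble a 16-bit unsigned value from its two big-endian bytes.
function readUshort(count2, count1) {
  // Shift the high byte up 8 bits, then OR in the low byte.
  // ">>> 0" keeps the result an unsigned 32-bit number.
  return ((count2 << 8) | count1) >>> 0;
}

console.log(readUshort(0b11111111, 0b00110011)); // 65331, i.e. 0xFF33
```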
Orca:
That kind of bit shifting and masking should all be hidden behind functions that generalize those actions for you. Ideally templated, so all the masks, shifts, etc. are generated at compile time. It should be as simple as something akin to x = read<field>().
Constructing all of it by hand is a recipe for the sorts of problems you just ran into.
Source: embedded programmer, 1/2 of what I do is reading/writing into packed data fields, and I never have to shift a damn thing myself unless I need to expand the read/write functions for a new capability.
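As one concrete version of that "x = read<field>()" idea: in JavaScript, DataView already knows how to pull big-endian (network order) fields out of a packed byte buffer, so nothing is shifted by hand.

```javascript
// A packed buffer: a big-endian ushort followed by a big-endian uint.
const bytes = new Uint8Array([0xff, 0x33, 0x00, 0x00, 0x01, 0x00]);
const view = new DataView(bytes.buffer);

// The second argument (littleEndian) is false, i.e. read as big-endian.
console.log(view.getUint16(0, false)); // 65331 -- ushort at offset 0
console.log(view.getUint32(2, false)); // 256   -- uint at offset 2
```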
Kakodaimonos:
That was the function.
The annoying ones are the feeds that decide to do a ulong with only 6 bytes.
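A sketch of handling that awkward 6-byte case (function name invented): 48 bits still fit losslessly in a JavaScript number (below 2^53), so you can widen byte by byte; multiplying by 256 instead of shifting sidesteps the 32-bit limit of JS bitwise operators.

```javascript
// Read a 6-byte big-endian unsigned integer starting at `offset`.
function readUint48(bytes, offset = 0) {
  let value = 0;
  for (let i = 0; i < 6; i++) {
    value = value * 256 + bytes[offset + i]; // big-endian accumulate
  }
  return value;
}

console.log(readUint48([0x00, 0x00, 0x00, 0x00, 0x01, 0x00])); // 256
```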
I really enjoy JSON. Especially in config files.
Naaaaah. Yaml is by far the best for configuration and it's great that most config loading tools nowadays support it.
I'd say it's more relevant than ever, if only because Ansible provides by far the best provisioning experience.
Edit 2: gdsghvcszgjuhrfvhhjhfdcbjjggdrthhvcxdhjbhh fdgjhb gyffbjiddf argghhhhhhhh
Ok yeah I definitely get it now
Thanks! Was just a question for my own sake
About as ugly as the XSLT XML needed to convert a CCD's XML to a human-readable format. Thanks, MU.
oh my god
you too?
what XSLT are you guys using, custom written?
especially since people can't decide between attributes and child elements, so they do both in the same god damned document
I wasn't actually here when it was originally written, but I think it started with baseline CCDA and CCR XSLTs, modified as needed, since apparently every EHR has to have their own dialect. They're about 2000 lines each, but there are longer ones out there! Like https://github.com/lantanagroup/stylesheets/blob/master/Stylesheets/CDA/dist/cda-combined-web.xsl
If people are given the choice between two things, they will inevitably decide "por que no los dos" ("why not both") and make the worst of all worlds.
XML has indentation and that's about it. The extra noise from closing tags, and the fact that lists don't stand out, makes it much harder to read imo.
And since really only developers look at this shit...
You actually reminded me that JSON's strictness about double quotes vs. single quotes around strings cost me a solid day of work one time.
At least it's still the setup phase so it's not like I'm expected to produce results yet.
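That quoting strictness is easy to reproduce: the spec only allows double-quoted strings, so single quotes fail at parse time.

```javascript
// Single quotes look like JSON but aren't valid JSON.
let parsed = null;
try {
  parsed = JSON.parse("{'name': 'store'}");
} catch (e) {
  parsed = "SyntaxError";
}
console.log(parsed); // "SyntaxError"

// The double-quoted version parses fine.
console.log(JSON.parse('{"name": "store"}').name); // "store"
```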
But the best XML can be utterly gorgeous. Attributes can bring a succinctness to a document that JSON can't match.
And, obviously, when it comes to genuine documents rather than just object serialisation, XML is the clear winner.
Does C# have a native JSON parser yet? (to Google I go)
[ed] Apparently .NET 4.5 added a full-on System.Json namespace.
[ed Mk 2] Apparently that's for Silverlight. Humbug.
Cause one of these is right, the other, not so much.
Edit: Fuck me, look what your code has done!
Forget the implicit cast, that code is flat out wrong! Or it should be!