I just discovered a pattern in my project's codebase that I don't know how to feel about:
foreach (var item in itemList ?? new List<Item>())
It's certainly a textually compact way of handling a possible null reference, but going out of the way to create an empty list just so that you can not iterate through it feels wrong.
The new empty list only gets created if itemList is nullish, right? I would hope that happens rarely enough that it isn't a performance consideration, but I don't know why the pattern would be doing that instead of just making sure you're not throwing nulls around by accident.
If itemList is referenced anywhere else following that code, then it's one null check and done. If you've got an empty list, then iterating over it doesn't do anything (as opposed to trying to iterate over null), and getting an item from the empty list returns nothing (as opposed to trying to perform operations on null).
It's something I prefer to do - instead of checking for null values everywhere you just do it once and then you know everything further down just works.
Performance-wise, the compiler is way smarter than you and an empty struct takes up a negligible amount of space, so it's not even worth considering, and it produces nicely readable code.
In this case, I think the performance concern is less about the iterator itself and more about the allocation of a new list instance. That puts collection pressure on the garbage collector, which might slow you down if you're doing it on a tight loop.
A better way to do the same thing would be (I'm assuming this is C#, since it looks like it):
foreach (var item in itemList ?? Enumerable.Empty<Item>())
... since that's written to use a static instance, so every time you ask for an empty set of type T, you get the same empty set to enumerate over.
Java also has Collections.emptyList() for similar functionality.
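To make the Java side of that concrete, here's a minimal sketch of the same pattern (the class and method names are made up for illustration):

```java
import java.util.Collections;
import java.util.List;

public class EmptyListDemo {
    // Iterate a possibly-null list without a separate null check.
    // Collections.emptyList() returns one shared immutable instance,
    // so the null branch allocates nothing.
    static int countItems(List<String> items) {
        int n = 0;
        for (String item : (items != null ? items : Collections.<String>emptyList())) {
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(countItems(null));              // 0, no NullPointerException
        System.out.println(countItems(List.of("a", "b"))); // 2
    }
}
```

Because the empty list is a shared singleton, this has the same no-allocation property as Enumerable.Empty&lt;T&gt;() in C#, rather than the new-list-per-call cost of the original pattern.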
In this case the code is very sparsely invoked, and the list is never used again. It's used for running a filter on the output of another filtered method as part of a single operation inside a client library, so the performance is almost irrelevant. I'm just vaguely bothered by the way that it's done.
I may just switch it to Enumerable.Empty(), as per your suggestion. It probably won't even gain anything since it's quite possible that any given instance will only actually invoke it once, but it'll make me feel better.
edit: I just realized that it's set up to parallel this line in a similar method:
foreach (var item2 in item2Collection ?? new Item2Collection())
I suppose I could Enumerable.Empty<Item2>() that as well, since only the enumerability matters here. Or I could just not touch any of it and continue to be very slightly annoyed until I completely forget about it in another day or so.
I’ve been shipping a React Native app as either sole developer or team lead since the Year of Our Lord the Two-thousand Seventeenth, and I don’t know the answer to 1), but I’m pretty sure the answer to 2) is, for some reason, to make the gesture where you hold one hand out parallel to the floor and waggle it back and forth along the axis of your arm while saying “eeeeeeeeeeeh?”
In re: the empty list discussion, my initial reaction without any context is that you shouldn’t silently handle a null reference like that unless you know for goddamned sure that nobody else can ever use “itemList”, like maybe you’re working on the very edge of your system and getting maybe-something from a network call, etc. Of course, once I learned the context my opinion changed to “just chain calls to List.Where and let the runtime care about your maybe-something” but I don’t know C#/.NET hardly at all and maybe Linq isn’t something you get to use where you live.
ive long been skeptical about everything concerning react and react native
but i finally got around to getting an environment set up and i have a couple of questions....
1) what the fuck?
2) what.... ....... the fuck???? ?
Your confusion confuses me. React isn't generally that confusing.
If you're not used to a functional programming style, it can be (it was for me). I actually found Redux in combination with React to be far more confusing.
thatassemblyguy (Janitor of Technical Debt)
edited December 2022
Huh, JetBrains is end-of-life'ing AppCode. Bummer. Was hoping to keep a consistent development ecosystem with them.
e: Ahh, I see. From the comments in the blog post, they want the iOS developers to switch to their universal Kotlin mobile app solution (like Microsoft's Xamarin, but with Kotlin instead of C#).
JSX is a violation of natural law
it stirs in me the instincts of fear that one might have at the sight of a poisonous snake in their yard, or a bug in their soup
there's something primal going on in my mind when i see unescaped html tags assigned to javascript variables
it doesn't matter that it works, it's *wrong*, and i cannot interact with it in much the same way a cat might be violently reluctant to enter a shallow bathtub
I love Ruby so much. Application distribution has always been one of my biggest problems with Ruby for non-server uses. But I discovered this weekend that you can use the embedded interpreter mRuby as a language for the TIC-80 fantasy console (in addition to Lua, JS, Wren, WASM and other fun embedded languages). Gonna make some tiny 8-bit games over Christmas break.
Good reactive code (regardless of framework) isn't all that awful. (Though I don't prefer it.)
It is (IMO) spectacularly easy to write bad reactive code, though. Side effects everywhere, load-bearing or otherwise, and a near-impossibility of looking at an object or file and understanding everything that's interacting with it without a lot of digging. One could say "well just don't write bad code", but I'm firmly on the side of using tools that make it as hard as possible for you to fuck up, and reactive anything makes it easier in my experience; possibly less of an issue if you're doing pure web work, but I'm still on Team Seatbelt.
At least the tooling for React is pretty good...at least on the web side.
Ear3nd1l (Eärendil the Mariner, father of Elrond)
I'm fighting with stupid reactive variables right now with React hooks. Ugh
AkimboEG (Mr. Fancypants)
During my brief experience with React, I was unreasonably upset by the lack of versioned documentation. I mean, it existed at some point, but all the links were broken for a good few months before I submitted the bug report, and remained broken for a few months after when I eventually left the particular project. They may well still be, for all I know.
Absurdly high user numbers, zero documentation. Just baffling.
It gets so much worse if you have to use it in anger long enough to survive a major version bump. Ever found a bug and submitted a GitHub issue, then complied with all the aggressively-cheerful linter bots that auto-reject any issue that doesn’t have all the telemetry the devs have decided they need, and a minimal reproduction case, and so on, only to be told “we think this is fixed in <new version that has breaking changes> of <obscure transitive dependency>, please update (your entire project) and try it again”?
I have.
eyelid twitches
Kakodaimonos (Code fondler)
I am both horrified and intrigued to learn that you can recursively call CONCAT in an Excel sheet to get around the maximum number of arguments limitation of CONCAT.
Wanna see some shit? How about Ph.D and Infotron founder Felienne Hermans giving a whole Strange Loop talk about how Excel is Turing complete? (Link is to recording of a 2014 conference talk) https://youtu.be/0CKru5d4GPk
I was lucky enough to see this in person and I think the talk but also the conference probably broke me in subtle and enduring ways; within 18 months I had changed jobs and was writing Clojure full time.
Someone here years ago posted a similar video but it was about PowerPoint.
I may have come up with the dumbest Postgres hack ever. You may need to close the thread after this to stop the forbidden knowledge getting out.
So. Postgres has the notion of parallel queries. If you do a big old select statement on a big old table with a big old join it will split it across worker threads and then coalesce the results. This is cool and good.
However. Postgres will not do any parallel operations if there is a delete or update involved in the query. You are shit out of luck and get single-threaded execution.
This sucks (but there are very obvious reasons why, in the general case, it would be very dicey to allow the query to do parallel shit, so I am not too cross). In my case I have two data sets: a table A of 4 billion rows, and a table B of 14 million rows that is a subset of the 4 billion row table. I want to delete from A everything in B - in my specific case it would be perfectly safe to do this operation in parallel, but I can't get that, so I am stuck with serial access from disk. This sucks.
The slow part is reading table A data from disk into memory (and not the write to WAL, as I first thought). If all the data is already in the shared_buffer then it can delete a million rows in a minute (as opposed to 20-30 minutes if it is doing it cold). So what if... what if before running the DELETE (single-threaded, slow, sad) I first run a SELECT (parallel, fast, joyous) to get the data into the shared_buffer?
It totally works. This actually works. Per million row deletion time falls to 6 minutes. Hahahahahahaha!
(This is a write master database that only does isolated bulk operations so I don't need to worry about other transactions wiping out my cached data)
I wouldn't be surprised if that was somewhat designed in as you should never be doing deletes cold without confirming the statement with a select first.
1. A lot of Google source code I've seen uses single character variable names. Why? These suck and don't tell you anything about what they do, I'm surprised they allow it.
2. I continue to see a lot of documentation (in general) that, when referring to a URL, does NOT use "example.com" - this includes big tech companies (such as Google). The whole point of example.com is to be used in documentation, since it cannot be registered. The other random stuff that people come up with could be registered by a malicious actor, and your documentation would be leading people unwittingly to said site.
Are we talking about, like, loop variables? Source code delivered to the browser that has gone through a minifier preprocessor? Something else?
Not minified and not a loop variable, just standard code, usually Golang source for Kubernetes or related.
There's your answer. The go people think the ideal identifier should be no longer than necessary. No I said necessary. Remove another character. Excellent
This is the sort of mentality that makes me look to hammers as a corrective measure.
I find myself confused with the term "stream". Such as "streaming data", "data stream", "stream over data", "stream from a file/database", or any such phrase including "stream" in a computer context. Maybe some here can help clear some of my confusion (paging @Phyphor ).
My confusion is I'm not sure what's so different about it compared to just having a list that holds data or some data structure you can loop over. What's the difference between looping over a list and reading from a stream? Is the idea that a stream has no known size so you read pieces at a time until you get null or equivalent? However, you can also "stream" over a list with a known size or a file with a known size (such as in Java and Go and probably others). Is reading a file say 4KB at a time considered "streaming"? Is that equivalent to "reading the file"? Or is loading the entire file into a single data structure different than "streaming" it? Is the idea with a array/list that you can immediately access a specific piece of data via index but in a stream you can't since it may not have a known size?
In Java 8, Stream was added, allowing you to write some functional-programming-style code. It lets you apply functions to each element in a Stream (a Stream can be made from an existing data structure, like a List), but how's that different from looping over that same List with a standard for loop and just calling functions on each element?
I think I'm getting caught on definition. A list holds some data of a specific size. A "stream" then is...what?
I don't really know (very much not a go person), but I'm going to guess that it's for processing data that either doesn't fit in memory, or would require too much memory, or is even just a sequence of independent elements
If you don't need to seek around at random then you can process elements as they arrive. That you can turn standard containers into them is fine, it just lets you write it once as a more generic thing and feed in-memory data through it
So in Java specifically, “streams” can be a reference to the Stream API, but across languages the general concept is also called “streams” or sometimes “a data pipeline”. There's actually a really nice summary on that docs page, which, as a quarter-century user of the JVM and consumer of Javadocs, surprised no one more than me:
Collections and streams, while bearing some superficial similarities, have different goals. Collections are primarily concerned with the efficient management of, and access to, their elements. By contrast, streams do not provide a means to directly access or manipulate their elements, and are instead concerned with declaratively describing their source and the computational operations which will be performed in aggregate on that source.
“Computational operations” here is something like “filter” or “reduce”, and usually there are both sequential and parallel versions of these available, and the library code for handling streams makes guarantees about safe parallelism, which can be nice to not have to care about. As Phyphor points out, if you can express all the work you want to do on a data structure in terms of such operations, or write your own higher-order function which fulfills the same contract and does something specific to your use case, then treating even something you know to be a list held in working memory today as a stream means that, tomorrow, when your frenemy/colleague turns it into a blocking queue coming from The Cloud, your code will not need to be changed.
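To make the loop-versus-pipeline comparison concrete, here's a minimal Java sketch of the same computation both ways (the names are invented for the example):

```java
import java.util.List;

public class StreamDemo {
    // Same computation two ways: sum of the squares of the even numbers.

    // Imperative version: the loop spells out *how* to walk the list.
    static int loopVersion(List<Integer> nums) {
        int total = 0;
        for (int n : nums) {
            if (n % 2 == 0) total += n * n;
        }
        return total;
    }

    // Stream version: filter/map/reduce declare *what* to compute.
    // Swapping nums.stream() for nums.parallelStream() would
    // parallelize it with no other changes.
    static int streamVersion(List<Integer> nums) {
        return nums.stream()
                   .filter(n -> n % 2 == 0)
                   .map(n -> n * n)
                   .reduce(0, Integer::sum);
    }

    public static void main(String[] args) {
        List<Integer> nums = List.of(1, 2, 3, 4, 5);
        System.out.println(loopVersion(nums));   // 20
        System.out.println(streamVersion(nums)); // 20
    }
}
```

The results are identical here; the difference is that the stream version describes the aggregate operation, so the runtime is free to change how it executes it.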
Yeah, like you said, one way of thinking about this is that you most likely only get access to a chunk of the data at a time, and don't necessarily know how much there is. If you started with a list and turned it into a stream, then you did know how much data there was when you started, but streams-in-general might not.
Compare, say, downloading an MP4 file to play, where I have the whole thing at once and can seek back and forth -vs- streaming video, where I just point my browser at the XYZ tv website, and it sends me packets of video, which get displayed, and that goes on forever, and if I want to watch what was happening an hour ago I probably can't just go back there, I get to watch what's being sent out now.
Or in the case of youtube, even though the size of the video is already known, and I can skip back and forward, it still doesn't send me the whole file when I want to watch it; it streams the file to me in chunks.
Something to think about that might make it less awkward, because "streaming" data might fit into your process memory just fine: streams are a way to work with data that doesn't need to fit into your local memory.
It might actually be small amounts of data but it's the method, not the size, that makes it streaming.
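As a minimal sketch of that idea in Java: the consumer below only ever holds one small buffer, no matter how long the stream turns out to be (the buffer size and names are arbitrary):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ChunkDemo {
    // Process an InputStream 4 bytes at a time. Memory use is just the
    // buffer, regardless of whether the source is a tiny in-memory array,
    // a huge file, or a socket that never ends.
    static int countBytes(InputStream in) {
        byte[] buf = new byte[4];
        int total = 0;
        int read;
        try {
            while ((read = in.read(buf)) != -1) {
                total += read; // handle each chunk as it arrives
            }
        } catch (IOException e) {
            throw new RuntimeException(e); // demo: surface I/O errors unchecked
        }
        return total;
    }

    public static void main(String[] args) {
        byte[] data = "hello streaming world".getBytes();
        System.out.println(countBytes(new ByteArrayInputStream(data))); // 21
    }
}
```

Here the source is an in-memory array, so "streaming" buys nothing; the point is that the exact same consumer works unchanged if the InputStream is a multi-gigabyte file or a network connection.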
I think that's just called "currying".
then a screenshot of the CSV file opened up in notepad with the escape quotes circled in red
email also specified that the file "worked fine in excel though, but please remove these hidden characters"....
this is..... this is podracing......
Ran into this yesterday:
https://github.com/golang/go/blob/master/src/strings/reader.go#L39
Who allowed "prevRune"? Disappointing.