For those who don't know, forums.penny-arcade.com will be closing soon. However, we're doing the same kind of stuff over at coin-return.org with (almost) all the same faces! Please do feel welcome to
join us.
[Programming] Page overflow to new thread
Posts
I've got an array where each element has a count and a score. I want to index the array using some other array J of indices, then increment each element's count by 1 and score by S[n], where S is an array of the same length as J.
In code:
I was hoping Numpy would recognize that a) the value 1 can be broadcast to match the shape of a[idx], and b) the shape of scores is the same as the shape of a[idx]. But instead it raises an exception:
I can just increment the two fields separately:
But I feel like that's gotta have worse cache behaviour, hitting the indexed elements twice. How can I access each element just once?
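The original code blocks didn't survive, so here's a hypothetical reconstruction of the setup — the array names and the field names `count` and `score` are my guesses from the description:

```python
import numpy as np

# Hypothetical reconstruction: a structured array with a count and a
# score per element; names and shapes are guesses based on the post.
a = np.zeros(5, dtype=[('count', np.int64), ('score', np.float64)])
idx = np.array([0, 2, 4])           # the index array J
scores = np.array([1.5, 2.0, 0.5])  # S, same length as J

# The one-shot version the poster hoped for raises an exception,
# because the (1, scores) tuple can't be broadcast against the
# structured view in a single add:
#     a[idx] += (1, scores)   # raises an exception

# The working per-field version, which indexes the elements twice:
a['count'][idx] += 1
a['score'][idx] += scores

print(a['count'])   # [1 0 1 0 1]
print(a['score'])   # [1.5 0.  2.  0.  0.5]
```

One caveat worth knowing either way: if `idx` contains repeated indices, fancy-indexed `+=` applies each index only once; `np.add.at(a['count'], idx, 1)` is the variant that accumulates duplicates correctly.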
I saw in the Lego thread they make a raspberry pi that integrates with technic stuff. You know, if you're into that sort of thing.
(I used some of it the other day to simulate stat arrays from the new Paizo playtest dice rules)
I don't know Numpy, but I think the cache performance of the second example will be the same as the first example (assuming there's a way to do what you want), because the two will probably compile to the same sequence of instructions that are executed by the CPU.
The cache behavior depends more on the location of the elements to which you're adding: if they're in a contiguous block of memory or all in the same cache line (or contiguous cache lines), then the first time you read one of the elements they'll all be loaded into the L1 cache (prefetching would pull in adjacent cache lines). If they're not next to each other in memory, then you'll have a cache miss for each element that you read no matter which code design you use (unless you or Numpy manually insert prefetch instructions into the code before you perform the addition, which would cause the elements to be fetched from RAM concurrently with you executing the first arithmetic operation).
By "access each element just once" I'm guessing you mean you'd like to use a single CPU instruction to perform all the additions for the elements at '[idx]'? Put another way: one instruction reads all the elements at '[idx]', a second instruction adds '(1, scores)', and then a third writes the block of elements back to memory. You'll only be able to get something like that if Numpy uses SIMD instructions behind the scenes. If it doesn't, then it's going to individually read each number from memory, perform the addition, then write the number back to memory. And again, this case depends entirely on how 'a' is laid out in memory.
(SIMD is Single Instruction, Multiple Data; it lets you do things like add two vectors of the same length together in a single CPU op. There's also MIMD (Multiple Instruction, Multiple Data), but I don't know if x86 has those).
TLDR: Cache performance will almost certainly be the same for both versions of your code, so don't worry about it and just focus on which version is most readable and easiest to get working.
Good old software-engineer-to-woodworker pipeline still putting up numbers. “Woodworker” in this case being a term that can expand to mean fly fishing, or cooking, or music making, or basically any activity where the things, once done, stay accomplished, and nothing involved can ever need an “update” to “dependencies”. I, myself, have taken up sailing on a little boat from 1985 that has one car battery to power the lights, and a depth finder/knot meter with an LCD screen. I can, to my mild displeasure, still get cell service out on the lake but I try hard not to remember that I have it.
Your Ad Here! Reasonable Rates!
Everything requires maintenance. Don't boats especially need a bit of upkeep and cleaning?
Oh certainly they do, but to stretch the analogy to (or past!) its breaking point, there is a world of difference between “this line looks a little frayed, I should swap it end for end and check these blocks and guides, make sure nothing is dragging on it” and “due to decisions made elsewhere to serve the needs of others, only boats with YELLOW sails are now able to engage the wind. Please install or fabricate YELLOW sails to continue using your boat”, which is the not-woodworking nature of software development.
I don't want to harp on this but "this very specific part on my boat broke and to identify it we have to stare at a 30-year-old rusted over serial number and then figure out if that manufacturer (a) still exists and (b) still makes the part or an acceptable replacement, because if we can't I have to tear out the entire plumbing system" is specifically a problem that boat owners have to deal with.
source: partner worked at a marine supply store for 10ish years.
But on my boat, there isn’t even an inboard diesel. There’s a forty-year-old Honda 9.9HP outboard for which I have the shop manual, and a 12V breaker panel with six circuits labeled things like “running lights” and “depth sounder”. I do a fair amount of maintenance and even tinkering to improve things, but the very edge of the “inscrutable bullshit” envelope for systems under my responsibility is servicing a manual sheet winch or maybe giving a carburetor a bath in some Gum-Out. I do think we have reached the end of where the analogy serves the topic, though. Tonight I will dream an uncomfortable dream of plumbing maintenance in tight fiberglass spaces.
It's the requirement.
Why is it the requirement?
The PM set the requirement.
Which PM?
He's not with the company anymore.
So why are we doing this?
Because it's the requirement.
I confess I've had a few instances of "how many senior engineers does it take to unfuck this weird Git situation we ended up in somehow?", yes.
1. rename existing repo to "repo.backup".
2. make new clone of repo
3. manually copy-paste changes over between files
I'm sure there's better ways, but I _trust_ this approach to not mess me up in confusing ways, whereas every time I've tried to do things properly I just make it worse.
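For what it's worth, those three steps sketch out roughly like this in shell. Everything here — the temp dir, the `remote.git` stand-in, the file names — is invented for the demo, not a real workflow:

```shell
#!/bin/sh
set -e
# Demo setup: a fake "remote" plus a working clone that has
# uncommitted changes. All names below are placeholders.
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare remote.git
git clone -q remote.git repo
(cd repo \
  && git -c user.name=demo -c user.email=demo@example.com \
       commit -q --allow-empty -m "initial" \
  && git push -q origin HEAD)
echo "uncommitted work" > repo/notes.txt

# 1. rename existing repo to "repo.backup"
mv repo repo.backup
# 2. make new clone of repo
git clone -q remote.git repo
# 3. manually copy changes over between files
cp repo.backup/notes.txt repo/notes.txt
```

Crude, but the failure mode is "I copied a file wrong", not "I rewrote history I didn't understand".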
I have "senior" somewhere randomly in my title... this is still my go-to "unfuck git" approach.
Long-as-hell branches with multiple mainline merges (or, even worse, spider merges into them) just suck.
Like everything else: KISS.
If you want a clean commit history, make a pretty commit message when you squash everything into a single merge commit.
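A minimal sketch of that squash workflow, using a throwaway demo repo — the branch name, file, and commit messages are all made up:

```shell
#!/bin/sh
set -e
# Throwaway demo repo; every name below is invented for illustration.
# (git >= 2.28 for `init -b`.)
tmp=$(mktemp -d); cd "$tmp"
git init -q -b main repo && cd repo
gitc() { git -c user.name=demo -c user.email=demo@example.com "$@"; }
echo base > app.txt && git add app.txt && gitc commit -qm "initial"

# A messy feature branch full of "wip" / "forgot a file" commits:
git checkout -qb feature
echo step1 >> app.txt && gitc commit -qam "wip"
echo step2 >> app.txt && gitc commit -qam "forgot a file"

# Squash the whole branch into one commit on main with a
# pretty message; the intermediate commits never hit mainline:
git checkout -q main
git merge --squash -q feature
gitc commit -qm "Add app processing steps (squashed from feature)"
```

`merge --squash` stages the combined diff without committing, so the one commit message you write is the only history mainline ever sees.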
The fact that rebase only does what it's supposed to during a specific phase of a branch's lifecycle, and ruins things for everybody when used after that point, yet is in no way documented as such nor constrained by any sort of controls to prevent you from making all your colleagues' lives worse with a ton of force pushes and history rewriting, is peak git IMO.
a) you shouldn't have long-lived feature branches anyway, so virtually nothing should be hanging off extant branches. Release branches can get cherry-picked onto
b) you should really be using something like Gerrit rather than force pushing, and you can just run it in rebase/cherry-pick mode, which does all of this automatically
c) there's little value in all the intermediate "quick fix" "forgot a file" commits that you lose in amend/rebase workflows
“Oh cool! What does your architecture do when the database goes down?”
Every time, I was hoping to meet someone who was doing cool things with distributed systems. Without fail, I got an uncomfortable silence and a new conversation partner instead.
The difference between "we're migrating to a microservices architecture because it solves a clear problem we have" and "we're migrating to a microservices architecture because our CEO/CTO/tech lead* read a white paper over the weekend."
*I know we prefer to roast non-technical management but never underestimate the ability of devs to be self-destructive