This is kind of an oddball question. I'm dealing with a pretty simple Historian SQL table that potentially has many, many records, and it doesn't seem to be optimized well for searching against the timestamp column.
Is there a more efficient way to search for the first record that occurred before a specific timestamp than doing something like
SELECT TOP 1 timestamp, value FROM tbl1
WHERE tagname = 'tag1' and timestamp < '2017-07-24 12:00:00'
ORDER BY timestamp DESC
I've tried playing around with max(timestamp) in a different version of the query above, and it doesn't seem to process timestamps correctly, constantly returning a record from February instead of the one it should find from three days ago. I suppose I could constrain the start and end times it searches over, but the issue is that the last record before the search timestamp could potentially be many months in the past. Changing the ordering adds some overhead as well, and I am no DBA or SQL ace, so I wouldn't mind some help on this if anybody has previously tackled a similar issue.
edit:
Ended up solving it by leveraging some SET commands in historian SQL, changing it to search backwards from the starting timestamp.
• Significant performance improvements on Windows, massively reducing initial load time and improving UI response for the vast majority of users during testing.
Their definition of "significant" differs greatly from mine.
Also, it locked up the 2nd time I opened it this morning. (old man yelling @ clouds) See, this is what happens when you give OSS devs a GUI to build.
Windows has been failing to boot on my home machine since the last update. With Microsoft's new update policy and how buggy Windows 10 seems to be, I never know if my hardware is failing or if Windows 10 is just fucking with me.
Thanks, Microsoft!
Edit: By failing to boot, I mean sometimes the computer will start and everything is working fine, but my display will just fail to turn on. Everything else appears normal and it POSTs fine.
Hey I think I have this issue, too!
Like it will boot up and the screen will be blank. Usually works fine on reboot.
I either have a system breaking bug of catastrophic proportions...or a false positive test failure. Which one? Why not both? :bzz:
You define a layout matching the string you're trying to parse a time/date from. You need to use those exact values for day/hour/year/minute etc, because that's what Go looks at to map values in the string against.
The mnemonic is that the base string is " 01/02 03:04:05PM '06 -0700", but honestly I just get more confused by "15:04" than "H:m".
wondering if this is answerable by the resident C guys, even though it's ObjC I imagine the principle is the same
do I incur any performance overhead by including extra headers in a given class?
I'm getting to that point where I have some files which have dozens of imports and I want to clean it up with some consolidated import header files, but I don't know if that could potentially hurt my app by increasing memory overhead somehow
In straight c I would think it would only have performance implications at compile time, but c doesn't have classes so I imagine it's different for objc.
Compiler performance, sure, though the parsers tend to be extremely efficient these days; files routinely pull in megabytes of headers. Runtime performance, no, unless ObjC is weird or the linker is terrible enough not to prune unnecessary imports.
yeah I doubt apple is pouring a ton of time into compiler optimizations, especially with swift around now
I guess it depends on what you mean by "compiler optimizations", but since they're one of the primary contributors to clang/LLVM (one of their ex-guys founded the project), they do a lot of work on C language stuff. In recent years, they've also been adding language features to Obj-C in order to make it play nicer with Swift. They also use C++ heavily internally. In fact, the Obj-C runtime was written in C++ the last time I checked (it's open source).
Both iOS and macOS are effectively BSD distros, so as nice as Swift is, Apple's not going to be skimping on C and its kin any time soon.
Why is my build not triggering when I change the source code, I ask myself? It says "enabled", right?
oh.
thanks, TFS.
(edit: no, I am dumb, it actually _does_ trigger when I change the source code, I'd just forgotten to include another path; if I want to disable it then I unslide that slider. It is still confusing, just in the other direction)
I've been learning Javascript/Typescript lately, and there are some things about it that are driving me nuts when I look at existing code. The most recent one is defining a constant class variable and then immediately changing the values of its members. I assume that it's a hidden pointer thing and that all it's doing is preventing the pointer as a whole from being reassigned to something else, but it's still giving me a headache.
Yes, that is exactly what it is doing. The reference is const, but the contents of the reference are not. If you really want the behavior you're expecting, you can call Object.freeze on the object, which will prevent any properties from being added/removed/changed.
doing my first load test on a server identical to my production environment
load in a big batch of data... mostly goes fine... start visiting some pages, and I notice that the requests aren't... failing... or timing out... it just seems to stop serving the page abruptly at random points and random times
odd
this is my first time using django, nginx, gunicorn, postgresql, etc... so I jog through some theories about what might cause this behavior to begin my search
nothing super obvious comes up in about 30 minutes of digging through config files (i had first suspected maybe a default setting in a part of the server stack was rate/size limited somehow)
and then the most obvious answer dawns on me...
> top
... Mem: ... 23M Free ...
ah yes
that... explains everything
tomorrow, the great hunt to figure out where all my memory went
• Significant performance improvements on Windows, massively reducing initial load time and improving UI response for the vast majority of users during testing.
Their definition of "significant" differs greatly from mine.
Also, it locked up the 2nd time I opened it this morning. (old man yelling @ clouds) See, this is what happens when you give OSS devs a GUI to build.
PgAdmin 4 has got to be the worst UI I have ever used, bar none. I don't understand how you can release something so obviously broken, and regressed from previous versions, with a straight face.
i continue to be mesmerized by FPGA and wish i had a valid reason to ever try my hand with one
It's interesting, but holy shit tooling is primitive. Like, if you thought embedded toolchains blow goats, you've never seen an FPGA build get an hour in and then fail. Or the build complete, but because you misspelled something it inferred a wire and you just broke the build.
i continue to be mesmerized by FPGA and wish i had a valid reason to ever try my hand with one
It's interesting, but holy shit tooling is primitive. Like, if you thought embedded toolchains blow goats, you've never seen an FPGA build get an hour in and then fail. Or the build complete, but because you misspelled something it inferred a wire and you just broke the build.
Ugh.
And build times are atrocious.
I haven't played with FPGAs in ages. I have even been thinking of firing up an AWS EC2 FPGA instance to play around... But then I remember just what you said.
i continue to be mesmerized by FPGA and wish i had a valid reason to ever try my hand with one
It's interesting, but holy shit tooling is primitive. Like, if you thought embedded toolchains blow goats, you've never seen an FPGA build get an hour in and then fail. Or the build complete, but because you misspelled something it inferred a wire and you just broke the build.
Ugh.
And build times are atrocious.
I haven't played with FPGAs in ages. I have even been thinking of firing up an AWS EC2 FPGA instance to play around... But then I remember just what you said.
Oh my gosh
I'm embarrassed to admit that I didn't know AWS EC2 FPGA instances were a thing!
And they've done the tedious parts of hooking the FPGAs up to a PCIe bus, and adding DDR!
This is weirdly awesome!
Edit: Partially related: I tell you what though. I find that it's much easier (and more critical) to do unit tests because of the massive compile times.
It's very easy to justify spending 2-4 hours writing extensive unit tests for a module, if those tests will save me 1-2 recompiles in the future.
Sure would be nice if people at least compiled their code before pushing to master. Don't (clap) break (clap) the (clap) build (clap)
We've gone with public shaming. If the master build breaks, whoever checked in the broken code gets to wear the hat of shame for the rest of the day, or until someone else breaks the build.
lol, it is actually 2 different commits that break the build in different places. Good work, guys. You took the time to rebase, but couldn't be bothered to compile your code.
I've occasionally broken things by writing and testing code, reviewer requests changes, and either the requested changes were logically wrong or I just fat fingered and re-uploaded without checking
I've occasionally broken things by writing and testing code, reviewer requests changes, and either the requested changes were logically wrong or I just fat fingered and re-uploaded without checking
Yeah, I've done this as well but this particular repo does not have mandatory reviews on.
You define a layout matching the string you're trying to parse a time/date from. You need to use those exact values for day/hour/year/minute etc, because that's what Go looks at to map values in the string against.
The mnemonic is that the base string is " 01/02 03:04:05PM '06 -0700", but honestly I just get more confused by "15:04" than "H:m".
No, that doesn't make sense. :rotate:
:rotate: :rotate: :rotate:
Your function is swallowing errors.
do I incur any performance overhead by including extra headers in a given class?
I'm getting to that point where I have some files which have dozens of imports and I want to clean it up with some consolidated import header files, but I don't know if that could potentially hurt my app by increasing memory overhead somehow
I 100% agree with you, but the emphasis is mine.
it's beautiful
I think... I think one of our devs has been releasing some FPGA images that don't meet worst case fMax under some conditions.
Their excuse: it's just the slow model that's failing.
Quick link for those who are blessed enough to ignore fMax.
And here I am sitting on 1k of FIR coefficients for just one of the digital filters in the system!
Timing violations are great
If you like nondeterministic builds
Personally, I nondeterministic love builds.
Do you, really?
Even better, basically that whole office is on PTO today and off Monday, so they broke the fucking build for 4 days.
Is your company big enough to invest in CI that leverages "git request-pull" in an automated fashion?
Because that's civilization right there.
This project uses Bamboo, actually, but they don't have it setup to do real CI - just nightlies.