Mostly the actual "run things" bit. Linting on commit and running tests on push is pretty much what I want.
If you've got something like Jenkins running, it's pretty easy to have it run all your tests every time it sees a commit to a repo. Does nice reports, etc etc too.
Jesus christ do not do this.
Watching people bitch about the build server being too slow because they cannot build locally is just the dumbest thing when you have piles of developer workstations more than capable of it.
What you do is make a file called "autogen.sh" and as part of it have it set up symlinks in your .git/hooks folder for the pre-commit hooks you want. The best way is to just call into the bits of your Makefile or whatever that you want run.
CI servers should be a validation check, not "the build system". The build system should always work locally as a first priority.
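A minimal sketch of that autogen.sh idea, assuming the hook script is tracked at scripts/pre-commit and delegates to a Makefile `lint` target (the paths and target name are placeholders, not anything standard):

```shell
#!/bin/sh
# Hypothetical autogen.sh fragment: symlink a tracked hook into .git/hooks so
# every checkout runs the same pre-commit checks.
set -eu

REPO_ROOT=$(pwd)
mkdir -p "$REPO_ROOT/scripts" "$REPO_ROOT/.git/hooks"

# The tracked hook just delegates to the build system.
cat > "$REPO_ROOT/scripts/pre-commit" <<'EOF'
#!/bin/sh
exec make lint
EOF
chmod +x "$REPO_ROOT/scripts/pre-commit"

# -sf: force-replace any stale link so edits to the tracked script take effect.
ln -sf "$REPO_ROOT/scripts/pre-commit" "$REPO_ROOT/.git/hooks/pre-commit"
```

Since .git/hooks isn't versioned, the symlink is what lets the hook itself live in the repo.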
I get what you're saying here. But, sometimes it's nice to chuck the work over the fence, power down my station, go to the pub, and wait for an e-mail to tell me it's done.
Who said anything about not building locally? If you're pushing your work up, the CI system "should" be testing it then, and if you haven't run any of the tests you added (THAT YOU DID ADD RIGHT RIGHT HUH RIGHT) before hitting "push", I feel like there's something wrong that a commit hook wouldn't fix.
Also, depending on how long your builds take, you just delayed "push" for quite some time perhaps, which is inconvenient.
Yeah, that's exactly what we want to avoid. CI is for preparing things to deploy to staging/production, I want pre-commit tests running on feature branches.
I only ever got the ID from that Scan(), everything else was zero-valued.
TargetToken was NULL in the DB for all rows (working on a migration), and scanning a NULL into a plain string makes Scan bail with an error, so every destination after it is left zero-valued. I changed that to
rows.Scan(&row.ID, &row.Data, &row.Timestamp)
...and now I get the actual DB values for Data and Timestamp too.
Yeah, that's exactly what it's doing for us. (And other things, mind.) You have a feature branch, you push it up, and while you're waiting on review Jenkins tells everyone if the tests work anywhere other than your machine, or your branch would break the build, etc. If the other devs AND Jenkins sign off on your branch, the PR for merge goes through.
Generally doesn't take more than a minute or two to build/test I'd say. Longer if you're staring at it and impatient, of course.
I think there's room for both. At my company, we rely heavily on Jenkins, because running the full test suite takes ~20 minutes (split across 27 machines). Running the full test suite on your local machine takes hours. (Devs do run tests locally, of course, but they usually run just the files they are editing.)
So, our pre-commit hook runs eslint _only on the files being checked in_, which is super fast and can catch a lot of would-be runtime errors in Javascript. Then at PR time, you run the test suite on Jenkins.
Are you a Software Engineer living in Seattle? HBO is hiring, message me.
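For reference, a pre-commit hook along those lines might look like this (a sketch; it assumes eslint is on PATH, and only the git plumbing here is load-bearing):

```shell
#!/bin/sh
# Lint only the JS files staged for this commit.
# --cached: diff the index; --diff-filter=ACM: added/copied/modified, so
# deleted files don't get passed to the linter.
FILES=$(git diff --cached --name-only --diff-filter=ACM -- '*.js')

# Nothing staged that we care about: let the commit through.
[ -z "$FILES" ] && exit 0

# A nonzero eslint exit aborts the commit.
echo "$FILES" | xargs eslint
```

Because it only sees the staged file list, this stays fast no matter how big the repo gets.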
a part of my webserver stack doesn't have a proper restart command, and I've been going about issuing the restart by manually looking up the pid, killing the process, and rerunning it
is there a unix'y way to automatically determine the pid based on process name/user, etc, so I can just write a script?
You can grep a process list but I hate that method.
Use pgrep/pkill if you have 'em. pgrep lets you find/list processes by name/owner/etc., or find the latest match, and so on. pkill works on the same queries but actually sends the signals.
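If pgrep is new to you, the matching behavior is easy to try out; here `sleep` stands in for the server process:

```shell
#!/bin/sh
# pgrep -x matches the exact process name; -n picks the newest match.
# Add -u USER to restrict matches to one user's processes.
sleep 60 &
BGPID=$!

FOUND=$(pgrep -n -x sleep)
echo "started=$BGPID found=$FOUND"

# pkill takes the same flags but sends a signal (SIGTERM by default)
# instead of printing PIDs.
pkill -n -x sleep
```

A restart script is then just "pkill the old one, start a new one".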
+1
I guess the other way to do it would be to wrap it in a systemd service or init.d script, if that's an option.
Another possibility that would make the daemon "play nice" (work like most other daemons) would be to wrap it in a startup script. Check out scripts in /etc/init.d for examples. A common tactic for programs that don't write their own pidfiles is that "myserver start" starts the program in the background, then writes the value of "$!" to "myserver.pid". Running "myserver stop" just runs "kill `cat myserver.pid`" and then erases the pidfile.
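The pidfile dance described there, condensed into a runnable sketch ("sleep 60" and "myserver.pid" are stand-ins; a real init script would put start and stop in separate cases):

```shell
#!/bin/sh
# Pidfile pattern: background the daemon, record $! (the PID of the last
# background job), and later stop it by signalling the recorded PID.
PIDFILE=./myserver.pid

# "start": launch in the background and write the pidfile.
sleep 60 &
echo $! > "$PIDFILE"

# "stop": kill the recorded PID, then remove the now-stale pidfile.
kill "$(cat "$PIDFILE")"
rm -f "$PIDFILE"
```

The pidfile is the whole trick: it's what lets a later invocation of the script find the process without grepping for it.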
If you have a full test suite that takes hours on your local machine, then you're never going to run it locally, obviously. That's why you need multiple test suites!
Split it up into unit tests (which should be hella fast) and acceptance tests, get the unit test build under a minute (hopefully substantially so), and then you can get into a state where you've got much better viable pre-commit tests. That'll then encourage smaller commits and when you do break stuff, you can catch it earlier and not build atop it.
As others have said, I'm pretty sure the standard thing to do here is run it under a service manager. We use supervisor at my startup. It's incredibly simple to set up and use.
I think the main reason would be that you have a deployment structure where it's a moving target and you just don't want to bother trying to nail it down with links/refs?
All my processes are either pm2 or supervisor, though.
Yeah, our unit tests are speedy as hell and run on every push to a feature branch. (And if you hit the local "go" button.)
By comparison, the single UI test I added as a placeholder that did nothing more than "Assert(true)" took 3 seconds less than the entire (very large) unit suite. These will NOT be running on every checkin...
I wrote some initial integration tests for our API that naturally took a second per API call with setup/teardown and stuff.
Then I said fuck it and did it right and mocked the actual HTTP call and inject it straight into the router so it's all code and no HTTP and now it's blazing fast.
That's because you are a bad boy(tm) who has still not migrated to the one true overlord......systemd!
Yesterday: put out some fires on a service because of course prod has cases that never showed up in testing/staging because why would data get malformed like that
Ended the day with things fixed in prod but the service on staging was a mess, but I said fuck it and went home.
Today: Going to fix things on staging by just wiping the DB so it can build fresh data. Except now staging is running normally?
Yeah, I have no idea. Better burn it with fire just in case, because that shit is suspicious.
It sounds as if you are only testing the case where you do things "right", which is not how you should be writing tests. Malformed data, bad input, and malicious input are three things you always test as the other half of the success story.
I wasn't in a test though, it was an unknown (to me) case. This old service used numeric IDs, and I migrated that to using tokens to prevent IDOR attacks as is praxis. There's an internal service I can poll to get the token for an ID.
Except it turns out that in prod, there can exist ancient IDs that have no corresponding token, and my whole assumption was "everything has a token".
I'm going to consider myself lucky if the final batch of UI tests runs in less than 2 hours.
and if so how confident are you in that clean button?
I run into things like that all the time and it's because Xcode's "Clean" is bullshit that doesn't actually do a full clean, so I have to go and vacuum out cache files at least once a month.
About the only thing I didn't do was nuke the repo and restart from a clean clone. I have a couple suspicions around "maybe I should reboot that poor little dev device", since I think there's some timeout happening before the tests can attach. (Not that the report says exactly what happened, mind. Just hints.)
Hell, it's Xcode; could be I fixed it by just swapping from device to sim. I think this happened to me once, and "adding another test" fixed it, even.
The internet is full of spiders.
The phone spiders have evolved
Mac/Windows compatible is pretty much the only requirement.
We have one at work that sorts the xcode project. Is unexpectedly nice! Organised!
Example-wise, I use a pre-commit hook to run linting tools. It's quite effective at catching problems before they get committed.
ooohhh pgrep will definitely do the job, thanks
I have supervisor running gunicorn, and I can't figure out how to tell supervisor to restart gunicorn
so I've just been killing and rebooting supervisor
No, it actually isn't guaranteed to be cryptographically secure, depending on how often it's read from versus how often new entropy feeds into it.
Are you in control of your .conf files? Retry/restart is configurable with supervisor via "autorestart". Show us the conf file if you are struggling.
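For reference, a minimal program section might look something like this (paths, app name, and port are hypothetical; with autorestart=true supervisord relaunches gunicorn if it dies, and `supervisorctl restart gunicorn` does it on demand):

```ini
; hypothetical /etc/supervisor/conf.d/gunicorn.conf
[program:gunicorn]
command=/usr/local/bin/gunicorn myapp.wsgi:application --bind 127.0.0.1:8000
directory=/srv/myapp
autostart=true
autorestart=true
stopsignal=TERM
```

After editing the conf, `supervisorctl reread` followed by `supervisorctl update` picks up the change without bouncing supervisord itself.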
Except it turns out that in prod, there can exist ancient IDs that have no corresponding token, and my whole assumption was "everything has a token".
Boom went the migration.
It is cryptographically secure. The entropy random/urandom discussions are boogeymen and experts have broken this down before.
https://www.2uo.de/myths-about-urandom/
That and you can also trigger a restart using "supervisorctl restart <name>" from a script.
yeah that did it
i don't know how this escaped my attention
Integrate test matcher framework.
Tests run.
Tests don't run.
Remove framework.
Tests run.
Put it back.
Tests don't run.
It's as if they stopped existing. No build failures, no warnings, just nothing happens.
Switch targets to run on a different device... Tests run.
In short: It's offering time. Hopefully not "burnt".