I never bother testing that kind of stuff. I know that my individually injected components are working because they're tested, but I'm not interested in testing what strings them together.
Worst case I go "oops, forgot to inject the Foo repository."
Anyone ever done heavy test-driven development with C#? I'm having an interesting problem.
In my quest to get close to 100% code coverage, I extracted almost all of my logic out of my main method into a Bootstrapper class. This is the class that registers my services and creates the service provider. So far so good.
How...do I mock this class? I understand that I'm supposed to unit test everything, but I am having a hard time figuring out how to unit test the entry point into the program without...actually calling the Start() method.
Is this a thing like recursion where I need to just do it and I'll see that it's easier than I thought in a self-referential sort of way? How do I mock a dependency that is responsible for...registering all of my dependencies?
Edit: If the answer to this question is "You don't.", that's acceptable. I'm just trying to follow TDD as sort of a practice exercise for myself and I'm having a hard time actually testing that Start() method. Everything it calls is tested, just not itself.
Are the services yours, or are they external?
If this class interfaces with external libraries, you either
(1) don't test the functionality which explicitly deals with external libraries or
(2) interface out an adapter for those external libraries, test calls to the adapter with a MockAdapter, and don't unit test the adapter
You don't unit test the entry point into the program because the entry point should be paper thin. Here's (as far as I understand it) how I'd structure it.
interface IService {
    void mobilizeAndStart();
}

class MockService : IService {
    public int mobilizeAndStartCallCount = 0;
    public void mobilizeAndStart() {
        mobilizeAndStartCallCount++;
    }
}

class Bootstrapper {
    public IService firstService;
    public IService secondService;
    public Bootstrapper(IService first, IService second) {
        firstService = first;
        secondService = second;
    }
    public virtual void start() {
        firstService.mobilizeAndStart();
        secondService.mobilizeAndStart();
    }
}

class MockBootstrapper : Bootstrapper {
    public int startCallCount = 0;
    public bool useBase = false;
    public MockBootstrapper(IService first, IService second) : base(first, second) { }
    public override void start() {
        startCallCount++;
        if (useBase) {
            base.start();
        }
    }
}

void main() {
    Bootstrapper bootstrapper = new Bootstrapper(new ProductionService(), new OtherProductionService());
    bootstrapper.start();
}

void testThatNeedsABootstrapper() {
    MockService service1 = new MockService();
    MockService service2 = new MockService();
    MockBootstrapper bootstrapper = new MockBootstrapper(service1, service2);
    bootstrapper.useBase = true;
    Assert.AreEqual(0, service1.mobilizeAndStartCallCount);
    Assert.AreEqual(0, service2.mobilizeAndStartCallCount);
    bootstrapper.start();
    Assert.AreEqual(1, service1.mobilizeAndStartCallCount);
    Assert.AreEqual(1, service2.mobilizeAndStartCallCount);
    // Whatever else
}
Just a couple notes on that.
First of all, the asserts wouldn't normally be there, because you don't usually test a mock object, which is basically what that test is doing. If you were doing actual testing, that bootstrapper would be a real one, not a mock one.
However, that mock will let you make a bootstrapper that will either call through to its child services or not. You can also add methods to it that expose private members which won't be available on the real bootstrapper if you need to; I've made them public for convenience in the example.
If each of your services is different and interfacing them wouldn't make sense, then mock out each of your services, or allow them to be null.
Also, a big question: why do you need a mock bootstrapper? Is it to set up your test environment? If so, I get that, but most of your unit tests shouldn't need to interact with the bootstrapper, since you should just be calling into your various objects directly most of the time, unless your architecture mandates doing that indirectly. (It actually does with my game, where much of the code lives in systems which are all called in order by a black-box object, meaning I have to set up a situation which will trigger the system, then fire them off, then test to make sure it's run.)
Oh I'm mostly just doing this for fun. It's the only part of my program that isn't unit tested. But based on the quirks of .NET Core, it probably doesn't need to be tested.
I don't really need a mock bootstrapper. I'm just trying to get to 100% code coverage because it's a fun exercise, but I'm willing to accept that the class that registers my services with the .NET service provider doesn't need to be tested.
Edit: I can give you a link to the github if you're super curious, but I'm sure you can find it without too much trouble :P
I swear to god, when you work in "devops" what it actually means is "spend hours watching CI builds over and over to make sure you got them right".
What I want is a reverse debugger which tracks all state as the CI system runs over, I guess, a KVM instance? And then just let me write the build script interactively till I get it right.
Then also add a "tarnish" system for secrets so the second I log off everything resets.
after a few weeks of fiddling and dusty tomes pertaining to Docker + Django I finally have a plausible configuration for my server that runs in a production mode
I was really worried that Docker would somehow incur a huge performance penalty for me but so far, the performance seems very acceptable, even in line with my expectations
this is a huge relief.. i feel like i am living in.................... the future
You are using python, mate. The performance penalty is baked in.
I've got a bunch of Django stuff running in docker containers, it works great. My project which goes live this weekend (I'll pop it into my signature once it's live) is a next.js/react front end with a Django + django channels + graphene server powering the front end and iOS apps all running on AWS ECS.
so for a ruby on rails work project we're ingesting CSVs and creating instances of the location model for each row in the CSV. only issue is the CSV is generated by a third party and so the header names are often different.
I can tweak the header names we search for in the code base but the end goal is to let a non-programmer upload the CSV. what are my options? My first thought was to give the user a form to input what header name corresponds to 'address 1' or 'opening hours' or whatever.
I guess I could also expose the attribute names on the upload form and let the user edit the CSV file so they match
My wife did this exact thing in Ruby at work.
She took the approach of uploading the csv on the first page, then the second page displays drop down selections for each attribute to fill out.
It's worked pretty well, given all the different data sources.
Yeah, this. Whenever I've dealt with CSV imports the import is always a two-step thing. During step one you gather the headers and a random selection of cells from that column and then in step two you display a table with the example data and have a dropdown containing all the header fields above so the user can then manually map them.
It's trivial to do a best-guess process where it'll try and match "address1" to "Address 1/Address-1/Address_1", etc, but for the most part you just want them to define the mapping and have a final say of "yes, this is fine"
Then when there's a problem with the import it's their fault and not the magic black box
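The best-guess matching described above can be sketched in a few lines. This is a minimal Python illustration (the thread's actual project is Rails, and `KNOWN_ATTRIBUTES` is a made-up example list): normalize both sides by dropping spaces, dashes, and underscores, then use the result to preselect the dropdowns, leaving the final say to the user.

```python
import re

# Hypothetical canonical attribute names the importer knows about
KNOWN_ATTRIBUTES = ["address1", "address2", "opening_hours", "city"]

def normalize(header):
    """Lower-case and strip spaces, dashes, and underscores."""
    return re.sub(r"[\s\-_]+", "", header.lower())

def guess_mapping(csv_headers):
    """Best-guess map from CSV header -> known attribute (None if no match).
    Used only to preselect the mapping dropdowns; the user confirms."""
    normalized_known = {normalize(a): a for a in KNOWN_ATTRIBUTES}
    return {h: normalized_known.get(normalize(h)) for h in csv_headers}

print(guess_mapping(["Address 1", "Address-1", "Opening Hours", "Fax"]))
# → {'Address 1': 'address1', 'Address-1': 'address1',
#    'Opening Hours': 'opening_hours', 'Fax': None}
```

Anything that comes back `None` (like "Fax" here) just shows an empty dropdown, so unmatched third-party headers degrade to the manual mapping step rather than failing the import.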
The one about the fucking space hairdresser and the cowboy. He's got a tinfoil pal and a pedal bin
I have never hit agree faster than when I saw your last paragraph
So you do only unit testing, and no integration testing?
My 20 years of development have pushed me quite the opposite direction. I find unit testing individual components to be marginally useful, outside of testing basic algorithms and business logic. I find the worst bugs are found, and then tested for, as you start to glue the components together. I've become a huge proponent of edge to edge testing in server stacks.
From the pdf:
"An even more professional approach is to leave the assertions in
the code when you ship, and to automatically file a bug report
on behalf of the end user and perhaps to try to re-start the
application every time an assertion fails."
I do not have words to describe "It's okay, we'll just crash the application when we run into a bug."
Assertions are by definition assumptions that your code relies upon, if they fail you either would have crashed anyway or you produce wrong results, which might be worse
This is the attitude of most kernels - you see anything wrong you panic. Call a function at the wrong priority level, bluescreen. Return an invalid error code, bluescreen. Do anything with a bad handle, bluescreen. Cause a page fault, bluescreen
Depends. Is your assertion in a place where you WOULD crash? (Unhandled nil, etc) Is it in a place where something would go "wrong" but remain recoverable for your user? (Incorrect data, weird UI, etc.) If you'd crash anyway in that spot, fine, whatever. If you'd only otherwise have a degraded user experience, don't make it worse by going "bang".
I mean how pissed would you be if Chrome crashed every time there was a page rendering error, or iTunes/Spotify/YouTube crashed when they hit buffering for too long? (Er. Mac iTunes only, I have heard stories about the Windows version.)
He definitely goes overboard yes, but he's not wrong that there are a lot of low value unit tests
Take the copy constructor test example. Did you also remember to update the move constructor? All initializing constructors? Copy assignment? Move assignment?
If you didn't, your code is wrong but your tests are still green; if you did, all you're testing now is that the copy constructor for the subtypes produces something operator== thinks is the same.
Now, if your constructor is complex and more than just an initializer list sure, go ahead
Would you use a database if every so often it overwrote the wrong row because yolo, it's just a small data error, or would you rather an assert fire? Some code can tolerate errors, some cannot
Chrome often does crash, but they recover from that, restarting the renderer and reloading the page
The guy does have some wacky rationales, but I do think he arrives at some useful generalizations:
- assertions (at least, smart ones that don't just blindly crash) are often under-utilized
- unit tests are often over-utilized, and can sometimes provide a false sense of security
- integration tests typically provide the most value, because context is essential
- tests can both improve and harm maintainability, and must be assessed as such
- designing good tests may well require more care than designing the original code
- skepticism is a key ingredient of good testing practices, whether that's skepticism of the system under test, or of your own ability to test it, or of testing dogma in general
Those are the statements I'd stand behind. "Unit tests are unlikely to test more than one trillionth of the functionality of any given method in a reasonable testing cycle." is indeed ridiculous, and seems to totally ignore the concept of equivalence classes.
The majority of the tests I write would probably count as integration tests but I do a fair amount of both.
For me I treat them the same. I write unit tests to ensure the method works HOW I expect, and I simultaneously write small integration tests to make sure it works WHERE I expect
So I spend a week or two getting rid of all the warnings in my game within Visual Studio...
Then, when I port my code to Linux, I spend another two days removing the warnings in GCC.
Well, now that's a list I can totally get behind. That "trillionth" extract, though... Crazy pants.
Alright, super successful launch this weekend. No major issues, everything worked, everything is performing well. Hoping I get a chance to do an official write up for the company to put on the blog about the architecture and tooling used - Docker, ECS Fargate, Terraform, Django + Django-Channels, graphql, websockets. Lots of fun stuff.
Link going into the signature. Of course that's just a retail company website, so all the stuff most of us find exciting is hidden.
We want to use our alerts as a KPI for the backend team, and this is what I came up with. Rather than counting just the number of alerts, this is an aggregated value of both the number of alerts, and their duration. So while an alert is active, the value will keep ticking up. The value itself is pretty meaningless, we just want to track how it moves to see how we're doing.
We're reporting this weekly, so it's the total value over the last seven days. The high-prio line drops sharply because it's now been seven days since we had that one high-prio alert.
Medium-level alerts are 90% services screaming about RAM. We should really get around to fixing that so we can bring our Magic Number down.
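The duration-weighted metric described above can be sketched directly: sum the minutes each alert overlaps a trailing window, treating still-active alerts as ending "now" so the value keeps ticking up. This is a minimal Python sketch under assumed data shapes (the real alerting backend isn't specified in the thread).

```python
from datetime import datetime, timedelta

def alert_minutes(alerts, window_end, window_days=7):
    """Total minutes of alert activity within the trailing window.
    alerts: list of (start, end) datetimes; end is None for still-active alerts."""
    window_start = window_end - timedelta(days=window_days)
    total = 0.0
    for start, end in alerts:
        end = end or window_end                    # active alerts keep ticking up
        overlap_start = max(start, window_start)   # clip to the reporting window
        overlap_end = min(end, window_end)
        if overlap_end > overlap_start:
            total += (overlap_end - overlap_start).total_seconds() / 60
    return total

now = datetime(2019, 1, 8)
alerts = [
    (datetime(2019, 1, 7, 12, 0), datetime(2019, 1, 7, 12, 30)),  # 30 min alert
    (datetime(2018, 12, 25), datetime(2018, 12, 26)),             # outside the window
    (now - timedelta(minutes=10), None),                          # still active
]
print(alert_minutes(alerts, now))  # → 40.0
```

The window-clipping is also why the high-prio line drops sharply: once an alert's interval slides entirely past `window_start`, its contribution falls out of the seven-day total all at once.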
this lines up with my experience, and pretty much every developer pundit I've read or talked to who I respect has independently come to similar conclusions
in these cases, what knowledge is there other than anecdotal, and mine is that TDD guys seem to be obsessed with protocol and the mandate of their methodology above anything, including
- Deadlines
- Budgets
- The tastes and preferences of the developers around them, including supervisors
- Years worth of standards and conventions that they inherited upon being hired
- What the software they are working on is even supposed to do, seriously
Testing has become a religion, and like all religions, someone needs to eventually come along and find a way to adapt the fundamentals of testing to practical work. Converting Old Testament to New.
Unfortunately, these movements seem to die out or become obsolete before this ever gets to happen. *shrug emoji*
For my money, the issue with TDD is
T ->D<- D
that D right there. Some people take this extremely literally, and I have met real serious people who follow TDD who believe it's completely appropriate for them to spend months writing coverage tests without ever looking around them to see what really needs to get done, and in their eyes, this work is either equal to or explicitly more important than advancing feature set of their product, addressing a ticket, or doing anything other than making more little green check marks show up on the health page.
Yeah I am always skeptical of dogma, because dogma by definition are things you do instead of thinking critically about them.
There is some dogma that I follow. When I do TDD, I USUALLY write tests before code. Almost always, in fact. That's because when you don't, the tests tend to be worse- they are worse at covering their intended cases, worse at telling you you've done your job correctly, and they take extra work to fail the first time (which is really important for verifying your test will actually tell you when what you're doing is correct or incorrect). I don't always do this because sometimes there are good reasons not to, but for the vast majority of cases I do.
But like any methodology or paradigm, TDD must above all be practical. It's there to accomplish a task, and it's useless if it fails to do so, because the developer in question takes thirty years to do it, or gets too frustrated to continue, or can't properly integrate their code with code someone else wrote, or any number of other things.
I love using TDD because of the way it allows me to structure how I think and execute, and because as someone who is not the most detail oriented person, I can often forget or miss small details that unit tests and integration tests catch for me on a regular basis. In that way, it both plays to my strengths and mitigates one of my biggest weaknesses. Even so, though, I would react very poorly to someone who ritualized it, ESPECIALLY if they tried to push it onto me.
I'm glad it works for you and I fully believe it as I have met other people with similar experience.
As far as I am concerned:
What I have said is, life is short and there are only a finite number of hours in a day. So, we have to make choices about how we spend our time. If we spend it writing tests, that is time we are not spending doing something else. Each of us needs to assess how best to spend our time in order to maximize our results, both in quantity and quality. If people think that spending fifty percent of their time writing tests maximizes their results—okay for them. I'm sure that's not true for me—I'd rather spend that time thinking about my problem. I'm certain that, for me, this produces better solutions, with fewer defects, than any other use of my time. A bad design with a complete test suite is still a bad design.
I still end up having tests, of course, just most certainly not doing TDD.
I'm sorry, how can an alert count possibly be a KPI? What does an alert translate to?
Eh, I don't like the line not dropping when the alert resolves. I'd switch to a rolling avg (alerts/days) and set them to separate scales or do a log scale. Your dual y axes are hiding important info as you must look at the footnote to figure out which applies to what line.
Anyone writing tests to get to 100% code coverage is doing it wrong on many levels.
1) Code coverage is separate from code-path coverage, and in any non-trivial program the number of code paths is intractably large.
2) You should be testing as a black box for unit tests.
3) A very large percentage of bugs (I don't have the research to hand, but it's something like 40%, I think) is caused by missing logic paths, so having 100% code coverage (heck, even 100% code-path coverage) misses bugs.
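Point 3 is easy to demonstrate. In this deliberately contrived Python sketch, two tests execute every line of the function, yet the real bug, a validation branch that was never written, goes undetected, because coverage can only measure code that exists.

```python
def apply_discount(price, is_member):
    """Bug by omission: negative prices are never rejected,
    yet two simple tests cover every line of this function."""
    if is_member:
        price = price * 0.9
    return round(price, 2)

# These two calls exercise 100% of the lines (both branches of the if)...
assert apply_discount(100, True) == 90.0
assert apply_discount(100, False) == 100
# ...but the missing logic path means garbage still sails through:
assert apply_discount(-50, True) == -45.0  # "passes", which is exactly the problem
```

A coverage tool would report this function as fully covered; only a test that encodes the requirement "price must be non-negative" (a test for code that doesn't exist yet) would catch the hole.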
I won't worry about it then, thanks.
hmm, this seems like a fun/interesting method to try on a little project and see how it goes
User can then click on each column header, which opens a drop down menu and allows them to manually assign that column to an attribute
Do I have that right?
Yep. I'd also do some best guess matching to preselect some of the headers for the obvious stuff
This statement is nonsense. Is it a joke? I have to assume it is a joke.
Like the only argument I see is that WE AREN'T GETTING ANYTHING DONE ANYMORE! Uhh, yes we are?
Take the copy constructor test example. Did you also remember to update the move constructor? All initializing constructors? Copy assignment? Move assignment?
If you didn't your code is wrong but your tests are still green, if you did all you are testing now is that the copy constructor for the subtypes produces something operator == thinks is the same
Now, if your constructor is complex and more than just an initializer list sure, go ahead
Would you use a database if every so often it overwrote the wrong row because yolo, it's just a small data error, or would you rather an assert fire? Some code can tolerate errors, some cannot
Chrome often does crash, but they recover from that, restarting the renderer and reloading the page
- assertions (at least, smart ones that don't just blindly crash) are often under-utilized
- unit tests are often over-utilized, and can sometimes provide a false sense of security
- integration tests typically provide the most value, because context is essential
- tests can both improve and harm maintainability, and must be assessed as such
- designing good tests may well require more care than designing the original code
- skepticism is a key ingredient of good testing practices, whether that's skepticism of the system under test, or of your own ability to test it, or of testing dogma in general
Those are the statements I'd stand behind. "Unit tests are unlikely to test more than one trillionth of the functionality of any given method in a reasonable testing cycle." is indeed ridiculous, and seems to totally ignore the concept of equivalence classes.
For me, I treat them the same. I write unit tests to ensure the method works HOW I expect, and I simultaneously write small integration tests to make sure it works WHERE I expect.
Then, when I port my code to Linux, I spend another two days removing the warnings in GCC.
I'm porting my game to Android.
It uses Clang.
Goddammit!
Well, now that's a list I can totally get behind. That "trillionth" extract, though... Crazy pants.
Link going into the signature. Of course that's just a retail company website, so all the stuff most of us find exciting is hidden.
We want to use our alerts as a KPI for the backend team, and this is what I came up with. Rather than counting just the number of alerts, this is an aggregated value of both the number of alerts, and their duration. So while an alert is active, the value will keep ticking up. The value itself is pretty meaningless, we just want to track how it moves to see how we're doing.
We're reporting this weekly, so it's the total value over the last seven days. The high-prio line drops sharply at the point seven days after that one high-prio alert, when it falls out of the window.
Medium-level alerts are 90% services screaming about RAM. We should really get around to fixing that so we can bring our Magic Number down.
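If it helps anyone, the aggregation can be sketched like this. `Alert` and the hour-based timestamps are my own assumptions about the setup described above; the idea is that the score for a window is the total alert-active time overlapping it, so an open alert keeps ticking the number up.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Assumed shape: each alert has a start and end timestamp in hours;
// end_h can lie in the future if the alert is still active.
struct Alert { double start_h, end_h; };

// Score = total alert-active hours overlapping the last seven days.
double alert_score(const std::vector<Alert>& alerts, double now_h) {
    const double window_start = now_h - 7 * 24;
    double total = 0;
    for (const auto& a : alerts) {
        double lo = std::max(a.start_h, window_start);
        double hi = std::min(a.end_h, now_h);
        if (hi > lo) total += hi - lo;  // only the overlap counts
    }
    return total;
}
```

This naturally weights one twelve-hour outage the same as twelve one-hour ones, which may or may not be what you want from a KPI.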
this lines up with my experience, and pretty much every developer pundit I've read or talked to who I respect has independently come to similar conclusions
in these cases, what knowledge is there other than anecdotal? and mine is that TDD guys seem to be obsessed with protocol and the mandate of their methodology above anything, including:
- Deadlines
- Budgets
- The tastes and preferences of the developers around them, including supervisors
- Years worth of standards and conventions that they inherited upon being hired
- What the software they are working on is even supposed to do, seriously
Testing has become a religion, and like all religions, someone needs to eventually come along and find a way to adapt the fundamentals of testing to practical work. Converting Old Testament to New.
Unfortunately, these movements seem to die out or become obsolete before this ever gets to happen. *shrug emoji*
For my money, the issue with TDD is
T ->D<- D
that D right there. Some people take this extremely literally. I have met real, serious people who follow TDD and believe it's completely appropriate to spend months writing coverage tests without ever looking around them to see what really needs to get done; in their eyes, this work is either equal to or explicitly more important than advancing the feature set of their product, addressing a ticket, or doing anything other than making more little green check marks show up on the health page.
these people and I do not get along
There is some dogma that I follow. When I do TDD, I USUALLY write tests before code. Almost always, in fact. That's because when you don't, the tests tend to be worse: they are worse at covering their intended cases, worse at telling you you've done your job correctly, and they take extra work to fail the first time (which is really important for verifying your test will actually tell you when what you're doing is correct or incorrect). I don't always do this because sometimes there are good reasons not to, but for the vast majority of cases I do.
But like any methodology or paradigm, TDD must above all be practical. It's there to accomplish a task, and it's useless if it fails to do so, because the developer in question takes thirty years to do it, or gets too frustrated to continue, or can't properly integrate their code with code someone else wrote, or any number of other things.
I love using TDD because of the way it allows me to structure how I think and execute, and because as someone who is not the most detail oriented person, I can often forget or miss small details that unit tests and integration tests catch for me on a regular basis. In that way, it both plays to my strengths and mitigates one of my biggest weaknesses. Even so, though, I would react very poorly to someone who ritualized it, ESPECIALLY if they tried to push it onto me.
As far as I am concerned:
I still end up having tests, of course, just most certainly not doing TDD.
I'm sorry, how can an alert count possibly be a KPI? What does an alert translate to?
Eh, I don't like the line not dropping when the alert resolves. I'd switch to a rolling average (alerts/day) and set them to separate scales, or use a log scale. Your dual y-axes are hiding important info, since you must look at the footnote to figure out which axis applies to which line.
1) Code coverage is separate from code path coverage; in any non-trivial program, the number of code paths is intractably large.
2) You should treat the unit as a black box when writing unit tests.
3) A very large percentage of bugs (I don't have the research to hand, but I think it's something like 40%) are caused by missing logic paths, so having 100% code coverage (heck, even 100% code path coverage) still misses bugs.
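Point 3 is easy to demonstrate with a sketch (`discount_percent` and the 15% cap are invented): two tests execute every line, coverage reports 100%, and the bug lives in a path the code simply never wrote.

```cpp
#include <cassert>

// Spec (hypothetically) says the combined discount caps at 15%, but
// the cap branch was never written. Tests for (true, false) and
// (false, true) execute every line -- 100% line coverage -- yet the
// combined case returns 20, and no coverage metric can point at
// logic that doesn't exist.
int discount_percent(bool large_order, bool first_time) {
    int d = 0;
    if (large_order) d += 10;
    if (first_time) d += 10;
    return d;  // missing: if (d > 15) d = 15;
}
```

Branch coverage does no better here, since both branches of each `if` are exercised; the bug only shows up if someone thinks to test the *combination*.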