It's a limitation of Django's many-to-many relationships and the data binding/DB structure behind the scenes, combined with him changing things he didn't understand and then not running the tests I told him to run afterwards, because I knew these issues would come up. With M2M relationships, Django creates a default join table with FKs to the two models being related. If you need extra data on that intermediary table, which we do, you can specify an intermediary table to use, but a known, documented caveat is that a few of the ORM functions stop working on the relationship when you do that. He moved stuff from the similar classes up to the new superclass, including an attribute defining an M2M relationship, which would have broken the M2M relationship right there. His solution, it seems, was to remove the custom intermediary table... so now some parts of the code were storing/accessing data in the custom intermediate table and some were accessing the default table. So it all totally broke.
That in itself isn't that big of a deal. He doesn't have much experience with Django, and this was a good way for him to learn some of the intricacies of Django's ORM. But what annoys the hell out of me is that I said this was going to break, told him to watch out for that situation, that attribute, and the methods that touch that attribute specifically, provided links to Django's documentation on how those things work, provided links to documentation on how our implementation behaves and uses those things, provided tests to run after making changes to make sure it all still worked, and offered a couple of different suggestions on how to fix it if it broke in the way I expected... and all of that was apparently ignored, including the testing. Then proceeding to "fix" it while still obviously not understanding what was broken or why, after I said "This is broken, let's wait to fix it until after we discuss it and plan the fix, because there are a couple of options with different tradeoffs," is doubly frustrating.
Fortunately it turns out the "strange idea" he implemented was basically what I told him he'd probably have to do in the first place.
I've been with the company for 8 years and the owners have been checked out for the last few... I can't deal with it anymore. I have essentially run the 'sister' company for the past 3-4 years.
Hey that looks pretty swank on your resume at the least.
I really want to start a "peer recommendation" website where your peers can weigh in, give you praise, or list off what your responsibilities were, so you can use it in case of hostile feelings when leaving a company, especially in situations like that.
not a doctor, not a lawyer, examples I use may not be fully researched so don't take out of context plz, don't @ me
So it seems like when you just DL it raw, it's all the stuff it does slammed into single files. All the examples use it in a different way though, which makes it slightly annoying :P I guess I could DL it using customize and get the individual components.
I chose not to go this route because the gem is a couple of versions behind (both this and jquery-rails are). I'd rather just manually feed them into the asset pipeline (which I did).
Fair enough!
I want to get in on the PADev.net blog train. I've got some articles in mind about ATDD/TDD with Ruby.
Diablo 3 - DVG#1857
DVG
Thanks for the tip on Sublime Text 2, btw. I've been using E, which isn't as nice, and this dulls the pain being apart from TextMate when using my work PC.
GnomeTank
Yeah, Sublime is amazing. It uses TextMate color themes too, so if you're like me and like a particular color theme that TM has, Sublime can use it. (In my case, it's Grandson of Obsidian, which was forked from Son of Obsidian for Visual Studio, which was forked from Obsidian for Eclipse).
TDD is my biggest shortcoming with Rails. Actually, unit tests, period, are just a shortcoming for me. I tend to find them incredibly cumbersome and tedious to maintain. I start out with great intentions, but once most of my projects reach that middle level of maturity, I've just given up.
I don't know how to TDD. Part of it is psychological on my part (I need to write a whole bunch of extra code that won't do anything in the actual project? fuck that noise), and part of it is not knowing how to write tests that actually have meaning.
All the examples I've seen have the whole 'red-green-refactor' schtick and apply it to things that are fairly simple to get working anyway. Not exactly enticing me to change my workflow. But, since it's a 'thing', I should probably learn how to do it.
GnomeTank
Well, in real TDD, you're supposed to write the tests first. So the tests are meaningful because you say they are, and because they represent a meta-framework for your functionality.
I have several issues with this. First, it assumes you know enough about your requirements to write good tests, which is rarely the case in a lot of environments; bad requirements are just a fact of life in development. Second, it assumes you understand exactly how you're going to design the code unit that's being tested, because you have to write tests that harness to that design.
Exactly.
I'm much more of a small prototype guy. I need a feature? I make a small standalone version and test it. That allows me to play with it beyond staccato pass/fail conditions. Once it works, I incorporate it into the rest of the project and test some more. Rinse and repeat, refactoring where necessary.
Then again, I'm also someone who thinks the terminology associated with scrum is painfully lame, and that a lot of it comes across like some bored middle management person's wet dream. I think I have some sort of hangup about codifying the development process in general, outside of using source control. A lot of it seems either obvious or unnecessary, and largely a waste of time. But, I've never had to work in a team before, so I'm definitely not getting a full picture (yay working for myself/freelancing!).
GnomeTank
Codifying development processes is rarely done for the benefit of skilled developers. It's done for the benefit of average and below average developers, and the very middle-management you're talking about.
That has been my problem with attempting TDD. I rarely know enough about how something needs to work to write a unit test before I've written any real code. My second issue is that the unit test is code as well. I can write my real code under the assumption that the unit test is right, but how do I know the unit test is right in the first place? Either way I'm writing some sort of untested code first and then basing other code on that untested code. I'd rather write my real code, which the user/product owner/etc. can verify is working, first. Then I write my first unit tests. After that, the unit tests work very well as regression tests to make sure future changes don't screw it all up.
The only serious downside I have with this is that once you've got working code, sometimes it's hard to convince the business/product owner that it's still really important to write the unit tests rather than moving on to the next bit of functionality.
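That write-code-first, test-after flow in miniature, as a hedged sketch (the `slugify` function and its behavior are made up for illustration): once the code is verified by hand or by the product owner, tests pin down its current behavior so later changes can't silently break it.

```python
import unittest

def slugify(title):
    # The "real code" written first: lowercase a title and join the
    # words with hyphens. Verified manually before any tests existed.
    return "-".join(title.lower().split())

class SlugifyRegressionTests(unittest.TestCase):
    # Tests written after the fact, capturing behavior we know is right.
    def test_lowercases(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("a   b"), "a-b")
```

From here on, the suite is a tripwire: a refactor that changes the observable behavior fails a test before it ships.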
GnomeTank
Well, unit testing frameworks should be simple enough, and your tests should be simple enough, that basic logical validation by the developer will posit them to be correct.
Sounds like what's needed is a postmortem or something from a "real" project, since the TDD examples people have gone over seem to explain the concept but are too trite to really show the benefit.
So we have a templated unit test framework we developed ad-hoc while writing some heavily templated code, and it is pretty amazing. We can write up a series of templated unit tests, and then dispatch a collection of objects to test with them by changing the tag we use when running the test.
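A rough Python analogue of that templated setup (the type list and test bodies here are illustrative, not the actual framework): one set of test bodies, dispatched over a collection of types, so every type in the list gets the whole suite.

```python
import unittest

# The collection of types the shared tests are dispatched over; adding
# a new numeric-like class here runs it through every test below.
NUMERIC_TYPES = [int, float, complex]

class SharedNumericTests(unittest.TestCase):
    def test_additive_identity(self):
        for T in NUMERIC_TYPES:
            with self.subTest(type=T.__name__):
                self.assertEqual(T(3) + T(0), T(3))

    def test_multiplicative_identity(self):
        for T in NUMERIC_TYPES:
            with self.subTest(type=T.__name__):
                self.assertEqual(T(3) * T(1), T(3))
```

`subTest` keeps the failures labeled per type, so when a new class is added you can see exactly which shared tests it starts failing.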
Halibut
edited May 2012
Unit tests do 2 things. They allow you to verify that a small piece of logic actually does what you intend it to do. And they provide a safety net so that if you need to do some refactoring, you can verify that the changes result in a program that still executes the way it did before. Changing how something works (because requirements changed/solidified/whatever), necessitates a change to the test. At that point you can make a judgment call to modify your existing test, or to blow it away and write new tests that more closely fit your requirements. A lot of times, I basically treat unit tests the same way I treat regular expressions. If I can't quickly change it to match the new requirement, I just re-write it.
The most important part of TDD for me is that it forces me to focus on the actual requirements, rather than the implementation. If you don't know the requirements (or can't at least guess at them), then how are you supposed to know what the system should do? Even if the requirements are vague, or haven't been fully fleshed out, you still want the system to behave in a well-defined way, otherwise, what the hell are you even doing? Requirements change a lot, but it's almost never a big deal to change the tests, unless your code is so highly coupled that a single change breaks a significant portion of your tests.
The red-refactor-until-green loop is pretty cool, but the examples out there take this way overboard for what you will do most days. I've never attempted to run a unit test if I have a compilation error. I never run a unit test on a piece of code that I know isn't implemented yet (returns null or throws some kind of NotYetImplementedException). Just write your tests, write your code, and start the red-red-green loop from that point.
Also, find a good mocking framework. It's virtually required for testing anything remotely complicated. I've used a bunch of frameworks for Java, but Mockito is my favorite. I've used Moq for C# which was also very good (lambdas make setting up mocks very easy).
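Python's stdlib `unittest.mock` fills the same role as Mockito/Moq. A sketch with hypothetical service names: the mock stands in for a collaborator you don't want a unit test to actually hit, and then lets you verify the interaction.

```python
from unittest import mock

class BillingService:
    def __init__(self, gateway):
        self.gateway = gateway

    def charge(self, user_id, cents):
        # Delegates to an external payment gateway -- exactly the kind
        # of dependency you mock out in a unit test.
        return self.gateway.submit(user_id, cents)

# Stub the gateway and pin its return value.
gateway = mock.Mock()
gateway.submit.return_value = "ok"

svc = BillingService(gateway)
assert svc.charge(42, 1999) == "ok"
# Verify the interaction, not just the result.
gateway.submit.assert_called_once_with(42, 1999)
```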
If you find your code has a bunch of conditionals which makes it hard to test, then you're probably testing too much. Break that method into smaller pieces. This sometimes forces you to break encapsulation, but in my opinion the trade-off is almost always worth it.
GnomeTank
edited May 2012
At the end of the day, TDD is like anything else. If it's a useful tool for you, great. If you find it burdensome and it just gets in your way, find something more comfortable for you. The end goal is quality software that meets user needs, not to strictly adhere to some pure methodology. I always get my feathers a little ruffled when people preach one method or language as gospel. In almost every case, I can find a different method, pattern or language that I can complete the equivalent task in, meeting the same quality and elegance standards.
(Not saying you were doing that Halibut, I'm just soap boxing it up)
Halibut
No need to choose TDD (or regular unit testing if you don't want to write the tests first) as the only option for verifying your software. I've found I don't need as much testing if I choose a design or pattern that limits the chance of errors by introducing some constraint in the system. You can do lots of stuff with types (in some languages) and let the compiler do the work. There are a lot of tools out there.
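One hedged sketch of that "constraint instead of test" idea, with a made-up order workflow: encode the legal states in a type so a whole class of bugs (typo'd string states, illegal transitions) can't happen, and there's nothing left to unit-test for them.

```python
from dataclasses import dataclass, field
from enum import Enum

class OrderState(Enum):
    DRAFT = "draft"
    SUBMITTED = "submitted"
    SHIPPED = "shipped"

@dataclass
class Order:
    # The state is an enum member, not a free-form string, so an
    # invalid state is unrepresentable rather than merely untested-for.
    state: OrderState = OrderState.DRAFT

    def submit(self):
        # Transition rules live in one place; callers can't skip them.
        if self.state is not OrderState.DRAFT:
            raise ValueError("only draft orders can be submitted")
        self.state = OrderState.SUBMITTED
```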
I will say though, that you should come up with a solution for verifying that your code does what you intend it to do. No one is perfect, and you can't expect (or be expected) to perfectly translate a requirement (or idea, or whatever) into code the first time out. Especially on bigger projects.
GnomeTank
I'm quasi old school. I actually just test my code. And I do a ton of debugger verifying; it's rare for me to write a complex piece of code that I don't step through with the debugger thoroughly to watch the data flow. Back in the stone age of 1999, when I got into the industry, this is how we tested code *shakes his cane*.
Yeah I do use tests, none of this fancy automated testing for me. Most of my stuff is user based anyways so I guess it makes sense. Back when I started learning c++ I just threw couts all over the place to verify I was reaching places and my values were what I wanted.
I find automated/semi-automated tests (i.e. run Django's tests before committing new changes, etc.) most useful as regression tests to verify that new code doesn't break things that were previously working. As a project grows, it gets more and more difficult to tell whether a change you made is going to affect something you don't expect. That's especially true if you inherit someone else's code or, as in the case of my other dev, are working with new frameworks and APIs whose ins and outs you just don't know very well. You can run the tests, and if one suddenly doesn't pass, you know something isn't going to work and you need to figure out what it is, whether it matters, and what needs to change to allow both it and your new code to do what they need to do.
I guess the argument against that is this: manual testing is slow and monotonous, and you're not always going to remember to go back and retest everything every time you make a change, even if you know exactly what the change impacts. Take, for instance, the project I'm doing right now, a weekly workout scheduler for a small gym. Right now I have about 139 tests, which include model and controller unit tests and integration tests that drive the user flows. It takes around 27 seconds to run all of them (which is faster than I could manually end-to-end test one flow), and it takes less time to add a new test than it does to set up debugging or other methods of manually inspecting the objects. I've got a gem (guard-rspec) that runs the relevant tests every time I make a change (so it runs the integration tests when I change a view, the model tests when I change a model, etc.) and sends me an alert with the results, so I don't even need to manually kick them off or pay much attention to them. If more than one test breaks when I do something, I've got a good idea that I did the wrong thing.
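The core of what guard-rspec automates can be sketched in a few lines of Python (this is an illustrative polling watcher, not how the gem actually works — Guard uses OS file-change notifications and maps changed files to just their relevant specs):

```python
import subprocess
import time
from pathlib import Path

def changed_paths(snapshot, paths_with_mtimes):
    """Return the paths whose mtime differs from the snapshot,
    updating the snapshot in place."""
    changed = []
    for path, mtime in paths_with_mtimes:
        if snapshot.get(path) != mtime:
            snapshot[path] = mtime
            changed.append(path)
    return changed

def watch_and_test(root=".", interval=1.0):
    # Poll the tree; whenever a source file changes, rerun the suite.
    snapshot = {}
    while True:
        files = [(p, p.stat().st_mtime) for p in Path(root).glob("**/*.py")]
        if changed_paths(snapshot, files):
            subprocess.run(["python", "-m", "unittest", "discover"])
        time.sleep(interval)
```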
Foxpro?
I'm currently writing code (in pickBASIC) to generate PDFs. So there is my special hell for the day
Nintendo ID: Incindium
PSN: IncindiumX
So we have a templated unit test framework we developed ad-hoc while writing some heavily templated code, and it is pretty amazing. We can write up a series of templated unit tests, and then dispatch a collection of objects to test with them by changing the tag we use when running the test.
The benefit is that when we add a new numeric class in this example, we can see exactly which tests it starts failing, fairly automatically.
Real world examples that are more interesting than pagination or a shopping cart.
A discussion on how developers actually approach writing their tests (especially when writing the initial tests for a project/module).
Hnnnggg I don't wanna be trained in VB.NET, I wanna learn C#.
I love how Vanilla decided to include a in your code. Fitting.
Right about the time I was typing that line, my face did it too.
e: Updated the code, added a regex custom validator and some messages to the remote calls.