Hey testing center, thanks for that super informative email you sent before I travelled all the way to your testing site only to find it closed due to snow.
Who am I kidding, you guys know there was no email.
I'd make a separate controller for the barcode stuff, make _that_ controller track whether it's loaded the library / set the hooks / whatever, so it only happens once, and have each controller that needs the barcode stuff call
Alternative 2: have itemScan be the only one that loads things, and have itemDetails pass something to itemScan to say "actually, if you get a scan, don't handle it yourself, pass it on to me instead".
The problem is that the scan action is completely different per application. So in app1, when I scan a barcode, it will need to check the item type, and if it's not SomeThing it needs to throw an error. If it is SomeThing, it will need to call the backend API to transition the workflow.
right, so that's why you'd pass in a different handler for the barcode getting scanned, line 2 of the code above; to complete that example:
If your controllers need to do different things depending on where they're getting used from, then you'll have to tell the controller where it's getting used from, or make multiple controllers.
@djmitchella This worked! Thank you so much! I had to learn how to reinitialize a controller within the setupController method but after that it worked as intended! Thanks!
I think that's just par for the course. I pulled up the first hit from google for a tutorial for drawing a triangle in directx and it really doesn't seem much better.
A lot of that gets abstracted away from you once you've built your core functionality.
You wouldn't keep setting that stuff up each time you wanted to draw a triangle, you'd just pass the vertices to a class and boom new triangle with all that backup code.
OGL is def easier than DX though.
not a doctor, not a lawyer, examples I use may not be fully researched so don't take out of context plz, don't @ me
That said, it's much appreciated, because today Ember is beating me up by somehow managing to create random broken versions of the global Application object if I run it in the super-elderly embedded version of webkit we're stuck with. That was exciting to diagnose -- our code looks like:
initializer: function() {
  // ... populate 'data' appropriately (it's static, so we can do it at initializer time) ...
  Application.bigBlobOfData = data;
}
...and then, elsewhere...
for (var i=0; i < Application.bigBlobOfData.items.length; i++) // roughly speaking
In Chrome/firefox/etc, great. In Webkit v.getOffMyLawnYouDarnedKids, Application.bigBlobOfData is valid the first time when it's getting written -- but it has mysteriously become null when I try and retrieve it later on.
So, time to change it to window.bigBlobOfData which doesn't get stomped on, and NEVER LOOK AT THAT CODE EVER AGAIN. This sort of thing does not make me optimistic that we'll meet our current schedule, but I guess we'll see.
mightyjongyo · Sour Crrm · East Bay, California · Registered User, regular
okay, so since a bunch of you know database queries like the back of your hands - is there a way to "translate" columns within a sqlite view? so if i have a table with columns like:
name (varchar) | location (varchar)
and another table like:
location (varchar) | id (int)
Would it be possible to create a table view such that I have:
name (varchar) | location (int)
??
From what I can see I probably want an inner join? although that would end up with something like
name (varchar) | location (varchar) | id (int)
...right? or am I completely misunderstanding joins.
We're also celebrating that @DyasAlure has forked and is now a parent process to a second child.
Yaaaaay Dyas!
Hey now, that fork happened in another development environment. I'm not sure how it migrated itself over to here, but thank you. I just finished my calc IV homework (Yes, with everything going on, I still have school) and my wife hasn't called me from the hospital. I'm taking that to mean that things are still improving.
Do the inner join then just select name and id as location?
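In SQLite that would look something like the following (table and column names here are guesses for illustration; the post doesn't name the tables, so `people` and `locations` are hypothetical):

```sql
-- Join the two tables on the shared varchar column, then expose the
-- numeric id under the name "location" so the view has the desired shape:
-- name (varchar) | location (int)
CREATE VIEW people_with_location_id AS
SELECT p.name,
       l.id AS location
FROM people AS p
INNER JOIN locations AS l
        ON p.location = l.location;
```

The join does produce all three columns internally; the SELECT list is what narrows the view down to just `name` and the aliased `id`.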
I accomplished today what would take someone else on my team at least a week to do if I told them how, and probably would never get it figured out at all if I didn't.
BALL IS IN YOUR COURT NOW, CLIENT.
Now we can actually make good on threats of timelines slipping when they don't make time to review the requirements or demos.
Hey everyone, a quick PSA from the guy who has to port all your code to 64-bit:
Stop casting pointers to int types! It was a happy accident that it worked on 32-bit systems, but now I've spent all morning typing "intptr_t" and "#include <inttypes.h>" and it's getting old pretty fast.
I think this might be the first time a thing of mine has made it into the buzzword soup of the thread title, and I'm proud that it might be part of the title the thread goes out on.
We should just drop int entirely and force int32/int64 naming conventions tbh.
Why? 32 bit systems are phasing out and will eventually be relics, and there's no other real reason to specify a 32 vs 64 bit integer.
Enforcing that kind of naming convention is just going to make programmers 20 years from now very angry that they're stuck with this stupid legacy convention from back when they had 32 bit processors.
Because platform incongruity is still a thing in 2015.
What does an int mean? Doesn't really mean anything, it's ambiguous. It can mean anywhere from a short to an N-byte monstrosity depending on the system and implementation.
Dropping "int" for int32 or int64 would be good in general because it makes people think about bytes more frequently than they do, and you avoid situations where void* is getting cast to int. "Well I know I need 4 bytes, so int it is!"
Not that it necessarily would have resolved this issue, but there have been a lot of times I've run into code with bounds issues because no one understands that 'int' isn't shorthand for int32; it's compiler- and platform-dependent.
void* and int are both 4 bytes and 8 bytes respectively on 32- and 64-bit systems, right? Unless the compiler is converting int to int32 for some reason?
Pointers are 8 bytes on 64-bit machines (think about it), 4 bytes on 32-bit machines.
Plain int is 4 bytes on both.
Long int is 4 bytes on 32-bit, but 8 bytes on 64-bit UNIX machines (Windows forces you to use explicit 64-bit types).
Ah I was under the assumption that 64bit compilers had moved int into the realm of 8 bytes now. Looks like this is not the case.
Are they doing it for antiquity still? Obviously recompiling a program with ints thrown around into 64 bit would immediately double its memory footprint eh?
Because platform incongruity is still a thing in 2015.
What does an int mean? Doesn't really mean anything, it's ambiguous. It can mean anywhere from a short to a N-byte monstrosity depending on the system and implementation.
Dropping "int" for int32 or int64 would be good in general because it makes people think about bytes more frequently than they do, and you avoid situations where void* is getting cast to int. "Well I know I need 4 bytes, so int it is!"
Not that it necessarily would have resolved this issue, but there's been a lot of times I've run into code that has issues with bounds because no one understands 'int' isn't shorthand for int32 and it's compiler/platform dependent.
Yes, of course it's still a thing in 2015, that's what we're discussing right now. When you are declaring an int, you should not be thinking "I need 4 bytes," you should be thinking "I need a number up to X in magnitude." There are basically three situations where a programmer needs to count bits in their variables.
1. They have something that might not fit in a smaller variable. Instead of counting them, just use the bigger one. They're cheap. It's fine.
2. They are counting bits because they need a specific size for a data transfer or storage. In this case, you're already thinking about how big it is, and as such if you're doing it right, you'll declare a size-specific variable like int32 anyway.
3. You're using it as a container for a bunch of flags, which is not what an int is for. The solution to this problem isn't to force EVERYONE to ALWAYS use int32, it's to implement a bit field if you super need to not use a struct or other data structure.
Also, in 20 years, people should still be only using an int32 if they only need a certain range of numbers.
This is exactly why it's a bad idea to enforce a convention like you're suggesting.
So you use int even though your variable only needs 2 bytes of data, i.e. the protocol design will only ever have a max value in that range, and if you need to change it, the entire system is changing?
(I deal with a lot of lower level protocols at the moment where they specify byte lengths)
A byte-length specified protocol is, like, the exact situation where, yes, you want to use size specific variables. But just because you see idiots who assume all ints are a certain size, that doesn't mean people who just want a number should have to explicitly specify exactly how large a memory footprint their number is going to take up.
And yes, I use ints even if my number is guaranteed to be between 1 and 10.
Eh, I'd rather take a minute to think about the kind of footprint the entire system is going to take up.
In my system if I was using ints where shorts were needed, the size could balloon up to 2-4+ gigs (this is bad). It's critically important to think about this stuff, maybe not all the time, but I think it's at least something that one should consider. Even if you only ever have 5 ints in your program.
It seems like it wouldn't really impact your workflow and you'd just have to consider what kind of int you want, and you seem salty about the concept of it? Maybe 15 years from now your program is compiled on a specialized system and works great except for occasionally all the data goes into negatives or wraps back to 0.
According to wiki, it's just the data model that UNIX-likes run on. I assume it's a decision lost in the mists of time.
None of this has anything to do with renaming int to int32 or int64. If the size of your variable is important to you, you use a variable of specific size. If it's not, then you eyeball it to make sure it'll fit for all reasonable values. I'm not salty about having to consider the size of variables, I do that all the time, and I actually quite enjoy the random byte-counting I have to do. But, I certainly don't want to be bothered with it when I'm just writing fizzbuzz.
Why? What would it impact? You would implicitly know you need an int32 wouldn't you?
It's extra cognitive load. How many bits I want in this float is irrelevant most of the time, and at that point it's just extra language for no reason.
Hmm
Just changed thread title.
But need to acknowledge djmitchella's contribution.
I know
Edit: There. Keep up the good work. =P
oh
http://www.directxtutorial.com/Lesson.aspx?lessonid=11-4-5
If you don't follow the steam thread, http://forums.penny-arcade.com/discussion/comment/32029659/#Comment_32029659 here is the announcement. I'm sure things are fine, but as a parent to a child object, you worry the fork didn't go well.
You guys just slap ints around even if you only really need a uint16?
Now I'm just getting stabby.