Perhaps he means declaring them as global variables?
Seems kind of pointless. Either you're going to have to create and destroy them anyway, or they cost no overhead whatsoever since C++ isn't memory-managed. All adding a local variable to a function does is increase the stack pointer offset for that function.
Globals are the opposite of what I would call temporary.
The post is pretty hard for me to decipher this late.
Clarification would help, but I think that's exactly the point: instead of allocating and freeing memory for variables as needed, you make them global so they're already instantiated and ready to use, and you save some CPU cycles. In my experience, though, on an embedded system memory is a more valuable resource than the processor, and I'd imagine your memory footprint would go through the roof if you implemented this everywhere. I'd also question how much you'd actually save; there have got to be better optimizations that you should implement first.
Well another thing is that on embedded platforms, malloc is usually limited. I do embedded work and the basic malloc pool we have is I think about 512kB for the entire system. There are other memory pools for buffers, but it's better to declare something statically instead of mallocing it, unless the system can cope with the allocation failing and/or it's only alive for short periods of time.
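To make that trade-off concrete, here's a tiny sketch (buffer names and sizes are made up, not from any real system): the static buffer is reserved up front and can never fail at runtime, while the malloc'd fallback comes out of the small shared pool and has to cope with exhaustion.

#include <cstdlib>
#include <cstring>

// Reserved at link time: always there, can never fail, costs a fixed chunk of RAM.
static unsigned char rx_buffer[512];

bool process_frame(const unsigned char* frame, std::size_t frame_len) {
    if (frame_len <= sizeof(rx_buffer)) {
        // Small frames fit in the static buffer: no allocation, nothing to fail.
        std::memcpy(rx_buffer, frame, frame_len);
        return true;
    }
    // Otherwise fall back to the shared malloc pool -- this can fail, so the
    // caller has to be able to cope with a false return.
    unsigned char* scratch = static_cast<unsigned char*>(std::malloc(frame_len));
    if (scratch == NULL) {
        return false;
    }
    std::memcpy(scratch, frame, frame_len);
    // ... process the frame ...
    std::free(scratch);
    return true;
}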
So: name-based UUIDs. Any idea how to generate them in C++? I know how to do it in C#/Java with the built-in classes, but I'd like to do it in C++, and I'd rather not introduce yet another library to my app. I think there's an open-source one, like OpenSSL or something, right? I'm looking for platform-independent if possible.
Anyone have any insight or know how to do it? The RFC documents are like Latin to me.
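For what it's worth, the name-based variants in RFC 4122 boil down to "hash a namespace UUID plus the name, then stamp the version and variant bits onto the digest" (version 3 uses MD5, version 5 uses SHA-1). Since you mentioned OpenSSL, here's a rough, untested version-5 sketch that leans on its SHA1() (link with -lcrypto); the namespace constant is the standard DNS namespace from the RFC, but the name and function names are just examples.

#include <openssl/sha.h>

#include <cstdio>
#include <cstring>
#include <string>
#include <vector>

struct Uuid { unsigned char bytes[16]; };

// Version-5 (name-based, SHA-1) UUID per RFC 4122: SHA-1 over namespace||name,
// keep the first 16 bytes of the digest, then fix up the version/variant fields.
Uuid uuid5(const Uuid& ns, const std::string& name) {
    std::vector<unsigned char> data(ns.bytes, ns.bytes + 16);
    data.insert(data.end(), name.begin(), name.end());

    unsigned char digest[SHA_DIGEST_LENGTH];        // 20 bytes
    SHA1(&data[0], data.size(), digest);

    Uuid out;
    std::memcpy(out.bytes, digest, 16);
    out.bytes[6] = (out.bytes[6] & 0x0F) | 0x50;    // version 5
    out.bytes[8] = (out.bytes[8] & 0x3F) | 0x80;    // RFC 4122 variant
    return out;
}

std::string to_string(const Uuid& u) {
    char buf[37];
    std::sprintf(buf,
        "%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-%02x%02x%02x%02x%02x%02x",
        u.bytes[0], u.bytes[1], u.bytes[2],  u.bytes[3],
        u.bytes[4], u.bytes[5], u.bytes[6],  u.bytes[7],
        u.bytes[8], u.bytes[9], u.bytes[10], u.bytes[11],
        u.bytes[12], u.bytes[13], u.bytes[14], u.bytes[15]);
    return buf;
}

int main() {
    // The standard DNS namespace UUID from RFC 4122, appendix C.
    const unsigned char dns[16] = {0x6b, 0xa7, 0xb8, 0x10, 0x9d, 0xad, 0x11, 0xd1,
                                   0x80, 0xb4, 0x00, 0xc0, 0x4f, 0xd4, 0x30, 0xc8};
    Uuid ns;
    std::memcpy(ns.bytes, dns, 16);
    std::printf("%s\n", to_string(uuid5(ns, "www.example.org")).c_str());
    return 0;
}

If you ever change your mind about pulling in another library, I believe Boost's uuid library has a name generator that does essentially the same thing.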
The last time I did embedded programming I don't think we even had a working malloc.
It was really okay though, since usually the stack was good enough if we didn't want to make it permanently stored in memory.
We didn't use C++ though, just regular C.
No reason to use C++ on embedded systems usually; too much overhead for what amounts to code fluff, in my experience anyway. It was always touted that C is the de facto language for embedded devices. Unless Sun paid someone off to get Java on there.
Perhaps he means declaring them as global variables?
Seems kind of pointless. Either you're going to have to create and destroy them anyway, or they cost no overhead whatsoever since C++ isn't memory-managed. All adding a local variable to a function does is increase the stack pointer offset for that function.
If you're using C++ and need a global variable, please consider using the Singleton pattern. Same thing, but with object-oriented results.
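The usual minimal way to spell that in C++ is a function-local static ("Meyers") singleton, something like the sketch below; Config is just a made-up example class.

#include <string>

class Config {
public:
    // The one instance lives in a function-local static: constructed the first
    // time instance() is called, shared by every caller after that.
    static Config& instance() {
        static Config cfg;
        return cfg;
    }

    std::string log_path;

private:
    Config() : log_path("/tmp/app.log") {}    // made-up default
    Config(const Config&);                    // copying disabled (never defined)
    Config& operator=(const Config&);
};

// Usage: Config::instance().log_path = "/var/log/app.log";

Functionally it's still a global, just with construction on first use, so the usual caveats about globals apply.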
The guy who ran the course told me that he'd gotten the idea from a SE lecturer at another university, who ran a similar course but would grade at random intervals throughout the year, giving students 24 hours notice that their deliverable was due. Apparently this was deemed a little too harsh, though I can see the merit if it encourages students to actually use an agile process.
Heh, that's more real-world than you'd think. If the professor would stop by your dorm every 30 minutes and demand a status report, you might have a good experience of drive-by management as well :-).
The problem with singletons is that in the places where you'd need them they're almost completely unnecessary, and the people usually doing that "lol, open this file and dump some data" stuff in them are terrible coders too. Which makes the code that much harder to read.
The last time I did embedded programming I don't think we even had a working malloc.
It was really okay though, since usually the stack was good enough if we didn't want to make it permanently stored in memory.
We didn't use C++ though, just regular C.
No reason to use C++ on embedded systems usually; too much overhead for what amounts to code fluff, in my experience anyway. It was always touted that C is the de facto language for embedded devices. Unless Sun paid someone off to get Java on there.
It depends what you mean by "embedded device". Are you programming modern (strong) ARMs on the Droid where there's a Java VM? Yeah, that's not so much an embedded device as a slightly underpowered computer.
Are you working with PIC processors and microcontrollers? C and ASM are probably your doom, then.
As for C++ overhead - it's been greatly overblown how much C++ bloats the generated machine code. Yes, you have more context switches that end up throwing everything on and off the stack, but unless you're programming PICs and are extremely memory-constrained (like 64k), it just doesn't matter *in reality*.
Basically, unless you're in an EE program, taking a course entitled something along the lines of "Microcontrollers and blahblahblah", and have had to make your first NAND gate by hand, you probably have enough horsepower to not even notice the difference between compiled C and C++.
The only time in industry that I've been stuck there (on PICs) was a contract for a home security system firm, and that was 10 years ago. Reasonably powerful x86 (and now ARM - there is an incredible shift going on in processor technology due to handhelds) chips are so damn cheap that PICs are falling by the wayside.
The problem with singletons is that in the places where you'd need them they're almost completely unnecessary, and the people usually doing that "lol, open this file and dump some data" stuff in them are terrible coders too. Which makes the code that much harder to read.
Then the problem is the coder, not the design pattern. Your architect should be looking for crap like that and gently correcting it with a sledgehammer.
Tell me about it. Singletons still add way too much fluff for what amounts to a data dumper anyway, in my example.
Sure, you can have a singleton that calls a class that calls a method that dumps that data. Or you can just transpose all that code into main. Meh.
I've seen worse though. So has templewulf.
Sorry, temporary as in you need a std::string object in a function. So the guy pushing the change wants that declared in the class instead of just using a one-off. It seems he thinks that initializing some number of objects in the class constructor will avoid a resource hit compared to doing it at the point of use.
It seems like the extra memory taken up, plus the copy constructors still creating and destroying objects, will make it all moot.
My previous post was vague because the person in question or some of his proteges may read this post and trace it back. Unpleasantness will follow.
The embedded system in question is a full blown PC running an RTOS kernel.
That is the level of optimization where, unless you're pretty much done and have profiling results, you don't bother. Unless he can show that it makes a smidgen of difference (and it probably won't), there is no reason to complicate things like that; you lose some of the correctness and conciseness with the enlarged scope of the variable.
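To make the two styles concrete, here's roughly the comparison being argued about (Widget and format are made-up names, not from the actual codebase). The "preallocated member" version trades an allocation a profiler probably can't even see for a wider-scoped, stateful variable that has to be reset on every call and occupies memory for the whole life of the object.

#include <string>

// Style A: plain local temporary, scoped to the one function that needs it.
class WidgetLocal {
public:
    std::string format(const std::string& name) {
        std::string tmp;          // constructed here, destroyed at return
        tmp = "value: " + name;
        return tmp;
    }
};

// Style B: the "declare it in the class so it's already there" approach.
class WidgetMember {
public:
    std::string format(const std::string& name) {
        tmp_.clear();             // have to remember to clear last call's leftovers
        tmp_ = "value: " + name;
        return tmp_;              // returning still copies, so little is actually saved
    }
private:
    std::string tmp_;             // takes up memory for as long as the object lives
};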
Can someone help me with a little Obj-C -> C translation?
How do I write
for (NSObject *object in nsArrayInstance)
{
//Do something to *object
}
where nsArrayInstance is instead
NSObject *objects[5];
Maybe it's not even possible... but basically I'd like to store an ObjC type in an array as stated and enumerate through it.
well... i'm not really up on obj-c, but if you have an array of struct (which is (excluding data access levels) all an object is, with some void(*)'s cast to functions in there for methods), you should be able to iterate over it just like an array of that struct. but i'm not sure that i understand your problem enough.
From how I understand it, I guess he's trying to lower his footprint a little bit by using a standard array of objects rather than an OOP solution to an array.
The trickiest part is getting the size of the array, which is especially hard when dealing with pointers. Personally, for the sake of my sanity, I'd just do something like:
const int ARRAYSIZE = 5;
NSObject *objects[ARRAYSIZE];
for(int i = 0; i < ARRAYSIZE; i++)
{
//do stuff with *objects[i];
}
That's the only way I can think of to do that. Not sure if it'll work in obj-C, but I know it works on primitive types.
Yeah that's an option too. Ultimately that's less appealing than just using the array object. Too much code in this app to start slinging around constants that are likely to change throughout the life of the app.
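One aside on the size question: as long as the name still has array type (i.e. it hasn't been passed to a function and decayed to a pointer), the usual sizeof trick gives you the element count for free. A tiny illustration, using a stub type so the snippet builds as plain C++ outside of Obj-C:

#include <cstdio>

struct NSObjectStub {};                    // stand-in for NSObject so this compiles as C++

void takes_pointer(NSObjectStub** objects) {
    // Inside a function the parameter is just a pointer, so the count is gone:
    // sizeof(objects) is the size of a pointer, not of the whole array.
    std::printf("sizeof inside function: %u\n", (unsigned)sizeof(objects));
}

int main() {
    NSObjectStub* objects[5] = { 0 };      // same shape as `NSObject *objects[5];`
    const unsigned count = (unsigned)(sizeof(objects) / sizeof(objects[0]));   // 5
    std::printf("count = %u\n", count);
    takes_pointer(objects);
    return 0;
}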
No reason to use C++ on embedded systems usually; too much overhead for what amounts to code fluff, in my experience anyway. It was always touted that C is the de facto language for embedded devices. Unless Sun paid someone off to get Java on there.
What if your platform has a really shitty compiler, though? Then it might be better to port a VM over so you don't have to deal with it that often. I don't think Sun paid many people to implement the multitude of embedded Java VMs out there; they're there simply because it's a good VM spec without many strings attached (maybe not anymore, now that Oracle is getting sue-happy, though).
The only time in industry that I've been stuck there (on PICs) was a contract for a home security system firm, and that was 10 years ago. Reasonably powerful x86 (and now ARM - there is an incredible shift going on in processor technology due to handhelds) chips are so damn cheap that PICs are falling by the wayside.
This isn't really the case in my experience. 8-bit microcontrollers are still powerful enough for many embedded applications and much cheaper and easier to develop with than 32-bit procs. Obviously the choice is application specific, but I worked on 3 projects last year that used 8-bit uCs.
also using x86 on embedded devices is so dumb
so dumb
By "embedded devices" we usually mean something other than the mini computers we cart around in all sorts of devices now.
Those devices have grown beyond "embedded devices." There are still a lot of embedded devices that are at the same level as they've been for years. Just because we have phones with java doesn't mean those all went away.
No reason to use C++ on embedded systems usually; too much overhead for what amounts to code fluff, in my experience anyway. It was always touted that C is the de facto language for embedded devices. Unless Sun paid someone off to get Java on there.
What if your platform has a really shitty compiler, though? Then it might be better to port a VM over so you don't have to deal with it that often. I don't think Sun paid many people to implement the multitude of embedded Java VMs out there; they're there simply because it's a good VM spec without many strings attached (maybe not anymore, now that Oracle is getting sue-happy, though).
from what i understand, the licensing issue with the JVM has to do with the full-blown implementation that loads every class imaginable at startup versus the stripped-down versions that are usable in embedded systems and only load what's needed. The first is free (as in beer), and the second is not, and is what Oracle is suing google over.
watch google create their own runtime, slightly alter the language and call it something different. i guarantee that google isn't going to be shaken down by Larry and co.
The only time in industry that I've been stuck there (on PICs) was a contract for a home security system firm, and that was 10 years ago. Reasonably powerful x86 (and now ARM - there is an incredible shift going on in processor technology due to handhelds) chips are so damn cheap that PICs are falling by the wayside.
This isn't really the case in my experience. 8-bit microcontrollers are still powerful enough for many embedded applications and much cheaper and easier to develop with than 32-bit procs. Obviously the choice is application specific, but I worked on 3 projects last year that used 8-bit uCs.
also using x86 on embedded devices is so dumb
so dumb
the answer to that is a gigantic "it depends on your application". uCs still have their place, and are still cheaper than the low-end ARMs and x86 boards (think VIA's pico boards), but it depends on what you're doing to decide the platform.
If I was getting paid I'd develop my own compiler. Because fuck non-standard compilers.
oh christ, you don't want to develop your own compiler. really, just use gcc with all the ANSI warnings on if you want true standards compliance. Intel's C compiler is very optimized for their chipsets, if you're looking for performance.
but you don't want to write your own compiler... because... ewwww.
By "embedded devices" we usually mean something other than the mini computers we cart around in all sorts of devices now.
Those devices have grown beyond "embedded devices." There are still a lot of embedded devices that are at the same level as they've been for years. Just because we have phones with java doesn't mean those all went away.
but i wouldn't call it a growth area, either...
the right tool for the right job. if all you need is a cheap little uC to do simple tasks as cheaply as possible, then that's what you use.
I've got a project that's starting to feel too spaghetti code-ish, so it seems like a good idea to organize everything with diagrams.
ugg, i don't really have a good opinion of UML, so I won't recommend any software for it, although if you want to pay IBM a shitton of money, they'll gladly sell you Rational Rose.
I'd go for some "mind mapping" software... plenty of open source out there.
watch google create their own runtime, slightly alter the language and call it something different. i guarantee that google isn't going to be shaken down by Larry and co.
Google aren't being sued over Java compliance; they're being done over patents on VM tech. They've already implemented their own VM, so even if they were to start using Not-Java, Oracle would still be pursuing them for the patents.
I like UMLet - http://www.umlet.com/
It is highly idiosyncratic, but it does what I need it to do in terms of class diagrams and the like. It has a very command-line feel to it despite also being drag and drop.
Alright folks, question for the hive mind here. I've got a few PDF files that I'd like to combine into a single file. Currently, I do this manually with either Adobe 8 Professional, or some software called PDFCreator, depending on which computer I'm using. These PDFs are part of a large report process that I've mostly automated using Excel/Access and VBA.
Ideally, if possible, I'd like to have some code in an Excel macro (or code from an Access DB; either works) that will take three existing PDFs and simply combine/append them into one. When I've asked others or googled, it seems like this is usually accomplished with some third-party software package, but I'd prefer this not to cost anything since it's just automating for convenience.
Any ideas/assistance would be greatly appreciated.
There's a PDF library for Python that can merge and split PDFs and stuff. I'm not sure if you can call Python scripts from a VB script (you probably can somehow), but at the very least, if you're always combining them the same way, automating it with Python should be pretty simple.