You are all spoiled if the majority of your information is sorted. We have to use MPI forms of distributed sorts if we need to sort data. Mostly we avoid it, since I/O is such a massive bottleneck.
Files are practically the only time I deal with unsorted information. Even then, what I grab from I/O doesn't really need to be sorted, just processed. I could sort it, but that's potentially useless.
bowen on
not a doctor, not a lawyer, examples I use may not be fully researched so don't take out of context plz, don't @ me
Hey, any POSIX gurus in the house? I've got a problem.
I've got this hardware driver, we'll call it The Driver. It puts data in a big chunk of memory and then has a function that gives my code a pointer into this big chunk of memory.
I've got this communication library. It gives me a pointer to a buffer where I put data, and then I can publish this data with a different function.
I'd really love it if that buffer from the driver and that buffer from the library were actually pointing at the same piece of memory so I don't need to waste time with memcpy() over and over. I feel like this should be possible with mmap() or something but I don't know where to start.
(No, I can't modify the driver or the library, and yes, I need to use them).
Since you don't get to tell either the driver or the library what the pointer should be, mmap won't help you. It sounds like the driver isn't doing things in quite the UNIX way either, since ideally it should just have been a file in /dev you'd read() or mmap() to get at said buffer. If it were, you could get the address from the library and use it as the destination for the read() call.
/* Enumerated type creates a set of constants numbered 0 and upward */
typedef enum { MODE_A, MODE_B, MODE_C, MODE_D, MODE_E } mode_t;
int switch3(int *p1, int *p2, mode_t action)
{
    int result = 0;
    switch (action) {
    case MODE_A:
        result = *p1;
        p1 = *p2;
        p2 = p1;
        *p2 = p1;
    case MODE_B:
        result = *p1 + *p2;
        *p2 = result;
        break;
    case MODE_C:
        *p2 = 15;
        result = *p1;
        break;
    case MODE_D:
        *p2 = *p1;
        result = 17;
        break;
    case MODE_E:
        result = 17;
        break;
    default:
        ;
    }
    return result;
}
That's my best guess. Looks a bit wobbly in A, but I'm assuming there's a bit left off in line 5.
A would be:
result = *p1;
*p1 = *p2;
Which makes me suspicious that this isn't even compiler-generated code; a compiler would be unlikely to do that, it would do:
edit: unless optimizations are turned off I suppose
Thanks for your help guys! I really appreciate it. I tried running/compiling it myself and I couldn't do so.
It would help if I had a machine with UNIX. I'm going to run this by the TA and see if it all checks out.
Thanks for your help guys! I really appreciate it. I tried running/compiling it myself and I couldn't do so.
It would help if I had a machine with UNIX. I'm going to run this by the TA and see if it all checks out.
Thanks for your help again!
Every now and then, I wonder what kind of code would generate that sort of function, and so far, I've come up with:
typedef enum enumToolType
{
    tool_PullNail_DoScrew,
    tool_UseNailAsScrew_WithRandomNumber,
    tool_RandomNumber,
} tool_t;
int swiss_army_knife( int *pNail, int *pScrew, tool_t toolType )
{
    int result = 0;
    switch( toolType )
    {
    case tool_PullNail_DoScrew:
        *pScrew = 15;
        result = *pNail;
        break;
    case tool_UseNailAsScrew_WithRandomNumber:
        *pScrew = *pNail;
        result = 17;
        break;
    case tool_RandomNumber:
        result = 17; /* Was randomly chosen: http://xkcd.com/221/ */
        break;
    default:
        break;
    }
    return result;
}
Hey, any POSIX gurus in the house? I've got a problem.
I've got this hardware driver, we'll call it The Driver. It puts data in a big chunk of memory and then has a function that gives my code a pointer into this big chunk of memory.
I've got this communication library. It gives me a pointer to a buffer where I put data, and then I can publish this data with a different function.
I'd really love it if that buffer from the driver and that buffer from the library were actually pointing at the same piece of memory so I don't need to waste time with memcpy() over and over. I feel like this should be possible with mmap() or something but I don't know where to start.
(No, I can't modify the driver or the library, and yes, I need to use them).
Buffers are a good thing, you'll want to continue copying memory.
What if performance is a concern? Then you really want to have as few copies as possible, or use something like DMA to have no copies at all.
I assume a DMA-like thing is what Daedalus wants, but it looks like since the buffers are accessed as pointers and not FDs or anything like that he's out of luck, mmap() wise.
Hey, any POSIX gurus in the house? I've got a problem.
I've got this hardware driver, we'll call it The Driver. It puts data in a big chunk of memory and then has a function that gives my code a pointer into this big chunk of memory.
I've got this communication library. It gives me a pointer to a buffer where I put data, and then I can publish this data with a different function.
I'd really love it if that buffer from the driver and that buffer from the library were actually pointing at the same piece of memory so I don't need to waste time with memcpy() over and over. I feel like this should be possible with mmap() or something but I don't know where to start.
(No, I can't modify the driver or the library, and yes, I need to use them).
Buffers are a good thing, you'll want to continue copying memory.
What if performance is a concern? Then you really want to have as few copies as possible, or use something like DMA to have no copies at all.
I assume a DMA-like thing is what Daedalus wants, but it looks like since the buffers are accessed as pointers and not FDs or anything like that he's out of luck, mmap() wise.
The point is they're buffers and need to be treated as such.
They are drivers and libraries; I'd really like to see the profiling and requirements here that justify a performance concern, if that is what it is.
Because it seems like overeager optimization that is completely out of line with the principle of "don't touch the black box." You don't want to know or care what those libraries are doing with their own buffers.
Thanks for your help guys! I really appreciate it. I tried running/compiling it myself and I couldn't do so.
It would help if I had a machine with UNIX. I'm going to run this by the TA and see if it all checks out.
If you'll be doing anything like this in the future, I strongly advise you to remedy this. Having local access to the *nix toolchains is incredibly helpful.
Linden on
Monkey Ball Warrior (Seattle, WA)
Thanks for your help guys! I really appreciate it. I tried running/compiling it myself and I couldn't do so.
It would help if I had a machine with UNIX. I'm going to run this by the TA and see if it all checks out.
If you'll be doing anything like this in the future, I strongly advise you to remedy this. Having local access to the *nix toolchains is incredibly helpful.
I have a linux partition, but in my win7 partition I also have a VirtualBox image of a linux install so I can poke around in a linux environment without rebooting out of windows. This is a remarkably effective solution. Though if you are messing with hardware drivers and stuff, obviously that won't work.
Monkey Ball Warrior on
"I resent the entire notion of a body as an ante and then raise you a generalized dissatisfaction with physicality itself" -- Tycho
Thanks for your help guys! I really appreciate it. I tried running/compiling it myself and I couldn't do so.
It would help if I had a machine with UNIX. I'm going to run this by the TA and see if it all checks out.
If you'll be doing anything like this in the future, I strongly advise you to remedy this. Having local access to the *nix toolchains is incredibly helpful.
I have a linux partition, but in my win7 partition I also have a VirtualBox image of a linux install so I can poke around in a linux environment without rebooting out of windows. This is a remarkably effective solution. Though if you are messing with hardware drivers and stuff, obviously that won't work.
If you just want to have a gcc toolchain with a relatively decent bash shell on Windows, two very common solutions are:
1.) Cygwin is a very comprehensive environment that comes with gcc and bash, plus ports of a lot of common *nix libraries. However, the "cost" of that is that any executable created by Cygwin must also have the Cygwin DLLs redistributed with it. Not a big deal in the long run.
2.) MinGW with MSYS. MinGW is the gcc compiler, ported to native Win32. MSYS is the bash shell environment. Very minimalist. Executables created by MinGW's gcc can run without additional DLLs* - but if you're porting, you're going to be looking at a lot of effort since MinGW/MSYS doesn't come with the same comprehensive ports as Cygwin does.
* On most modern systems. From memory, they were linked against the VC++ 6.0 C runtime? Pretty much every Windows install has that now.
Hey, any POSIX gurus in the house? I've got a problem.
I've got this hardware driver, we'll call it The Driver. It puts data in a big chunk of memory and then has a function that gives my code a pointer into this big chunk of memory.
I've got this communication library. It gives me a pointer to a buffer where I put data, and then I can publish this data with a different function.
I'd really love it if that buffer from the driver and that buffer from the library were actually pointing at the same piece of memory so I don't need to waste time with memcpy() over and over. I feel like this should be possible with mmap() or something but I don't know where to start.
(No, I can't modify the driver or the library, and yes, I need to use them).
Buffers are a good thing, you'll want to continue copying memory.
What if performance is a concern? Then you really want to have as few copies as possible, or use something like DMA to have no copies at all.
I assume a DMA-like thing is what Daedalus wants, but it looks like since the buffers are accessed as pointers and not FDs or anything like that he's out of luck, mmap() wise.
The point is they're buffers and need to be treated as such.
They are drivers and libraries, I'd really like to see the profiling and requirements here to justify a performance concern if that is what it is.
Because it seems like overeager optimization that is completely out of line with the principle of don't touch the blackbox. You don't want to know or care what those libraries are doing with their own buffers.
We sort of know what those libraries are doing, as they're written by other departments in the same company. And we didn't spend this money on all this Infiniband hardware to lose latency to having to do a hundred-megabyte memcpy before spitting data on the network. Current plan is to figure out which shmem segments correspond to these buffers under the hood and have some other process (that starts up earlier) set these to the same thing, or something.
If you're pushing 100 MB distributed over a network bus, I really doubt a main-memory-to-main-memory copy is your bottleneck?
This is a common issue when working on clusters which need to move massive amounts of data. The fact is that all known clusters, supercomputers, etc. are I/O bound. So the primary concern for people in the HPC area is developing better I/O algorithms and reducing the total amount of communication, since advancements in I/O technology aren't moving fast enough.
So the issue here is that the memcpy becomes a delay in trying to saturate the InfiniBand network, which means you waste tons of energy, CPU cycles, allocation time, and money on an unneeded copy. This delay becomes even clearer when working with MPI blocking calls.
Now, if you're on an SGI box you might be able to use SHMEM to share the memory references. But why don't you just pass the pointer the buffer function gives you to the driver, or is it a const pointer?
...(Duff's Device) may also interfere with pipelining and branch prediction on some architectures... When numerous instances of Duff's device were removed from the XFree86 Server in version 4.0, there was an improvement in performance.
When I look at what was done there, I can see how it would have been awesome back before branch predictors were any good.
But anyway, the take-home message seems to be that, on mainstream desktop architectures, memcpy is optimized out the wazoo and one is unlikely to do better without breaking out some assembly. But at the same time, rewriting standard library stuff that a profiler complains about, to be more specific to the task at hand or target machine, may in fact be totally worth it.
Monkey Ball Warrior on
"I resent the entire notion of a body as an ante and then raise you a generalized dissatisfaction with physicality itself" -- Tycho
The point is they're buffers and need to be treated as such.
They are drivers and libraries, I'd really like to see the profiling and requirements here to justify a performance concern if that is what it is.
Because it seems like overeager optimization that is completely out of line with the principle of don't touch the blackbox. You don't want to know or care what those libraries are doing with their own buffers.
We sort of know what those libraries are doing, as they're written by other departments in the same company. And we didn't spend this money on all this Infiniband hardware to lose latency to having to do a hundred-megabyte memcpy before spitting data on the network. Current plan is to figure out which shmem segments correspond to these buffers under the hood and have some other process (that starts up earlier) set these to the same thing, or something.
I'm tending to agree with Infidel here - once you start poking around like that, you'll need to start worrying about crazy factors.
For example, are either of the buffers given by the driver and library actually *in* system RAM, as opposed to dedicated RAM on various pieces of hardware that has just been memory-mapped for ease of access? If either of them points to hardware memory, someone somewhere is going to need to do a memcpy().
Even if both of them are in system RAM, you'll need to worry about whether either the driver or the library uses hardware DMA of some kind, and if they do, you'll need to mark the buffers as non-pageable.
Does either component double buffer? Do they do some trickery behind the scenes where the same virtual address is swapped out to more than one physical address? If so, how does that affect the other component if you give them the virtual address?
What if the communications library is delayed for some reason - how will the driver know not to overwrite what it thinks is its own buffer?
Since they're written in-company, the best solution (honest!) is to ask the guys who wrote it if you can get an API which explicitly lets you specify the buffer they should write to/read from, instead of trying to second guess them.
There may be no good reason why they each allocate their own buffer - or there may be some very good hardware based reason. Really, just ask the guys who developed it!
...(Duff's Device) may also interfere with pipelining and branch prediction on some architectures... When numerous instances of Duff's device were removed from the XFree86 Server in version 4.0, there was an improvement in performance.
When I look at what was done there, I can see how it would have been awesome back before branch predictors were any good.
But anyway, the take-home message seems to be that, on mainstream desktop architectures, memcpy is optimized out the wazoo and one is unlikely to do better without breaking out some assembly. But at the same time, rewriting standard library stuff that a profiler complains about, to be more specific to the task at hand or target machine, may in fact be totally worth it.
This reminds me of the debate I saw on one of my company's mailers once when folks realized that memcpy() was optimized on our platforms, but bcopy() was not. (Or vice versa.) That was fun.
Since they're written in-company, the best solution (honest!) is to ask the guys who wrote it if you can get an API which explicitly lets you specify the buffer they should write to/read from, instead of trying to second guess them.
There may be no good reason why they each allocate their own buffer - or there may be some very good hardware based reason. Really, just ask the guys who developed it!
This was the first approach, and we were told there was no budget for this, so instead we're hacking around it. The driver needs to have the memory in a specific place; the comm library really doesn't. I don't know why it provides a pointer rather than acting like every other networking library ever and taking a pointer as an argument, but it isn't strictly necessary. I was wondering if there was an easy mmap-based hack to get this working; there isn't, so we're doing something slightly more complicated.
By the way, if you're operating under the assumption that memcpy is heavily optimized everywhere, try measuring the performance of the GNU glibc memcpy when you're running on Cell. It's shockingly bad.
Anyone have a good site to learn ML? Because the notes I have are confusing as shit and I'm not making any progress. I'm still stuck at the very beginning.
Basically, I have to create a set without using lists, in ML. I have to create my own abstract data types and constructors. So I have the following to work off:
signature SetADT = sig
type ''a set
(* constructor *)
val newSet : unit -> ''a set
(* predicates *)
val isEmpty : ''a set -> bool
val isMember : ''a * ''a set -> bool
val equals : ''a set * ''a set -> bool
(* operators *)
val add : ''a * ''a set -> ''a set
val remove : ''a * ''a set -> ''a set
val intersect : ''a set * ''a set -> ''a set
val union : ''a set * ''a set -> ''a set
val diff : ''a set * ''a set -> ''a set
val card : ''a set -> int
val explode : ''a set -> ''a list
end;
I understand that since that's the signature file, it's the interface and I need to create the implementation. So I have the following in another file:
structure SetADT :> SetADT = struct
abstype ''a set =
And that's about where I'm stuck. Do I first have to define in another file what "set" is? Or do I just add a line that's like
type set = stack
???
By the way, if you're operating under the assumption that memcpy is heavily optimized everywhere, try measuring the performance of the GNU glibc memcpy when you're running on Cell. It's shockingly bad.
Yeah, that's the problem. I don't know of any guide that says "oh yeah, these memcpy()s are good and these are terrible", so I generally just assume it isn't optimized.
Language: Ruby
Framework: Rails
Purpose: Developing web applications
Ruby on Rails (RoR or often just called 'Rails') is a web application framework with a practical slant. While most frameworks present themselves as a sort of toolbox, Rails goes a step further by favoring convention over configuration. Instead of configuring how the tools interact with each other yourself, Rails infers what you mean to do from a few naming conventions in your class, method, table and path names. If it gets in the way, you can always define what name it should look for instead yourself.
Rails uses the model-view-controller (MVC) architectural pattern to separate the concerns in your code. On the controller side, it favors RESTful-style URL-to-method coupling. On the model side, it provides an object-oriented representation of your database tables. For the views, it provides a templating engine called ERB (I prefer HAML though).
One of the best things about Rails is the developer community. A lot of Rails developers blog about their experiences or post their problems on Stack Overflow. There also is a sort of package manager/repository for Ruby libraries called RubyGems that helps you install, update and resolve dependencies. For configuring what gems you use in your Rails project, you should use Bundler (which is baked into Rails 3). Most gems can be found on GitHub for easy forking.
I can heartily recommend Rails to everyone looking for an easy to use web application framework. It's as easy as "sudo apt-get install rails && rails new ~/myproject".
I'm looking for a good 3D game framework to futz around with... that can run on the mac.
A lot of the good options I'm finding are either too low level (direct OpenGL/C) for the amount of time I have, or so abstract that I'm not really learning anything (Unity).
I've been searching around the Python community for a good 3D engine, but they're all either stuck in dependency hell, have no explicit OS X support, or are too primitive and poorly documented to be worth my time.
Ideally what I want is something very close to XNA but usable on OS X.
The only serious 3D engine for Python that I know of is Panda3D. Never actually used it myself though. If you want to be on the cutting edge you should look into WebGL; I'm interested in a framework/utility library for that myself. Using custom HTML elements as a sort of data-definition language for 3D scenes would be nifty.
I'm looking for a good 3D game framework to futz around with... that can run on the mac.
A lot of the good options I'm finding are either too low level (direct OpenGL/C) for the amount of time I have, or so abstract that I'm not really learning anything (Unity).
I've been searching around the Python community for a good 3D engine, but they're all either stuck in dependency hell, have no explicit OS X support, or are too primitive and poorly documented to be worth my time.
Ideally what I want is something very close to XNA but usable on OS X.
Pygame is on my list to have a play with one of these days. I'm assuming you've checked it out; which end of the scale does it fall on? On a related note, the 3D screenshots I've seen from it were mediocre. Is that more likely a lack of art talent on the devs' part, or can pygame just not do very high-res 3D models?
Language: Ruby
Framework: Rails
Purpose: Developing web applications
Ruby on Rails (RoR or often just called 'Rails') is a web application framework with a practical slant. While most frameworks present themselves as a sort of toolbox, Rails goes a step further by favoring convention over configuration. Instead of configuring how the tools interact with each other yourself, Rails infers what you mean to do from a few naming conventions in your class, method, table and path names. If it gets in the way, you can always define what name it should look for instead yourself.
Rails uses the model-view-controller (MVC) architectural pattern to separate the concerns in your code. On the controller side, it favors RESTful-style URL-to-method coupling. On the model side, it provides an object-oriented representation of your database tables. For the views, it provides a templating engine called ERB (I prefer HAML though).
One of the best things about Rails is the developer community. A lot of Rails developers blog about their experiences or post their problems on Stack Overflow. There also is a sort of package manager/repository for Ruby libraries called RubyGems that helps you install, update and resolve dependencies. For configuring what gems you use in your Rails project, you should use Bundler (which is baked into Rails 3). Most gems can be found on GitHub for easy forking.
I can heartily recommend Rails to everyone looking for an easy to use web application framework. It's as easy as "sudo apt-get install rails && rails new ~/myproject".
Rails is also the driving force behind Microsoft's ASP.NET MVC. MS lifted a lot from Rails.
Is this guy serious? Wish I could work long and hard hours and get paid (probably) shit! Oh wait, I don't, because I deliver software faster, so I meet my deadlines because I don't reinvent the wheel every other Thursday.
What a dingus. He probably thinks the same thing about boost or cocoa, or (insert framework here).
The Anonymous
Is this guy serious? Wish I could work long and hard hours and get paid (probably) shit! Oh wait, I don't, because I deliver software faster, so I meet my deadlines because I don't reinvent the wheel every other Thursday.
What a dingus. He probably thinks the same thing about boost or cocoa, or (insert framework here).
I now believe the company is just doing this crap for PR. First it was the crazy "We can't find any good programmers" post on reddit, and now this CEO drivel.
Read his other post. The guy wants people who work like 50-hour work weeks. Judging by his staff, it's not surprising that he can get away with this. Looks like recent grads, or H1B workers you can pretty much force to work 15-hour days or else, deported, olol.
I mean if that picture of the dude with the laptop is their actual work environment, wow. Dude keeps skirting around when commenters ask what he's paying these people for 50-hour work weeks.
Edit: Ethea, yeah that's what I'm thinking too.
Yeah, from their pictures... wow that's one shitty office to be looking for top programmers!
"every year we take the company overseas for a month (on your own dime, sorry) and work incredibly hard while having a ton of fun."
Working long, hard hours with all the expenses of a vacation! (Not to mention work visas?)
"Everyone helps with tech support, schmoozing at swank parties, hosting events, coming up with new and ever-more-ridiculous marketing stunts, etc. And if you code, you'll code everything: you might do mobile one day, front-end design, back-end optimization, low-level debugging, the works."
Specialization is for chumps, everybody can switch from web development to UI design to assembly instantly.
We sort of know what those libraries are doing, as they're written by other departments in the same company. And we didn't spend this money on all this Infiniband hardware to lose latency to having to do a hundred-megabyte memcpy before spitting data on the network. Current plan is to figure out which shmem segments correspond to these buffers under the hood and have some other process (that starts up earlier) set these to the same thing, or something.
This smells like "don't measure, cut randomly" more than "measure twice, cut once".
This is a common issue when working on clusters which need to move massive amounts of data. The fact is that all known clusters, supercomputers, etc. are I/O bound. So the primary concern for people in the HPC area is developing better I/O algorithms and reducing the total amount of communication, since advancements in I/O technology aren't moving fast enough.
So the issue here is that the memcpy becomes a delay in trying to saturate the InfiniBand network, which means you waste tons of energy, CPU cycles, allocation time, and money on an unneeded copy. This delay becomes even clearer when working with MPI blocking calls.
Now, if you're on an SGI box you might be able to use SHMEM to share the memory references. But why don't you just pass the pointer the buffer function gives you to the driver, or is it a const pointer?
Because that's not just going to go away.
And I still haven't heard any word of profiling.
What more, exactly, are you expecting from this thread?
First I'm like, huh, what is memcpy's performance like anyway.
Then I came across this, then this disturbing thing. I was particularly amused by
When I look at what was done there, I can see how it would have been awesome back before branch predictors were any good.
But anyway, the take-home message seems to be that, on mainstream desktop architectures, memcpy is optimized out the wazoo and one is unlikely to do better without breaking out some assembly. But at the same time, rewriting standard library stuff that a profiler complains about, to be more specific to the task at hand or target machine, may in fact be totally worth it.
I'm tending to agree with Infidel here - once you start poking around like that, you'll need to start worrying about crazy factors.
For example, are either of the buffers given by the driver and library are actually *in* system RAM, as opposed to dedicated RAM on various pieces of hardware that have just been memory mapped for ease of access? If either of them point to hardware memory, someone somewhere is going to need to do a memcpy().
Even if both of them are in system RAM, then you'll need to worry about if either the driver or the library use hardware DMA of some kind, and if they do, you'll need to mark the buffers as non-pageable.
Do either components double buffer? Do some trickery behind the scenes where the same virtual address is swapped out to more than one physical address? If so, how does that affect the other component if you give them the virtual address?
What if the communications library is delayed for some reason - how will the driver know not to overwrite what it thinks is its own buffer?
Since they're written in-company, the best solution (honest!) is to ask the guys who wrote them if you can get an API which explicitly lets you specify the buffer they should write to/read from, instead of trying to second-guess them.
There may be no good reason why they each allocate their own buffer - or there may be some very good hardware based reason. Really, just ask the guys who developed it!
This reminds me of the debate I saw on one of my company's mailers once when folks realized that memcpy() was optimized on our platforms, but bcopy() was not. (Or vice versa.) That was fun.
This was the first approach, and we were told there was no budget for this, so instead we're hacking around it. The driver needs to have the memory in a specific place; the comm library really doesn't. I don't know why it provides a pointer rather than acting like every other networking library ever and taking a pointer as an argument, but it isn't strictly necessary. I was wondering if there was an easy mmap-based hack to get this working; there isn't, so we're doing something slightly more complicated.
By the way, if you're operating under the assumption that memcpy is heavily optimized everywhere, try measuring the performance of the GNU glibc memcpy when you're running on Cell. It's shockingly bad.
Is there room for that in your budget though?
Basically, I have to create a set without using lists, in ML. I have to create my own abstract data types and constructors. So I have the following to work off:
I understand that since that's the signature file, it's the interface and I need to create the implementation. So I have the following in another file:
And that's about where I'm stuck. Do I first have to define in another file what "set" is? Or do I just add a line that's like
type set = stack
???
Code is allowed to make sense, budgets are not.
Godspeed.
Yeah, that's the problem. I don't know of any guide that says "oh yeah, these memcpy()s are good and these are terrible", so I generally just assume it isn't optimized.
Framework: Rails
Purpose: Developing web applications
Ruby on Rails (RoR, or often just 'Rails') is a web application framework with a practical slant. While most frameworks present themselves as a sort of toolbox, Rails goes a step further by favoring convention over configuration. Instead of you configuring how the tools interact with each other, Rails infers what you mean to do from a few naming conventions in your class, method, table and path names. If that gets in the way, you can always tell it explicitly what name to look for instead.
Rails uses the model-view-controller (MVC) architectural pattern to separate the concerns in your code. On the controller side, it favors RESTful-style URL-to-method coupling. On the model side, it provides an object-oriented representation of your database tables. For the views, it provides a templating engine called ERB (I prefer HAML though).
One of the best things about Rails is the developer community. A lot of Rails developers blog about their experiences or post their problems on Stack Overflow. There's also a package manager/repository for Ruby libraries called RubyGems that helps you install, update and resolve dependencies. For configuring which gems you use in your Rails project, you should use Bundler (which is baked into Rails 3). Most gems can be found on GitHub for easy forking.
I can heartily recommend Rails to everyone looking for an easy to use web application framework. It's as easy as "sudo apt-get install rails && rails new ~/myproject".
Snuck in!
A lot of the good options I'm finding are either too low level (direct OpenGL/C) for the amount of time I have, or so abstract that I'm not really learning anything (Unity).
I've been searching around the Python community for a good 3D engine, but they're all either dependency hell, have no explicit OS X support, or are too primitive and poorly documented to be worth my time.
Ideally what I want is something very close to XNA but usable on OS X.
EDIT:
This is embarrassing, that last line should say "sudo apt-get install rails && rails new ~/myproject" instead of what it said.
Rails is also the driving force behind Microsoft's ASP.NET MVC. MS lifted a lot from Rails.
Is this guy serious? Wish I could work long and hard hours and get paid (probably) shit! Oh wait, I don't, because I deliver software faster, so I meet my deadlines because I don't reinvent the wheel every other Thursday.
What a dingus. He probably thinks the same thing about boost or cocoa, or (insert framework here).
I now believe the company is just doing this crap for PR. First it was the crazy "We can't find any good programmers" post on reddit, and now this CEO drivel.
I mean, if that picture of the dude with the laptop is their actual work environment, wow. Dude keeps skirting around when commenters ask what he's paying these people for 50-hour work weeks.
Edit: Ethea, yeah that's what I'm thinking too.
"every year we take the company overseas for a month (on your own dime, sorry) and work incredibly hard while having a ton of fun."
Working long, hard hours with all the expenses of a vacation! (Not to mention work visas?)
"Everyone helps with tech support, schmoozing at swank parties, hosting events, coming up with new and ever-more-ridiculous marketing stunts, etc. And if you code, you'll code everything: you might do mobile one day, front-end design, back-end optimization, low-level debugging, the works."
Specialization is for chumps, everybody can switch from web development to UI design to assembly instantly.
Seems like they want VB programmers.
(Wait, I haven't ever seen that at all!)