Keurig adoption around here is super high. I guess it's because people are fucking animals who can't handle things like refilling the coffee when it runs out or not leaving an empty carafe on the heating element.
Those things are great for a single coffee drinker (like myself), but for a group of people they're terrible.
Here we've got Keurigs on every floor except the developer area; us filthy devs just get plain old drip-brewed coffee. It's pretty funny, though: instead of "regular" and "decaf", our regular carafe has a label on the handle that says "ewwww nasty" and gets used for the Folgers packets work provides, while the decaf one is labeled "mmmm tasty" and gets used for the higher-quality coffee we buy ourselves and bring in.
The radio show I listen to on the way to work was talking about their Keurig usage the other day. They've got a fancy one at the office that can tell how many cups were brewed, reports it, etc. Their Keurig rep said that based on how many K-Cups they were buying versus how many were actually being brewed, some ridiculous number like $20,000/year worth of K-Cups was being stolen and taken home.
We have Keurig machines and dozens of flavours at work, but we recently got a few Jura C9 machines, and now the majority drink fresh-ground espresso/coffee.
The lab I worked in over the summer had one of those. I nearly killed myself before realizing I was drinking full mugs of espresso every morning.
We are one of the only offices on campus that provides free coffee to visitors. The light that comes into people's eyes when I tell them "yeah, you can have some" kind of makes me sad.
(I only drink hot chocolate every once in a while during the winter...never had the taste for tea or coffee)
We have ours set to dispense 1.5 oz espresso and 5 oz coffee servings, so it is hard to get a full mug of espresso.
God fucking damn it. I'm back to an MVC engine I worked on before the new year; it's supposed to read a model template and pre-load routes with default functionality... and I have misplaced my template and haven't documented it properly. ALWAYS DOCUMENT YOUR CODE, FOLKS.
Ugh... I have to schedule my Java certification exam. Either I take a vacation day and drive to the test center ~10 minutes away, or I take it on a Saturday and drive to Cincinnati, roughly 55 minutes away.
Well, I did not embarrass myself during my interview with Google. Thank crap the interviewer asked exactly the same questions we asked when I taught algorithms last semester.
For those curious: how do you find the median of two sorted datasets? What's the time complexity, and when would you take a more naive approach, and why?
Windows 8 looks more and more like a Vista do-over to me. Meaning I'll be sticking with Windows 7 until they sunset it, because I refuse to have a tablet interface shoved down my throat for PC usage. Until Microsoft pulls their head out of their ass and realizes people still use their OS on PCs, they won't be getting any of my cash for Windows 8.
It should be log N of the smaller set. And you'd use the simple approach where one of the input sets is very small.
Yeah, there is also a cross-over point at which merging the datasets is better, which we talked about for a bit.
Afterwards, my office mates and I proceeded to have a huge argument over whether or not you should test for the special cases where the two sets are disjoint and getting the median is essentially constant time.
I don't see how a merge is ever better; the simple approach just increments one index or the other (which I suppose is merging, it's just not producing any output). The disjoint check is simple, but it does pull in potentially two cache lines that otherwise would never be touched.
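(For anyone following along, the simple approach being described might look something like this rough Java sketch. It's just an illustration, untested, returning the lower median, and it assumes at least one array is non-empty.)

static int medianByWalk(int[] a, int[] b) {
    // Walk both sorted arrays the way a merge would, but never emit
    // anything: just stop when the walk reaches the median index.
    // O(m + n) time, O(1) extra space.
    int k = (a.length + b.length - 1) / 2; // 0-based index of the lower median
    int i = 0, j = 0;
    for (int step = 0; ; step++) {
        int current;
        if (j >= b.length || (i < a.length && a[i] <= b[j])) {
            current = a[i++]; // take from a, exactly as a merge would
        } else {
            current = b[j++]; // take from b
        }
        if (step == k) return current;
    }
}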
Is anyone else still getting prompted to decide who pays the fees? I paid Infidel last night and was never prompted; previously I picked and incurred the $0.51 fee. I apologize if it wasn't you, @Infidel, I don't know what the fuck.
Merge can be better if you don't happen to find the log n implementation that you know exists, but can construct an m log n solution off the top of your head, which was the case for me. I couldn't for the life of me remember how the binary search over both sets worked, and admitted I would have to look it up in a book.
The m log n solution is to just iterate over the smaller of the two arrays, updating a median pointer as you decide whether the element you're inserting into the set causes the median index to move up or down. So, if you go with the less efficient implementation, you've suddenly got to think about whether to merge and take the median, or pretend you've merged and count the offset to the median based on the relationship of n and m.
The whole "should you do the tests or not" was more a discussion of which infinity was larger, the one where the sets overlap or the one where they don't. It was just an excuse to remember how to count big things.
I assume by the time it's released, the transition between tabby-box land and the real desktop will be smoother (it better be). I tried the dev preview a few weeks back, and fuck that shit (as it is today).
What is really pissing me off is that none of their betas support Vista, which is some bullshit because Vista and 7 are pretty much the same OS.
Are you sure that's not just O(n)? Merging sorted sets shouldn't be n log n in any situation.
The merge is order n. If you've got two datasets where one is way smaller than the other, you can get away with an m log n approach, where m is smaller than n, and sometimes m log n is smaller than m + n (e.g., with m = 10 and n = 1,000,000, m log n is around 200 while m + n is over a million).
The whole process has reminded me that I should occasionally take my nose out of a book and write some code.
Also, associative arrays are nice, but skip lists are where it's at.
Off the top of my head I'm coming up with a cross-binary search of sorts. Two searches, halve the problem space, repeat.
Yeah, that's the answer. I just totally stonewalled on it when I went to remember how to do it. It's funny because we asked the exact same question on our algorithms final, and I made the key for that without much trouble.
Yay!
I hadn't really thought about this one before, so I'm glad my instincts work.
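(For completeness, since it came up: a rough Java sketch of that halving approach, the classic partition binary search. Assumes int arrays; the variable names are mine.)

// Median of two sorted arrays in O(log min(m, n)): binary search on how
// many elements of the shorter array belong to the left half of the
// (imaginary) merged array.
static double medianByPartition(int[] a, int[] b) {
    if (a.length > b.length) return medianByPartition(b, a);
    int m = a.length, n = b.length;
    int half = (m + n + 1) / 2; // size of the left half
    int lo = 0, hi = m;
    while (lo <= hi) {
        int i = (lo + hi) >>> 1; // a[0..i) goes left
        int j = half - i;        // b[0..j) goes left
        int aLeft  = (i == 0) ? Integer.MIN_VALUE : a[i - 1];
        int aRight = (i == m) ? Integer.MAX_VALUE : a[i];
        int bLeft  = (j == 0) ? Integer.MIN_VALUE : b[j - 1];
        int bRight = (j == n) ? Integer.MAX_VALUE : b[j];
        if (aLeft <= bRight && bLeft <= aRight) {
            // Valid partition: everything on the left <= everything on the right.
            int maxLeft = Math.max(aLeft, bLeft);
            if (((m + n) & 1) == 1) return maxLeft;        // odd total
            int minRight = Math.min(aRight, bRight);
            return ((long) maxLeft + minRight) / 2.0;      // even total
        } else if (aLeft > bRight) {
            hi = i - 1; // took too many from a; halve the space downward
        } else {
            lo = i + 1; // took too few from a; halve the space upward
        }
    }
    throw new IllegalArgumentException("inputs must be sorted");
}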
The integration between devices seems pretty sexy.
Yeah, but the Metro <-> Desktop transitions are a bit clunky. I can't see enterprise users going for it at all.
As a tablet OS, it looks sexy as hell. Especially with its SkyDrive/XBox integration.
@Infidel did that work by the way?
Sorry, did which work?
My PM/payment :-P
New challenge: recreating the layout of the Excel spreadsheet that we've been using for invoices
NTFS undelete it. Should take you 5 mins.
Edit: Assuming there is a file...
To bring this a bit more on topic, how does everyone feel about WinRT?
It was deleted using the Java file.delete() method, so I'm not sure if that's even possible.
Oh, yes, received alright.
It is. file.delete() just maps to an OS delete call, which eventually filters down to a raw NTFS call to delete the file. NTFS undelete should work.
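(Illustrative only, with a made-up file name: neither the old java.io call nor the java.nio one scrubs the actual bytes, so the data stays on disk until the clusters get reused, which is exactly what an undelete tool counts on.)

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class DeleteDemo {
    public static void main(String[] args) throws IOException {
        // File.delete() maps straight to the platform's delete call and
        // reports failure with a bare boolean instead of throwing.
        File f = new File("report.dat"); // hypothetical file name
        if (!f.delete()) {
            System.err.println("delete failed (or the file didn't exist)");
        }

        // The java.nio variant throws with a reason, which is usually
        // more useful than a bare false.
        Files.deleteIfExists(Paths.get("report.dat"));
    }
}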
Maybe you guys should've used an associative array.
Also, Stroustrup's C++ book is a funny read because he jumps all over the goddamned place.
Just updated a huge number of packages, mainly developer ones (impacting Mono, Ruby, XML/SSL libraries, etc.).
Looks like there was a security fix that impacted a lot of things.
If you see anything funny or broken, let me know.