I have not heard of those, no. I wonder if they would be easier or harder to solve than sudokus?
One cool thing is that you could actually decompose a large bitmap into a picross, though I'm not sure if it works for any bitmap or if it needs to be constrained
You can't decompose any bitmap as you can end up with ambiguous clues.
I wrote a picross solver several years ago. It only did first-level logic, though (map all possible combinations for a row, find the guaranteed solids/gaps, then repeat with any columns that intersect the new information; iterate until nothing more can be worked out), so there's plenty of scope to improve it.
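That "first-level logic" can be sketched in a few lines: enumerate every legal placement of a line's clue runs, then intersect them to find the cells that are forced solid or forced empty. This is a minimal sketch (the function names are mine, not from the original solver):

```python
def line_placements(clues, length):
    """All fillings (tuples of 0/1) of a line consistent with the clue runs."""
    if not clues:
        return [(0,) * length]
    first, rest = clues[0], clues[1:]
    # minimum space the remaining runs need (each needs a one-cell gap before it)
    tail = sum(rest) + len(rest)
    results = []
    for start in range(length - tail - first + 1):
        head = (0,) * start + (1,) * first
        sep = (0,) if rest else ()
        for tail_fill in line_placements(rest, length - len(head) - len(sep)):
            results.append(head + sep + tail_fill)
    return results

def forced_cells(clues, length):
    """1/0 where every legal placement agrees; None where still undecided."""
    placements = line_placements(clues, length)
    return [col[0] if all(c == col[0] for c in col) else None
            for col in zip(*placements)]

print(forced_cells([3], 5))     # only the middle cell is forced solid
print(forced_cells([2, 2], 5))  # fully determined line
```

Running `forced_cells` on every row and column, feeding newly forced cells back in, is exactly the iterate-until-fixpoint loop described above.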
So my naive "generate-all-the-things-then-filter-it-down" approach to the problem takes a hilarious 10 seconds for a 15-item list (and it can degenerate from there to 30+ seconds). On the same list the graph-construction approach takes 0 seconds. And I know there is at least one major improvement I can make.
Python code in the spoiler
class Node:
    def __init__(self, match, label):
        self.parents = []
        self.match = match
        self.children = []
        self.label = label

    def add_child(self, node):
        self.children.append(node)
        node.parents.append(self)

    def is_leaf(self):
        return len(self.children) == 0

    def __eq__(self, other):
        return (self.match == other.match
                and self.parents == other.parents
                and self.children == other.children) if isinstance(other, Node) else False

    def dot(self):
        for c in self.children:
            print('"{}" -> "{}"'.format(self.label, c.label))

    def __repr__(self):
        return "Node({},{})".format(self.match, self.children)
def non_graph_overlapping_matches(matches, lookup=None):
    def overlaps(index_match, other_match):
        return (index_match.index + index_match.length) > other_match.index

    def in_bounds(i):
        return i < len(matches)

    def add_to_all_parents(parents, new_node):
        for node in parents:
            node.add_child(new_node)

    def dfs(node, total_container, current_list):
        if node.match:
            current_list.append(node.match)
        if node.is_leaf():
            total_container.append(current_list[:])
        for child in node.children:
            dfs(child, total_container, current_list)
        if node.match:
            current_list.pop()

    if lookup is None:
        lookup = {}
    root = Node(None, "Root")
    parents = [root]
    for i, match in enumerate(matches):
        node = lookup.get(match, None)
        if not node:
            node = Node(match, str((match.index, match.length)))
            add_to_all_parents(parents, node)
        else:
            parents = node.parents
        lookup[match] = node
        scan_index = i + 1
        while in_bounds(scan_index) and overlaps(matches[i], matches[scan_index]):
            scan_index += 1
        if in_bounds(scan_index):
            child_node = lookup.get(matches[scan_index],
                                    Node(matches[scan_index],
                                         str((matches[scan_index].index, matches[scan_index].length))))
            node.add_child(child_node)
            lookup[matches[scan_index]] = child_node
            siblings = []
            forward_scan = scan_index + 1
            while in_bounds(forward_scan) and overlaps(matches[scan_index], matches[forward_scan]):
                siblings.append((forward_scan, matches[forward_scan]))
                forward_scan += 1
            for sni, sibling_match in siblings:
                sibling_node = lookup.get(sibling_match,
                                          Node(sibling_match,
                                               str((sibling_match.index, sibling_match.length))))
                node.add_child(sibling_node)
                lookup[sibling_match] = sibling_node
    out = []
    current = []
    for k in lookup:
        lookup[k].dot()
    root.dot()
    dfs(root, out, current)
    return out
The algorithm is:
For each match:
If there is already a node for the match in the graph, make its parents the target for adding new child nodes. Otherwise create the node and add it as a child of all the current target nodes.
Scan forward in the array until you get to the first element that does not overlap the match.
Make a new Node for that first non-overlapping element, or find the already-existing one. Add it as a child of the match node from the first step.
Finally, perform a standard depth-first traversal of the graph to construct all possible lists of matches.
The .dot() method on the node objects is for producing a dot format graph representation which I can visualise using http://www.webgraphviz.com/
Oh, ffs, there's such an obvious optimisation I can do for the depth-first traversal I feel dumb for not spotting it immediately. Every node can record the result of traversing its child nodes, so the second time it is accessed you can just immediately return that rather than traversing the child nodes again.
Stupid idiot.
EDIT: Curses, using Phyphor's test case {0, 10}, {2, 5}, {3, 6}, {5, 8}, {7, 11}, {10, 15} I am not generating all the possible sets with the above code. Algorithm bug-hunting time.
EDIT EDIT: Yup, I have a dumb obvious bug in my algorithm.
Looking at nonograms, the first thing I hear is my computing professor's voice. She whispers from the depths of time, "First, let us construct a nondeterministic Turing machine..."
which allows things to provide storage for blah to use, and allows for multiple different instances to use different storages
However, can I make that work with:
struct foo : public some_struct<foo> {};
or: can you construct a pointer-to-member to the implicit parent object?
As Ecco stated this is possible, and the pattern is called CRTP. It does have some restrictions on what you can do in the base class, so I would do some more research.
My bad, ignore the CRTP portion of it. This is a simplified version of what I'm trying to do:
struct a {};
struct b { a foo; };
struct c : public a {};

template<typename T, a T::*> struct thing;

thing<b, &b::foo> this_works;
thing<c, ???> this_who_knows;
However, I think I have to create a second thing template to handle it.
So I think the worst case for my algo is exponential time as the graph traversal gets super hairy when there's lots of branches.
Best case is 2n.
I don't think you can reduce the worst case really because the worst case looks something like:
(0,2) (1,2)
(3,5) (4,5)
(7,8) (8,9)
I've broken the matches into groups; each pair is effectively a bit, so you get 2^3 possible sets
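The three independent overlapping pairs can be checked by brute force. This sketch treats the pairs above as inclusive (start, end) intervals and counts the maximal non-overlapping sets:

```python
from itertools import combinations

# the worst-case example: three independent pairs, each pair mutually overlapping
intervals = [(0, 2), (1, 2), (3, 5), (4, 5), (7, 8), (8, 9)]  # inclusive (start, end)

def overlaps(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def compatible(subset):
    return all(not overlaps(a, b) for a, b in combinations(subset, 2))

# maximal compatible subsets: pairwise non-overlapping, and no leftover interval fits
maximal = []
for r in range(len(intervals) + 1):
    for subset in combinations(intervals, r):
        if compatible(subset) and all(
            any(overlaps(x, m) for m in subset)
            for x in intervals if x not in subset
        ):
            maximal.append(subset)

print(len(maximal))  # 2 * 2 * 2 = 8: one choice per pair
```

Each added independent pair doubles the count, which is where the exponential worst case comes from.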
True, but that's not taking into account the one weird trick that exponential algorithms hate.
When depth-first traversing the resultant graph I can, for any given node, memoize the result of visiting its child nodes, so when the traversal revisits that node it can just take the memoized result rather than traverse the subtree.
Boom, linear-time traversal *cough* just don't ask me about memory bounds *cough*
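The memoization idea looks roughly like this on a generic DAG. It's a sketch, not the solver's actual code: each node caches the list of paths through its subgraph, so a shared node is expanded once (the dict-of-children representation and labels are mine):

```python
def paths_from(node, children, cache=None):
    """All node-to-leaf paths in a DAG, caching each node's suffix paths
    so shared subgraphs are only traversed once."""
    if cache is None:
        cache = {}
    if node in cache:
        return cache[node]
    kids = children.get(node, [])
    if not kids:
        result = [[node]]
    else:
        result = [[node] + p for k in kids for p in paths_from(k, children, cache)]
    cache[node] = result
    return result

# diamond DAG: a -> b, a -> c, b -> d, c -> d; node d is expanded only once
dag = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
print(paths_from("a", dag))
```

As noted, the memory cost is real: the cache stores every suffix-path list, which can itself be exponentially large.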
for icd10code in ICD10Codes.objects.all():
    if '.' not in icd10code.code:
        updated_code = icd10code.code[:3] + "." + icd10code.code[3:]
        icd10code.code = updated_code
        icd10code.save()
The model looks like:
class ICD10Codes(models.Model):
    code = models.TextField(null=False, primary_key=True)
    description = models.TextField(null=False)

    class Meta:
        db_table = "icd10_codes"
And when I execute the code on the top it duplicates the rows. I start with ~60K rows, and when it's done I end up with 120K rows. I'm not quite sure what's going on... I've done this sort of update before.
Producing the results requires that you iterate minimally once per result though, even if deriving those results is faster
Even if you fully memoized the 8 results that can produce, adding another bit in front requires you to produce 16 results, then 32 and so on
Unless you are happy with the compact form of course, but I'm not sure how useful it is
I'm liking .NET Core a lot so far. Writing code in C# and having it run behind nginx is the dream and Kestrel is nice and lightweight if I just have some small internal thing that's running in a container or two.
Oh you and your facts. You can prove anything with facts.
@urahonky I'm guessing that your db middleware there for .save() is doing an insert/update on the key, and that your key is actually derived from the code field?
So when you change the code, it upserts on a new key and inserts instead of updates.
@Infidel You are correct! I was dumb and made the code field the primary key. And since I'm updating the field to a new value it generates a brand new row.
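The failure mode is easy to reproduce outside Django. A save keyed on the primary key finds no row under the new key, so it inserts instead of updating. This minimal sqlite3 sketch (table and values mirror the model above, but the upsert statement is my illustration, not what Django emits):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE icd10_codes (code TEXT PRIMARY KEY, description TEXT)")
conn.execute("INSERT INTO icd10_codes VALUES ('A000', 'Cholera')")

# emulate save() after mutating the PK field: upsert under the NEW key value
new_code = "A00.0"
conn.execute(
    "INSERT INTO icd10_codes VALUES (?, 'Cholera') "
    "ON CONFLICT(code) DO UPDATE SET description = excluded.description",
    (new_code,),
)
# the old 'A000' row is untouched, so the table has grown
print(conn.execute("SELECT count(*) FROM icd10_codes").fetchone()[0])  # 2
```

Hence ~60K rows becoming ~120K: every changed code left its old-key row behind.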
Haha. We are going to have the new intern work on some test applications for the CUDA libraries I'm writing.
This is not going to end well.
Start them off easy - build some confidence. Test around the library first.
Test one: "Yup, the source control server can be contacted and the library exists."
Test two: "Yup, the library compiles."
Test three: "Yup, the test server has the hardware necessary to run CUDA operations."
And then when they think they've got it, drop them in the deep end? =P
Anyone have suggestions for MongoDB-level auditing of operations (modification queries by user/time) that doesn't involve Enterprise edition?
Also, feedback/recommendations on CDN services that work nice with nginx/express/React? I've had Cloudflare in mind for a while. Something that helps with loading times and bandwidth, and leaving it to mostly be API calls only on our servers.
It's too bad that we don't have to deal with near and far pointers anymore. But the CUDA memory space is close.
for key, value in data.items():
    if key == 'start_date' and value and isinstance(value, str) and value.strip() != "":
        date = datetime.strptime(value, settings.DATE_FORMAT)
        setattr(acct, key, date)
    else:
        setattr(acct, key, value)
acct.save()
Any Django/Python people available? There's GOT to be a better way for me to update a model than this. I know of:
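One tidier shape for the loop above is to normalise the incoming dict first, then apply it in one pass, so the field-specific parsing doesn't live inside the setattr loop. A sketch with stand-ins for the Django pieces (`DATE_FORMAT` and the `Acct` class are hypothetical placeholders for `settings.DATE_FORMAT` and the real model):

```python
from datetime import datetime

DATE_FORMAT = "%Y-%m-%d"  # stand-in for settings.DATE_FORMAT

def clean(data):
    """Return a copy of data with date-like string fields parsed up front."""
    out = dict(data)
    raw = out.get("start_date")
    if isinstance(raw, str) and raw.strip():
        out["start_date"] = datetime.strptime(raw, DATE_FORMAT)
    return out

class Acct:  # stand-in for the Django model instance
    pass

acct = Acct()
for key, value in clean({"start_date": "2017-09-01", "name": "test"}).items():
    setattr(acct, key, value)
# acct.save() in the real code; going instance-by-instance like this
# still runs any overridden save() methods, unlike queryset .update()
```

This keeps the update loop uniform and makes adding more per-field cleaners a matter of extending `clean`.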
Anyone have suggestions for MongoDB-level auditing of operations (modification queries by user/time) that doesn't involve Enterprise edition?
You are scaring me, man. Are you using MongoDB for information that matters? I've been involved in two projects where I saw with my own eyes MongoDB replicas lose data (both were pre 3.0 days, but still...)
Also, feedback/recommendations on CDN services that work nice with nginx/express/React? I've had Cloudflare in mind for a while. Something that helps with loading times and bandwidth, and leaving it to mostly be API calls only on our servers.
Cloudflare is great, but if you are not going to serve static resources with nginx anyway, just ditch it and replace with haproxy. You'll be a happy bunny if you ever hit a scalability/performance issue that you need to debug.
Also, as far as cdn, if you are already running on aws, cloudfront w/wo S3 is...ok.
Anyone have suggestions for MongoDB-level auditing of operations (modification queries by user/time) that doesn't involve Enterprise edition?
You are scaring me, man. Are you using MongoDB for information that matters? I've been involved in two projects where I saw with my own eyes MongoDB replicas lose data (both were pre 3.0 days, but still...)
Also, feedback/recommendations on CDN services that work nice with nginx/express/React? I've had Cloudflare in mind for a while. Something that helps with loading times and bandwidth, and leaving it to mostly be API calls only on our servers.
Cloudflare is great, but if you are not going to serve static resources with nginx anyway, just ditch it and replace with haproxy. You'll be a happy bunny if you ever hit a scalability/performance issue that you need to debug.
Also, as far as cdn, if you are already running on aws, cloudfront w/wo S3 is...ok.
Will be using nginx to cache/serve statics and such. Mostly it's that I'm familiar with it.
Yeah, that's not what the quoted text says at all.
Yeah, the article seems like clickbait FUD; to get your license revoked, you have to initiate a software patent infringement suit against Facebook. It even goes on to say you're cool if Facebook sues you and you countersue.
And now
I must find interesting challenges
Edit:
Oh oh oh oh
I know I know
Have you guys played Picross (or one of its derivatives) before? They're also known as nonograms.
They're logic puzzles, similar to Sudoku.
Maybe we should start there?
Or maybe just start on Sudoku?
Edit2: Or interesting matchstick puzzles. Like "here's n squares, move x matchsticks to form m squares."
Oh, but you can brute force that.
You have to design it well, though, because you can give hints that result in an ambiguous nonogram.
For example:
It is obvious that the only solution is
However, what about:
There are... 2 possible solutions:
So yeah, you do have to provide a few constraints, otherwise you could end up with ambiguous nonograms.
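The classic ambiguous case is tiny: a 2x2 grid where every row and column clue is a single 1. A brute-force check (my own sketch, standing in for the elided picture examples) confirms two valid solutions:

```python
from itertools import product

def clue(line):
    """Run-lengths of filled cells in a row/column, nonogram-style."""
    runs, count = [], 0
    for cell in line:
        if cell:
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)
    return runs

# 2x2 grid, every row clue and column clue is [1]
target = [[1], [1]]
solutions = [
    g for g in product([0, 1], repeat=4)  # g = (top-left, top-right, bottom-left, bottom-right)
    if [clue(g[0:2]), clue(g[2:4])] == target
    and [clue((g[0], g[2])), clue((g[1], g[3]))] == target
]
print(len(solutions))  # 2: the two diagonal "checkerboard" placements
```

Both diagonals satisfy every clue, so those hints can never pin down the picture.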
I made a game, it has penguins in it. It's pay what you like on Gumroad.
Currently Ebaying Nothing at all but I might do in the future.
I don't even need models anymore.
I can just do it server side and toss JSON to the client, and deserialize it into a dynamic object
EDIT EDIT EDIT: I think I've fixed it.
I have a template which takes a pointer to a member like so:
which is intended to be used like
I would expect so?
I think Windows' ATL framework does something similar.
Right?
the dream!
Still happens even with this code. What the fuck is going on?
This worked. I hate this, but it worked.
2:15 for the juicy bits.
https://www.youtube.com/watch?v=Qx-ISmfOF4g
https://drive.google.com/file/d/0B8Z4Yige07tBR00yWkJQWm1xTDA/view
startscript.bat
It's clunky yet. Expect further refinements to start coming down the tubes after the AGI conference.
However, that does NOT run any of the overridden save methods, which is really bad.
We are running a "private cloud" colocation.
@Infidel : Roll with Cloudflare then, man. Ease of use is a nice benefit.
Edit: Wtf wanted to type Cloudflare and it came out as "Cloudfront". I am an aws shill.