[Programming] Kafkaesque rabbits in the queue at the pub

  • ecco the dolphin Registered User regular
    edited July 2016
    OH HO HO HO

    And now

    I must find interesting challenges

    Edit:
    Oh oh oh oh

    I know I know

    Have you guys played Picross (or one of its derivatives) before? They're also known as nonograms.

    They're logic puzzles, similar to Sudoku.

    Maybe we should start there?

    Or maybe just start on Sudoku?

    Edit2: Or interesting matchstick puzzles. Like "here's n squares, move x matchsticks to form m squares."

    Oh, but you can brute force that.

    Penny Arcade Developers at PADev.net.
  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    I have not heard of those, no. I wonder if they would be easier or harder to solve than sudokus?

    One cool thing is that you could actually decompose a large bitmap into a picross, though I'm not sure if it would work for any bitmap or if it needs to be constrained

  • ecco the dolphin Registered User regular
    edited July 2016
    I find them harder than Sudokus, to be honest. Sudokus are constrained at 9x9, but I've had 15x15 and 20x15 nonograms that stump me.

    You have to design it well, though, because you can give hints that result in an ambiguous nonogram.

    For example:
       100
      +---+
     1|...|
     0|...|
     0|...|
      +---+
    

    It is obvious that the only solution is
       100
      +---+
     1|x..|
     0|...|
     0|...|
      +---+
    

    However, what about:
       101
      +---+
     1|...|
     0|...|
     1|...|
      +---+
    

    There are... 2 possible solutions:
       101
      +---+
     1|x..|
     0|...|
     1|..x|
      +---+
    
       101
      +---+
     1|..x|
     0|...|
     1|x..|
      +---+
    

    So yeah, you do have to provide a few constraints, otherwise you could end up with ambiguous nonograms.
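    The ambiguity above is easy to check by brute force. Here's a quick Python sketch (the clue format and helper names are mine, not from any picross library) that enumerates every 3x3 grid and counts the ones matching the row/column clues:

```python
from itertools import product

def runs(line):
    """Lengths of consecutive runs of filled cells; [0] for an empty line."""
    out, n = [], 0
    for cell in line:
        if cell:
            n += 1
        elif n:
            out.append(n)
            n = 0
    if n:
        out.append(n)
    return out or [0]

def solutions(row_clues, col_clues, size=3):
    """All size x size grids whose rows and columns match the clues."""
    sols = []
    for bits in product([0, 1], repeat=size * size):
        grid = [bits[i * size:(i + 1) * size] for i in range(size)]
        if all(runs(r) == rc for r, rc in zip(grid, row_clues)) and \
           all(runs(c) == cc for c, cc in zip(zip(*grid), col_clues)):
            sols.append(grid)
    return sols

# The 1/0/1 clues admit two solutions; the 1/0/0 clues only one.
ambiguous = solutions([[1], [0], [1]], [[1], [0], [1]])
unique = solutions([[1], [0], [0]], [[1], [0], [0]])
```

    Running it confirms the two solutions for the 101/101 clues and the single solution for 100/100.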

  • Alistair Hutton Dr Edinburgh Registered User regular
    Phyphor wrote: »
    I have not heard of those, no. I wonder if they would be easier or harder to solve than sudokus?

    One cool thing is that you could actually decompose a large bitmap into a picross, though I'm not sure if it would work for any bitmap or if it needs to be constrained

    You can't decompose just any bitmap, as you can end up with ambiguous clues.

    I wrote a picross solver several years ago. It only did first-level logic (map all possible combinations for a row, find the guaranteed solids/gaps, then repeat with any columns that intersect the new information; iterate until nothing more can be worked out), so there's plenty of scope to improve it.
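    For reference, that "first-level logic" step can be sketched like this (a toy version; the function names are mine): enumerate every legal placement of a row's runs, then intersect the placements to find cells that are forced in every one.

```python
def placements(clue, length):
    """All 0/1 rows of the given length whose run lengths equal clue."""
    if not clue:
        return [[0] * length]
    first, rest = clue[0], clue[1:]
    min_rest = sum(rest) + len(rest)  # each later run needs a gap before it
    results = []
    for start in range(length - min_rest - first + 1):
        prefix = [0] * start + [1] * first
        if rest:
            prefix = prefix + [0]  # mandatory gap after this run
            for tail in placements(rest, length - len(prefix)):
                results.append(prefix + tail)
        else:
            results.append(prefix + [0] * (length - len(prefix)))
    return results

def forced(clue, length):
    """1/0 where every placement agrees, None where placements differ."""
    opts = placements(clue, length)
    return [
        1 if all(o[i] for o in opts)
        else 0 if not any(o[i] for o in opts)
        else None
        for i in range(length)
    ]
```

    For example, a run of 3 in a row of 5 forces the middle cell regardless of placement.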

    I have a thoughtful and infrequently updated blog about games http://whatithinkaboutwhenithinkaboutgames.wordpress.com/

    I made a game, it has penguins in it. It's pay what you like on Gumroad.

    Currently Ebaying Nothing at all but I might do in the future.
  • Alistair Hutton Dr Edinburgh Registered User regular
    edited July 2016
    So my naive "generate-all-the-things-then-filter-it-down" approach to the problem takes a hilarious 10 seconds for a 15 item list (and it can degenerate from there to 30 seconds plus). On the same list the graph construction approach takes 0 seconds. And I know there is at least one major improvement I can make.

    Python code in the spoiler
    class Node:
        def __init__(self, match, label):
            self.parents = []
            self.match = match
            self.children = []
            self.label = label
    
        def add_child(self, node):
            self.children.append(node)
            node.parents.append(self)
    
        def is_leaf(self):
            return len(self.children) == 0
    
        def __eq__(self, other):
            return self.match == other.match \
                and self.parents == other.parents \
                and self.children == other.children \
                if isinstance(other, Node) else False
    
        def dot(self):
            for c in self.children:            
                print('"{}" -> "{}"'.format(self.label, c.label))
    
        def __repr__(self):
            return "Node({},{})".format(self.match, self.children)
    
    def non_graph_overlapping_matches(matches, lookup=None):
        def overlaps(index_match, other_match):
            return (index_match.index + index_match.length) > other_match.index
    
        def in_bounds(i):
            return i < len(matches)
    
        def add_to_all_parents(parents, new_node):
            for node in parents:
                node.add_child(new_node)
    
        def dfs(node, total_container, current_list):
            if node.match:
                current_list.append(node.match)
    
            if node.is_leaf():
                total_container.append(current_list[:])
    
            for child in node.children:
                dfs(child, total_container, current_list)
    
            if node.match:
                current_list.pop()
    
        if lookup is None:
            lookup = {}
        root = Node(None,"Root")
        parents = [root]
        for i, match in enumerate(matches):
    
            node = lookup.get(match, None)
            if not node:
                node = Node(match, str((match.index,match.length)))
                add_to_all_parents(parents, node)
            else:
                parents = node.parents
            lookup[match] = node
    
            scan_index = i+1
    
            while in_bounds(scan_index) and overlaps(matches[i], matches[scan_index]):
                scan_index += 1
    
            if in_bounds(scan_index):
                child_node = lookup.get(matches[scan_index], Node(matches[scan_index], str((matches[scan_index].index, matches[scan_index].length))) )
                node.add_child(child_node)
                lookup[matches[scan_index]] = child_node
                siblings = []
                forward_scan = scan_index + 1
                while in_bounds(forward_scan) and overlaps(matches[scan_index], matches[forward_scan]):
                    siblings.append((forward_scan, matches[forward_scan]))
                    forward_scan += 1
                for sni, sibling_match in siblings:
                    sibling_node = lookup.get(sibling_match, Node(sibling_match, str((sibling_match.index,sibling_match.length))) )
                    node.add_child(sibling_node)
                    lookup[sibling_match] = sibling_node
    
        out = []
        current = []
        for k in lookup:
            lookup[k].dot()
        root.dot()
        dfs(root, out, current)
        return out
    
    

    The algorithm is, for each match:
    1. If there is already a node for the match in the graph, make its parents the target for adding new child nodes. Otherwise, create the node and add it as a child of all the current parent nodes.
    2. Scan forward in the array until you reach the first element that does not overlap the match.
    3. Make a new node for that first non-overlapping element (or find the already-existing one) and add it as a child of the match node from the first step.

    Finally, perform a standard depth-first traversal of the graph to construct all possible lists of matches.

    The .dot() method on the node objects produces a dot-format graph representation, which I can visualise using http://www.webgraphviz.com/

  • Alistair Hutton Dr Edinburgh Registered User regular
    edited July 2016
    So I think the worst case for my algo is exponential time as the graph traversal gets super hairy when there's lots of branches.

    Best case is 2n.

  • bowen How you doin'? Registered User regular
    I'm really enjoying Dynamic in C#.

    I don't even need models anymore.

    I can just do it server side and toss JSON to the client, and deserialize it into a dynamic object

    not a doctor, not a lawyer, examples I use may not be fully researched so don't take out of context plz, don't @ me
  • Alistair Hutton Dr Edinburgh Registered User regular
    edited July 2016
    Oh, ffs, there's such an obvious optimisation I can do for the depth-first traversal that I feel dumb for not spotting it immediately. Every node can record the results of traversing its child nodes, so the second time it is accessed you can just immediately return rather than traversing the child nodes again.

    Stupid idiot.

    EDIT: Curses, using Phyphor's test case {0, 10}, {2, 5}, {3, 6}, {5, 8}, {7, 11}, {10, 15} I am not generating all the possible sets with the above code. Algorithm bug-hunting time.

    EDIT EDIT: Yup, I have a dumb obvious bug in my algorithm.

    EDIT EDIT EDIT: I think I've fixed it.

  • gavindel The reason all your software is broken Registered User regular
    Looking at nonograms, the first thing I hear is my computing professor's voice. She whispers from the depths of time, "First, let us construct a nondeterministic Turing machine..."

    Book - Royal road - Free! Seraphim === TTRPG - Wuxia - Free! Seln Alora
  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    Obscure C++ question:

    I have a template which takes a pointer to a member like so:
    template<class T> struct some_struct { ... };
    template<class T, typename some_struct<T>::type T::*ptr> struct thing { ... }
    

    which is intended to be used like
    struct foo { some_struct<foo> entry; };
    thing<foo, &foo::entry> blah;
    

    which allows things to provide storage for blah to use, and allows for multiple different instances to use different storages

    However, can I make that work with:
    struct foo : public some_struct<foo> {};

    or: can you construct a pointer-to-member to the implicit parent object?

  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    Alistair Hutton wrote: »
    So I think the worst case for my algo is exponential time as the graph traversal gets super hairy when there's lots of branches.

    Best case is 2n.

    I don't think you can reduce the worst case really because the worst case looks something like:

    (0,2) (1,2)
    (3,5) (4,5)
    (7,8) (8,9)

    I've broken the groups up, each pair is effectively a bit so you get 2^3 groups
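    That worst case is easy to sanity-check: with independent overlap groups of two choices each, the result sets are just the cross product, so the count doubles per group. A quick sketch using Phyphor's intervals (the list layout is mine):

```python
from itertools import product

# Three independent groups of (index, length) intervals; within a group
# the intervals overlap, across groups they don't, so any one-per-group
# combination is a valid result set.
groups = [
    [(0, 2), (1, 2)],
    [(3, 5), (4, 5)],
    [(7, 8), (8, 9)],
]
result_sets = list(product(*groups))  # 2^3 = 8 combinations
```

    Adding another group doubles the count again, which is the "each pair is effectively a bit" observation.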

  • ecco the dolphin Registered User regular
    Phyphor wrote: »
    Obscure C++ question:

    I have a template which takes a pointer to a member like so:
    template<class T> struct some_struct { ... };
    template<class T, typename some_struct<T>::type T::*ptr> struct thing { ... }
    

    which is intended to be used like
    struct foo { some_struct<foo> entry; };
    thing<foo, &foo::entry> blah;
    

    which allows things to provide storage for blah to use, and allows for multiple different instances to use different storages

    However, can I make that work with:
    struct foo : public some_struct<foo> {};

    or: can you construct a pointer-to-member to the implicit parent object?

    I would expect so?

    I think Window's ATL framework does something similar.

  • Ethea Registered User regular
    Phyphor wrote: »
    Obscure C++ question:

    I have a template which takes a pointer to a member like so:
    template<class T> struct some_struct { ... };
    template<class T, typename some_struct<T>::type T::*ptr> struct thing { ... }
    

    which is intended to be used like
    struct foo { some_struct<foo> entry; };
    thing<foo, &foo::entry> blah;
    

    which allows things to provide storage for blah to use, and allows for multiple different instances to use different storages

    However, can I make that work with:
    struct foo : public some_struct<foo> {};

    or: can you construct a pointer-to-member to the implicit parent object?

    As Ecco stated this is possible, and the pattern is called CRTP. It does have some restrictions on what you can do in the base class so I would do some more research

  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    My bad, ignore the CRTP portion of it. This is a simplified version of what I'm trying to do
    struct a {};
    struct b { a foo; };
    struct c : public a {};
    
    template<class T, a T::*ptr> struct thing;
    thing<b, &b::foo> this_works;
    thing<c, ???> this_who_knows;
    

    however I think I have to create a second thing template to handle it

  • Alistair Hutton Dr Edinburgh Registered User regular
    Phyphor wrote: »
    So I think the worst case for my algo is exponential time as the graph traversal gets super hairy when there's lots of branches.

    Best case is 2n.

    I don't think you can reduce the worst case really because the worst case looks something like:

    (0,2) (1,2)
    (3,5) (4,5)
    (7,8) (8,9)

    I've broken the groups up, each pair is effectively a bit so you get 2^3 groups

    True, but that's not taking into account the one weird trick that exponential algorithms hate.

    When depth-first traversing the resultant graph I can, for any given node, memoize the result of visiting its child nodes, so when the traversal revisits this node it can just take the memoized result rather than traverse the subtree.

    Boom, linear time traversal *cough* just don't ask me about memory bounds*cough*
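    Counting (rather than materialising) the paths shows where the memoisation helps. A minimal sketch over a hypothetical diamond-shaped DAG (the adjacency map is made up), where the shared node "c" is only computed once:

```python
from functools import lru_cache

# Hypothetical DAG as an adjacency map: root branches to a and b,
# which share the child c -- the case where memoisation pays off.
graph = {
    "root": ["a", "b"],
    "a": ["c"],
    "b": ["c"],
    "c": [],
}

@lru_cache(maxsize=None)
def count_paths(node):
    """Root-to-leaf path count below node; each node computed once."""
    children = graph[node]
    if not children:
        return 1
    return sum(count_paths(child) for child in children)
```

    The cache makes the count linear in the number of edges; materialising every list is still bounded by the number of results, as the next post points out.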

  • urahonky Resident FF7R hater Registered User regular
    Can any Django/Python people help me out here?

    I have a bit of code:
            for icd10code in ICD10Codes.objects.all():
                if '.' not in icd10code.code:
                    updated_code = icd10code.code[:3] + "." + icd10code.code[3:]
                    icd10code.code = updated_code
                    icd10code.save()
    

    The model looks like:
    class ICD10Codes(models.Model):
        code = models.TextField(null=False, primary_key=True)
        description = models.TextField(null=False)
    
        class Meta:
            db_table = "icd10_codes"
    

    And when I execute the code on the top it duplicates the rows. I start with ~60K rows, and when it's done I end up with 120K rows. I'm not quite sure what's going on... I've done this sort of update before.

  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    Phyphor wrote: »
    So I think the worst case for my algo is exponential time as the graph traversal gets super hairy when there's lots of branches.

    Best case is 2n.

    I don't think you can reduce the worst case really because the worst case looks something like:

    (0,2) (1,2)
    (3,5) (4,5)
    (7,8) (8,9)

    I've broken the groups up, each pair is effectively a bit so you get 2^3 groups

    True, but that's not taking into account the one weird trick that exponential algorithms hate.

    When depth-first traversing the resultant graph I can, for any given node, memoize the result of visiting its child nodes, so when the traversal revisits this node it can just take the memoized result rather than traverse the subtree.

    Boom, linear time traversal *cough* just don't ask me about memory bounds*cough*

    Producing the results requires that you iterate at least once per result though, even if deriving those results is faster

    Even if you fully memoized the 8 results that example can produce, adding another bit in front requires you to produce 16 results, then 32 and so on

    Unless you are happy with the compact form of course, but I'm not sure how useful it is

  • Walt Waller Arcane Enchanted Frozen Electrified Registered User regular
    I'm liking .NET Core a lot so far. Writing code in C# and having it run behind nginx is the dream and Kestrel is nice and lightweight if I just have some small internal thing that's running in a container or two.
    bowen wrote: »
    I'm really enjoying Dynamic in C#.

    I don't even need models anymore.

    I can just do it server side and toss JSON to the client, and deserialize it into a dynamic object
    Right?
    the dream!

  • urahonky Resident FF7R hater Registered User regular
    edited July 2016
    DOUBLE POAST

  • urahonky Resident FF7R hater Registered User regular
            for icd10code in ICD10Codes.objects.exclude(code__contains='.'):
                updated_code = icd10code.code[:3] + "." + icd10code.code[3:]
                icd10code.code = updated_code
                icd10code.save()
    

    Still happens even with this code. What the fuck is going on?

  • urahonky Resident FF7R hater Registered User regular
            for icd10code in ICD10Codes.objects.exclude(code__contains='.'):
                updated_code = icd10code.code[:3] + "." + icd10code.code[3:]
                code_dict = {
                    'code': updated_code,
                    'description': icd10code.description
                }
                ICD10Codes.objects.filter(pk=icd10code.pk).update(**code_dict)
    

    This worked. I hate this, but it worked.

  • bowen How you doin'? Registered User regular
    so icd10code is immutable or something?

  • urahonky Resident FF7R hater Registered User regular
    It shouldn't be. It's a standard model. Either way, I found out that I can't update it anyway, since the table is referenced by another table.

  • Alistair Hutton Dr Edinburgh Registered User regular
    Phyphor wrote: »
    Phyphor wrote: »
    So I think the worst case for my algo is exponential time as the graph traversal gets super hairy when there's lots of branches.

    Best case is 2n.

    I don't think you can reduce the worst case really because the worst case looks something like:

    (0,2) (1,2)
    (3,5) (4,5)
    (7,8) (8,9)

    I've broken the groups up, each pair is effectively a bit so you get 2^3 groups

    True, but that's not taking into account the one weird trick that exponential algorithms hate.

    When depth-first traversing the resultant graph I can, for any given node, memoize the result of visiting its child nodes, so when the traversal revisits this node it can just take the memoized result rather than traverse the subtree.

    Boom, linear time traversal *cough* just don't ask me about memory bounds*cough*

    Producing the results requires that you iterate at least once per result though, even if deriving those results is faster

    Even if you fully memoized the 8 results that example can produce, adding another bit in front requires you to produce 16 results, then 32 and so on

    Unless you are happy with the compact form of course, but I'm not sure how useful it is

    Oh you and your facts. You can prove anything with facts.

  • Infidel Heretic Registered User regular
    @urahonky I'm guessing that your db middleware there for .save() is doing an insert/update on the key, and that your key is actually derived from the code field?

    So when you change the code, it upserts on a new key and inserts instead of updates.
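    Django aside, this failure mode can be reproduced with plain SQL: saving under a changed primary key upserts on the new key and leaves the old row behind. A sketch with sqlite3 (the table and values are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE codes (code TEXT PRIMARY KEY, descr TEXT)")
con.execute("INSERT INTO codes VALUES ('A001', 'test')")

# "Saving" with a changed primary key upserts under the NEW key:
# the old row is untouched, so the table grows instead of updating.
con.execute("INSERT OR REPLACE INTO codes VALUES ('A00.1', 'test')")
rows = con.execute("SELECT COUNT(*) FROM codes").fetchone()[0]
```

    After the second statement the table holds two rows, which is exactly the ~60K-to-120K doubling seen above.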

  • urahonky Resident FF7R hater Registered User regular
    Infidel wrote: »
    @urahonky I'm guessing that your db middleware there for .save() is doing an insert/update on the key, and that your key is actually derived from the code field?

    So when you change the code, it upserts on a new key and inserts instead of updates.

    @Infidel You are correct! I was dumb and made the code field the primary key. And since I'm updating the field to a new value, it generates a brand new row.

  • Echo ski-bap ba-dap Moderator mod
    So this is pretty awesome - by analyzing how an artist draws, colors and shades a sphere, they can apply that style to any given 3D model.

    2:15 for the juicy bits.

    https://www.youtube.com/watch?v=Qx-ISmfOF4g

  • Trace GNU Terry Pratchett; GNU Gus; GNU Carrie Fisher; GNU Adam We Registered User regular
    NARS 2.0 all bundled up in a single double click for you homeboys.

    https://drive.google.com/file/d/0B8Z4Yige07tBR00yWkJQWm1xTDA/view


    startscript.bat


    It's still clunky. Expect further refinements to start coming down the tubes after the AGI conference.

  • Kakodaimonos Code fondler Helping the 1% get richer Registered User regular
    Haha. We are going to have the new intern work on some test applications for the CUDA libraries I'm writing.

    This is not going to end well.

  • ecco the dolphin Registered User regular
    Kakodaimonos wrote: »
    Haha. We are going to have the new intern work on some test applications for the CUDA libraries I'm writing.

    This is not going to end well.

    Start them off easy - build some confidence. Test around the library first.

    Test one: "Yup, the source control server can be contacted and the library exists."
    Test two: "Yup, the library compiles."
    Test three: "Yup, the test server has the hardware necessary to run CUDA operations."

    And then when they think they've got it, drop them in the deep end? =P

  • Infidel Heretic Registered User regular
    Anyone have suggestions for MongoDB-level auditing of operations (modification queries by user/time) that doesn't involve Enterprise edition?

    Also, feedback/recommendations on CDN services that work nice with nginx/express/React? I've had Cloudflare in mind for a while. Something that helps with loading times and bandwidth, and leaving it to mostly be API calls only on our servers.

  • Kakodaimonos Code fondler Helping the 1% get richer Registered User regular
    ecco the dolphin wrote: »
    Haha. We are going to have the new intern work on some test applications for the CUDA libraries I'm writing.

    This is not going to end well.

    Start them off easy - build some confidence. Test around the library first.

    Test one: "Yup, the source control server can be contacted and the library exists."
    Test two: "Yup, the library compiles."
    Test three: "Yup, the test server has the hardware necessary to run CUDA operations."

    And then when they think they've got it, drop them in the deep end? =P

    It's too bad that we don't have to deal with near and far pointers anymore. But the CUDA memory space is close.

  • urahonky Resident FF7R hater Registered User regular
            for key, value in data.items():
                if key == 'start_date' and value and isinstance(value, str) and value.strip() != "":
                    date = datetime.strptime(value, settings.DATE_FORMAT)
                    setattr(acct, key, date)
                else:
                    setattr(acct, key, value)
            acct.save()
    

    Any Django/Python people available? There's GOT to be a better way for me to update a model than this. I know of:
    Account.objects.filter(pk=account_id).update(**data)
    

    However, that does NOT run any of the overridden save methods, which is really bad.
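    One pattern that keeps the overridden save() in play is to funnel the field coercion through a helper and still call save() on the instance. A framework-free sketch (DATE_FORMAT and the field name are assumptions standing in for the settings; this is not a Django API):

```python
from datetime import datetime

DATE_FORMAT = "%Y-%m-%d"  # stand-in for settings.DATE_FORMAT

def coerce(field, value):
    """Parse date strings for the start_date field; pass others through."""
    if field == "start_date" and isinstance(value, str) and value.strip():
        return datetime.strptime(value, DATE_FORMAT)
    return value

def apply_update(obj, data):
    """setattr each field, then save once so an overridden save() still runs."""
    for field, value in data.items():
        setattr(obj, field, coerce(field, value))
    obj.save()
```

    It's still a per-instance save (so no bulk UPDATE), but it centralises the special-casing instead of inlining it at every call site.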

  • Barrakketh Registered User regular
    Is this useful? Your boilerplate looks like exactly what it says it is supposed to help with, but I'm not a Django person.

    Rollers are red, chargers are blue....omae wa mou shindeiru
  • zeeny Registered User regular
    Infidel wrote: »
    Anyone have suggestions for MongoDB-level auditing of operations (modification queries by user/time) that doesn't involve Enterprise edition?

    You are scaring me, man. Are you using MongoDB for information that matters? I've been involved in two projects where I saw with my own eyes MongoDB replicas lose data (both were pre 3.0 days, but still...)
    Also, feedback/recommendations on CDN services that work nice with nginx/express/React? I've had Cloudflare in mind for a while. Something that helps with loading times and bandwidth, and leaving it to mostly be API calls only on our servers.

    Cloudflare is great, but if you are not going to serve static resources with nginx anyway, just ditch it and replace with haproxy. You'll be a happy bunny if you ever hit a scalability/performance issue that you need to debug.
    Also, as far as cdn, if you are already running on aws, cloudfront w/wo S3 is...ok.

  • Infidel Heretic Registered User regular
    zeeny wrote: »
    Infidel wrote: »
    Anyone have suggestions for MongoDB-level auditing of operations (modification queries by user/time) that doesn't involve Enterprise edition?

    You are scaring me, man. Are you using MongoDB for information that matters? I've been involved in two projects where I saw with my own eyes MongoDB replicas lose data (both were pre 3.0 days, but still...)
    Also, feedback/recommendations on CDN services that work nice with nginx/express/React? I've had Cloudflare in mind for a while. Something that helps with loading times and bandwidth, and leaving it to mostly be API calls only on our servers.

    Cloudflare is great, but if you are not going to serve static resources with nginx anyway, just ditch it and replace with haproxy. You'll be a happy bunny if you ever hit a scalability/performance issue that you need to debug.
    Also, as far as cdn, if you are already running on aws, cloudfront w/wo S3 is...ok.

    Will be using nginx to cache/serve statics and such. Mostly it's that I'm familiar with it.

    We are running a "private cloud" colocation.

  • zeeny Registered User regular
    edited July 2016
    Huh? I am confused. Article's interpretation does not match the text of the termination clause at all.

    @Infidel : Roll with Cloudflare then, man. Ease of use is a nice benefit.

    Edit: Wtf wanted to type Cloudflare and it came out as "Cloudfront". I am an aws shill.

  • LD50 Registered User regular
    Yeah, that's not what the quoted text says at all.

  • Mr_Rose 83 Blue Ridge Protects the Holy Registered User regular
    Yeah, the article seems like clickbait FUD; to get your license revoked, you have to initiate a software patent infringement suit against Facebook. It even goes on to say you're cool if Facebook sues you and you countersue.

    ...because dragons are AWESOME! That's why.
    Nintendo Network ID: AzraelRose
    DropBox invite link - get 500MB extra free.
This discussion has been closed.