
[Game Dev] I don't have a publisher. What I do have are a very particular set of skills.


Posts

  • Surfpossum A nonentity trying to preserve the anonymity he so richly deserves. Registered User regular
    edited June 2020
    Peewi wrote: »
    That's neat and I messed around with it for a little bit.

    I had assumed the goal is to reduce it to 0, but I guess that's not allowed?
    I think the system of dragging through additional tiles to combine them is a little confusing and it took me a little while to understand. Maybe some sort of visualization would help.
    I tried dropping additional expressions on the green spaces and I didn't understand why it only worked with the first one. It was not clear to me that I had to drag to the tile in the middle of the green space. This becomes even less intuitive when you have an expression that extends beyond the borders of the tile.
    If you make a very long expression in the yellow area the text becomes tiny and difficult to read. If you make a very long expression elsewhere it extends beyond the screen.

    Subtraction appears to work in two different ways. Example: If I drag x to - and then to y in the yellow area, it gives "-x + y". If I drag x to - and then to a y tile it gives "x + -y".
    Yeah, there are definitely some things to work out. (Also, to be clear, this is still very much just the basic functionality of the thing, not anything close to finished.)

    There's not really a goal right now. The eventual plan is to gain points for reducing expressions to 0 and using points to buy upgrades (more bays, larger hand size, special tiles, etc.) as the expressions get harder.

    I'm not sure yet how I'm gonna handle very long expressions, so I haven't bothered adding the scaling to the tiles; maybe if the player keeps making expressions longer they'll start to take damage or something.

    The operations are gonna be a tricky thing to make intuitive, yeah. Normally it goes "tile you're dragging" + "operator" + "tile you dragged into," but for the target it goes "target" + "operator" + "tile you're dragging." I feel like that's the functionality I want, so I might try to add some kind of visual indicator when you drag stuff to the top that the operations are flipping (probably also gonna change the text on stuff being dragged back to showing all the operations happening instead of the planned result).

    Good call on dropping additional stuff into the green spaces; that's probably gonna get changed when I tear up all my code and rewrite it to standardize how objects are calling each other's functions instead of a mishmash of directly reaching into each other to modify variables.

    Also, while I'm posting, currently known bugs: division will sometimes "eat" tiles when it fails. I think this is due to the aforementioned spaghetti code of objects messing with each other.

    It all makes perfect sense:

    [image: dkxeg5vgpxsl.jpeg]

  • Cornucopiist Registered User regular
    So, the RPG moonshot is working pretty well, and aside from fixing a few issues with water, it's ready for me to start working on city development.
    However, I'm not entirely satisfied with my procedural generation. It depends far too much on storing height data and tile type, then revisiting, changing, and re-storing them.
    So I'm going to dive into another moonshot: repeatable procedural generation based on just the seed, fast enough to generate a big landscape on the edge of the map and detailed tiles as the user travels around.

    That will translate into things like 3D mazes for my Turbinehead game, but also cyberpunk and other sci-fi environments. These would be prohibitively heavy to store, and slow to generate in huge chunks.
    I have learned one major lesson in the last couple of weeks; split off your tech exploration into projects that do only one thing. So, we'll see?

    On the Endless Azure side I've set a release date and done nothing for the marketing. I need to talk to my marketing guy and see if he's still my marketing guy, as he has a *lot* on his plate and he's doing me a favor.

    On the FNS side I've decided to build a vertical slice, then polish that into a demo for release one month after the Endless Azure release. Yesterday I added crashing:

    https://www.youtube.com/watch?v=zAxPgzGWleo

  • Cornucopiist Registered User regular
    So here's my new landscape generator, completely running on Perlin noise. There are no towns or roads yet, and rivers tend to loop uphill, because those are the limits of Perlin noise... I'll have to start looking into vectorial generation and fractals to get towns, roads, and rivers properly working without storing any data (i.e. based solely on position).
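
    The core trick is just that height is a pure function of position. Here's a rough Unity-flavored sketch of that idea (not the actual generator code; the scale, octave, and height constants are arbitrary placeholders):

    using UnityEngine;

    public static class StatelessTerrain
    {
        const float Scale = 0.01f;   // world units -> noise space (assumed value)
        const float HeightMax = 80f; // assumed maximum terrain height

        // Nothing is stored: the same (x, z) always yields the same height, so distant
        // tiles can be generated on demand and thrown away as the player travels.
        public static float HeightAt(float x, float z, int seed)
        {
            // Offset by the seed so different worlds use different slices of the noise.
            float h = Mathf.PerlinNoise(x * Scale + seed, z * Scale + seed);
            // A second, finer octave adds detail without storing any data either.
            h += 0.25f * Mathf.PerlinNoise(x * Scale * 4f + seed, z * Scale * 4f + seed);
            return h / 1.25f * HeightMax;
        }
    }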

    https://youtu.be/tjZJJ8dcczA

  • nerve Registered User regular
    I really like the look of the FNS game.

    Regarding your landscape generator, how are you choosing when to change terrain types (e.g. river, desert, etc.)?

  • Kupi Registered User regular
    Kupi's Weekly Friday Game Dev Status Report: Incomprehensible Psychobabble

    The good news is that I finally put Xenoblade Chronicles to rest; the bad news is that I had an on-call rotation for my day job and then bought Alwa's Legacy in a sort of vain attempt to get my brain back in the pixelart platformer headspace; consequently, this week was rather heavier on the design work than the actual implementation, which is to say that I pretty much got no actual work done at all. You know what that means. My punishment (and yours) is to type up the fruits of that design work.

    So last time I wrote a small novel about the potential "collision resolution domains" approach to keeping collision volume movement coherent; essentially, the idea is to split the mass of collision volumes into distinct regions that provably have no interaction with one another, and then perform the timeline-based collision resolution (re-casting a volume when it's found to interact with a volume that pushes it) in parallel for each region. I had two open problems at the time, of which I only actually reported one: first, that I was having trouble determining by how much a resolution domain ought to expand when combining with another one, and second, that the concept of "collision volume layers" (the same concept as in Unity or Unreal) seriously complicated the problem of determining which resolution domains a combined resolution domain newly overlapped.

    After some fitful attempts at finding reasonable limits by experimentation in a spreadsheet, I have chosen to simply discard the prior method and start over from first principles.

    Part of why I've struggled to find a good way of parallelizing collision detection is that I've always been working in a model in which there is a single inviolate timeline within the frame. This butts up against the core principle of the ECS paradigm, which is that a system can process all of its inputs and transform them independently. When you hold two contradictory principles simultaneously, you get failure! That's just how it works. So, as appealing as the premise of an absolutely clean timeline is, I've decided to see if I can get a "good enough" collision system together that's built on the principle that you should be able to process most collision volumes independently. Because I am fond of giving things titles, I'm calling the new approach "the world of solipsistic bodies".

    Solipsism, as the assuredly cultured and knowledgeable folk of the Penny-Arcade board surely need not be reminded, is the philosophical concept that we are truly alone in the universe-- that, as the consciousness in our heads is the only one that can be fully proven to exist to ourselves, all others are therefore, if not false, at least unproven. It is a lonely philosophy and I don't like it, but it describes what happens in the new approach well enough.

    ANYWAY. In the world of solipsistic bodies, every collision volume lives in its own private universe, observing and interacting only with the shadows of every other volume. So far as any given volume is concerned, every other volume starts at a single fixed position, moves with a single fixed velocity, has a particular push priority, and never changes any of those facts throughout the frame. The volume itself may be pushed by other volumes and bounce off in random directions, but any interaction not involving itself is irrelevant. From the jump you can see how that would enable parallelization, because there's no shared state between any of the volumes except the one declared at the beginning and not updated again until the end of the frame. And I think it's equally obvious how choosing to ignore the interactions between other volumes can produce problems.

    [image: fmxsal3soklx.png]

    In the above image, volume A moves downward at tremendous speed with a push priority of 1; volumes B and C move upward at a lower but equal velocity with a push priority of 2. During this frame, each volume performs its own tests against the shadows of the other volumes and determines the following:

    A: Collides with B and gets pushed, negating its velocity and moving upward.
    B: Collides with A and pushes it, continuing to move upward.
    C: Collides with A and pushes it, continuing to move upward.

    C has a record of an observed collision that it shouldn't have, because A gets pushed by B. That's clearly an issue, because if e.g. volume C is a crumble platform that breaks when it comes into contact with something, it'll break at the wrong time. To which I respond: one can easily work around that by having that logic driven by things more likely to be pushed. In other words, rather than having a component that triggers the platform/block/entity to crumble on contact with any body, that component would expose a data member that would be tested for and modified by a CausesToCrumbleOnImpact component on the character/bullet/more mobile entity, and the fact of a spurious collision being registered by C becomes irrelevant.

    More important is the problem of second-order effects.

    [image: fblr96bhldjs.png]

    In this example, volume A moves to the right, with a higher push priority than either B or C (which are both dynamic volumes, just with a zero velocity). The expected behavior is that on impact with B, B will gain the velocity and push priority of A, consequently colliding with and pushing volume C. However, note that A doesn't have enough velocity to impact C directly this frame. So when C tests for collisions, it detects no collision with A or B. At the end of the frame, B is left overlapping C. Depending on the conditions, that overlap could be significant and the depenetration phase will emit a crush when the result should have been a normal push.

    The answer to this problem is, I think, to accept that some imprecision is built into the design of the system. We can't have both a stable timeline and parallel processing, and this design favors parallel processing. Moreover, it's possible to push this edge case further into the margins with the next major component of the design-- carrying push velocities and push priority promotion across frames.

    Here's the idea: any time a volume detects a push interaction in its private universe, it emits the fact of the push interaction, without actually updating the velocity or push priority of the other volume involved. At the end of the frame, all such push interactions reported are set-intersected and assigned to the volumes involved. So, in the most recent example, B would have a list with one interaction, stating that it is being pushed by volume A.

    At the start of the next frame (when the system is calculating the position and velocity of each volume to be used as the "shadows"), B's velocity is its "native" velocity (the one taken from its velocity component) plus A's native velocity, and its push priority is promoted to A's. Then, for each volume that was pushing the volume, the system performs another collision test between the two with the pushing volume's native velocity removed from the pushed velocity-- essentially, asking "if this volume didn't have the added velocity from being pushed, would these volumes collide again?" If not, then the pusher-pushee relationship is removed between the two and will cease to apply the next frame.

    Though this wouldn't stop volumes B and C from overlapping one another on the first frame, on every subsequent frame, C would observe that B had the promoted push priority from being pushed by volume A and treat volume B as a moving volume capable of pushing it (because B got the additional velocity from A despite having a native velocity of zero). And in the vast majority of cases (I think...), you won't have volumes that are absolutely flush with one another; there's time for the pushed-by relationship to propagate through the chain of volumes.
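
    To make that concrete, here's a very rough sketch of the frame-boundary bookkeeping (illustrative only, not the real system; the class, its members, and the stubbed collision test are all made up):

    using System.Collections.Generic;
    using UnityEngine;

    sealed class Volume
    {
        public Vector2 NativeVelocity;      // from the velocity component
        public Vector2 CarriedPushVelocity; // contributed by last frame's pushers
        public int NativePriority;
        public int PromotedPriority;
        public readonly List<Volume> Pushers = new List<Volume>();

        // What every *other* volume sees this frame: a fixed shadow, never mutated mid-frame.
        public Vector2 ShadowVelocity => NativeVelocity + CarriedPushVelocity;

        public void BeginFrame()
        {
            CarriedPushVelocity = Vector2.zero;
            PromotedPriority = NativePriority;
            foreach (var p in Pushers)
            {
                CarriedPushVelocity += p.NativeVelocity;
                PromotedPriority = Mathf.Max(PromotedPriority, p.PromotedPriority);
            }
            // If a pusher would no longer hit us without the velocity it contributed,
            // the pusher-pushee relationship expires and stops applying next frame.
            Pushers.RemoveAll(p => !StillColliding(this, p));
        }

        // Hypothetical stand-in for the real shape-cast test, performed with the
        // pusher's contribution removed from this volume's velocity.
        static bool StillColliding(Volume pushed, Volume pusher) => true;
    }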

    Running it through in my head, it looks like that adequately causes second-, third-, etc-order effects to propagate between volumes with only a mild need for the depenetration phase to correct volume positions. And all the volumes can resolve in parallel, even if some of them will be duplicating work by testing the same interaction from different perspectives. Furthermore, since each volume will independently query the broadphase system for which volumes it should test against in a particular region of space, the broadphase system no longer has to maintain overlap counts-- which means less time spent in a code region that's already got some heavy locking contention on it.

    There is, however, one more problem. Going back to the first example, notice that volume C registers itself as a pusher of volume A because of the spurious collision detected. That means that A will inherit the velocity of C on the next frame, floating off of the surface of B by C's velocity. My hope is that, similar to the way it takes a frame for second-order effects to propagate but the results are likely good enough, the fact that in most cases a character won't be moving six times its width in a single frame means whatever imprecision actually exists should be less observable. And if something terrible does happen because of it? Well, fodder for the speedrunners, I guess. Or I just rewrite it again. :lol:

    Seriously, going to set aside some time to get rolling on the world of solipsistic bodies approach this weekend.

    My favorite musical instrument is the air-raid siren.
  • Peewi Registered User regular
    From the stuff I can remember reading about collision detection years ago, the common responses to moving an object more than its size in a single frame are "just don't do that" and if you really wanna do it, split it into multiple smaller steps.
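
    A minimal illustration of that "smaller steps" idea, independent of any particular engine (Body and its MoveAndCollide method are made-up stand-ins): cap each substep at the body's own size so a fast mover can't tunnel through thin geometry.

    using UnityEngine;

    public class Body
    {
        public float Size = 16f;  // e.g. a 16-pixel sprite
        public Vector2 Position;

        // Hypothetical: move by delta, return false if this step hit something.
        public bool MoveAndCollide(Vector2 delta) { Position += delta; return true; }
    }

    public static class SweptMove
    {
        public static void MoveWithSubsteps(Body body, Vector2 velocity, float dt)
        {
            Vector2 total = velocity * dt;
            int steps = Mathf.Max(1, Mathf.CeilToInt(total.magnitude / body.Size));
            Vector2 step = total / steps;
            for (int i = 0; i < steps; i++)
            {
                if (!body.MoveAndCollide(step)) // stop early on impact
                    break;
            }
        }
    }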

  • Kupi Registered User regular
    Peewi wrote: »
    From the stuff I can remember reading about collision detection years ago, the common responses to moving an object more than its size in a single frame are "just don't do that" and if you really wanna do it, split it into multiple smaller steps.

    Yeah... for fun/out of boredom once I set up a thing that would just shift a sprite around at variable velocity, and I found that at 60 FPS, anything above 16 pixels in one frame was pretty terrifically fast, and most characters in a pixel game are going to be around 32x32 at least.

    My favorite musical instrument is the air-raid siren.
  • Cornucopiist Registered User regular
    Kupi wrote: »
    Peewi wrote: »
    From the stuff I can remember reading about collision detection years ago, the common responses to moving an object more than its size in a single frame are "just don't do that" and if you really wanna do it, split it into multiple smaller steps.

    Yeah... for fun/out of boredom once I set up a thing that would just shift a sprite around at variable velocity, and I found that at 60 FPS, anything above 16 pixels in one frame was pretty terrifically fast, and most characters in a pixel game are going to be around 32x32 at least.

    TBH when I thought about this stuff for Turbinehead, making object-shaped prisms the length of each object's travel seemed to be how I'd do it...

  • Cornucopiist Registered User regular
    nerve wrote: »
    I really like the look of the FNS game.

    Regarding your landscape generator, how are you choosing when to change terrain types (e.g. river, desert, etc.)?

    So, for the previous incarnation I did terrain types by height. I then added river sources at random spots and ran those downhill, with a simple compass-rose walk if they got stuck in a depression. This made for nice lakes, but I would have set a 'water' level next.

    For the ongoing project I will create a Perlin landscape with a certain smoothness. This means I can poll it once per square, at points that are one square-side apart, with the poll point at each square's center. For the square under test I'll poll a set of five adjacent squares (the four compass neighbours plus the center), and do this for each of four central points themselves arranged compass-wise around the square under test.
    That gives me, for the square under test, its lowest edge, and for each edge whether or not it's also the lowest edge of the adjacent square. Water starts at the central points and runs to the center of the lowest edge. So for each square, I know which edge is receiving water, and I can draw the rivers that result from water passing over the lowest edge as well as from the central source. Height gives an assumption of the amount of water received, and I can overlay a Perlin wetness map so I can cull rivers that would be too dry...
    Depressions will be lakes, and in flat areas I will switch to Perlin to create meandering rivers based on height and wetness.
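
    A rough sketch of how that lowest-edge test might look (this is just one reading of the description, not the actual code; the square size, noise scale, and names are assumed):

    using UnityEngine;

    public static class RiverRouting
    {
        // Position-only height, same idea as the stateless sampler: no stored data.
        static float HeightAt(float x, float z) => Mathf.PerlinNoise(x * 0.01f, z * 0.01f);

        // Returns the direction of the lowest edge of square (sx, sz),
        // or zero for a depression (which becomes a lake).
        public static Vector2Int LowestEdge(int sx, int sz, float squareSide)
        {
            Vector2Int[] compass = { new Vector2Int(1, 0), new Vector2Int(-1, 0),
                                     new Vector2Int(0, 1), new Vector2Int(0, -1) };
            float lowest = HeightAt(sx * squareSide, sz * squareSide);
            Vector2Int flowsTo = Vector2Int.zero;
            foreach (var dir in compass)
            {
                float h = HeightAt((sx + dir.x) * squareSide, (sz + dir.y) * squareSide);
                if (h < lowest) { lowest = h; flowsTo = dir; }
            }
            return flowsTo;
        }
    }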

    After, that is, I work on the games that are first in line for release... no sir, no procrastination nor cherry picking here...

  • Lilnoobs Alpha Queue Registered User regular
    edited June 2020
    Well, the ARPG Dungeon Crawler thing I've been working on got accepted to the Unity Store, so I feel like that is something!

    https://assetstore.unity.com/packages/tools/integration/arpg-attributes-items-skills-169394

    If anyone's interested, I have free vouchers I can use to give the package away to others; feel free to DM me for one.

    While I was waiting out Unity's current 12+ business day review period, I've separated out some of the more common systems and created "No Frills" packages for Loot, Attributes, Equipment, and Inventory (& Items), partly to monetize what I've learned from the experience, and partly to reflect on the experience. These will be $5 packages that just do that one thing and allow others to incorporate it into their own project.

    They are under review, but the documentation is here:
    For the "No Frills", the systems were built with the limitation that prefabs would be the only common external resource that I can expect. This changed some of their designs from the ARPG, which had a more controlled context so I could make a few more assumptions about things. It was an interesting way to re-contextualize the scripts while also reflecting on what I previously learned.

    There's also that recent Unity Fantasy Bundle that I bought that I need to go through, and then there's also this weird Voxel Claymation that I think will work well with some other thing I've been working on.

    So much to do, so much to explore! I just hate that all of it is in front of a screen and sitting. Sigh.

  • Peewi Registered User regular
    I replaced all my semi-transparent rectangles with stuff from a pixel UI pack and I think it's made my menus look a lot nicer.
    [image: RSBIXOa.png]
    [image: QSpco04.png]

    Since posting last, I've also added the ability to set multiple keybinds to a single action and the extremely luxurious feature of saving and loading the options and keybinds.

    I tried using a pixel font, but somehow it looked absolutely awful when scaling it. I think ideally I wouldn't use MonoGame's spritefont system at all, because it's not very good.

  • Cornucopiist Registered User regular
    Quickly popping in for a pic of the current state of the river generation. This version only creates downhill valleys. It doesn't link to valleys coming from upstream.

    [image: ufntxljv6svb.jpg]

  • Cornucopiist Registered User regular
    And the final version (for now): [image: 09h2t7rx89rx.jpg]

  • Surfpossum A nonentity trying to preserve the anonymity he so richly deserves. Registered User regular
    edited June 2020
    Gonna post another update here since I'm enjoying the sort of log it creates.

    [image: c6a2dfbgxw3q.png]
    Surfpossum wrote: »
    My "game" is now officially a game, because there is now a number that goes up whenever you clear out an expression and that's what makes something a video game.
    It's still nowhere near done, of course, but I also tore large chunks of my code apart and rebuilt it so that objects interact by politely calling functions instead of reaching into each others' guts and rearranging things and added a quick and dirty screenshake and some other stuff but the most important thing is numbers going up.

    http://surfpossum.com/applets/dragAndDrop/index.html (mobile res)

    http://surfpossum.com/applets/dragAndDrop/index2.html (desktop res)

    (this is in the readme but drag the tiles on the bottom into an operation and then onto the yellow target or another tile, or drop whatever you're dragging into [an empty] green bay to store it or onto the red trash can to get rid of it)

    I did a fair amount of reworking under the hood, reorganizing things so that objects that get dragged through call functions on the dragged object instead of editing variables directly. This does mean I have had an explosion of functions but oh well. It makes it so much easier to track down unexpected behavior when you know that X thing happens in the X function.

    I've gotten started on intelligent target generation; it's got a ways to go still, but at least the basic framework is in place. And now there's a score counter that goes up and generates a new target when you reduce a target to zero!

    I thiiiiiiink it's pretty bug free currently? At least when it comes to the math. Everything seems to fail when it should and complete when it can. Also fixed a bunch of bugs related to the text on the tiles.

    I also added a rough screen shake which currently just moves all the children of the main canvas around a bit.

    And finally, I've added the ability to "flick" tiles; if the tile you're dragging already has an operation then it'll try to perform the operation on the target, otherwise it'll "pick up" the first operation it hits.

    Still on the to do list:
    - enable dropping tiles with an operation into a full bay instead of requiring that they be dragged onto the tile in the bay
    - make the intelligent target generation more intelligenter
    - add a term limit to bays (to prevent overly long expressions; I think this is gonna be better than text scaling or whatever)
    - add energy that drains with operations so that stuff doesn't just go on forever
    - add a deck of tiles instead of generating random ones (the inspiration for this game is Solitairica, btw)
    - enable trashing of targets
    - make sure that flicking takes into account screen size since I just discovered that resolutions were having a large impact

  • Kupi Registered User regular
    Kupi's Weekly Friday Game Dev Status Report

    With the help of two extra days tacked onto the weekend, I have successfully gotten my new collision detection system to the point where it passes a unit test! A unit test! The absolute simplest possible one where a single moving circle bumps into a non-moving square. But hey, it's a start. I can keep going and confirm further things later, but for now, that's where I am.

    Early on in the week, I realized something-- my proposed design from last week was actually way more complicated than it needed to be! The fundamental principle was that every collision volume would essentially live in its own universe, responding to the movement of all other collision volumes as if they themselves had no interactions in the same frame. The "shadows" of the other volumes would move with a velocity based on which other volumes pushed them in the last frame. I dithered on this issue for a while-- how do you register and compile the final shadow velocity? There are second-and third-order effects, and the way that velocity transfers among volumes depends on the surface normal of the impact. But then I realized-- ah, yes, of course there's someone who knows exactly how all the push forces transferred last frame-- the volume itself! During the last frame! So now, instead of registering which volumes pushed it last frame and then trying to work backward from that to determine what velocity it ought to have, the volume just carries whatever its velocity and promoted push priority was into the next frame as a separate variable.

    Plans for this week are to more thoroughly test the new collision detection system, apply it to the stress test (which won't be a fair appraisal since I'm now separating "bodies" from "triggers" and I haven't reimplemented the "triggers" half of that), and then... if I can get the Muse to cooperate, do some art practice.

    My favorite musical instrument is the air-raid siren.
  • Peewi Registered User regular
    Kupi wrote: »
    Kupi's Weekly Friday Game Dev Status Report

    I don't know if you've posted about it before, but are you designing this collision detection for a particular kind of game or making a general purpose system? Just curious.

  • Kupi Registered User regular
    Peewi wrote: »
    Kupi wrote: »
    Kupi's Weekly Friday Game Dev Status Report

    I don't know if you've posted about it before, but are you designing this collision detection for a particular kind of game or making a general purpose system? Just curious.

    My current target is a Sonic-like platform game, which is (in part) why I've been rolling my own collision-- it's exactly the kind of game where "moving at really terrific velocity in one frame" can become a concern where more general-purpose physics implementations can fall over. More generally, I've always had, like, three or four different platform game premises in my back pocket, one of which is a kind of bullet hell crossed with a Metroidvania, so having a launchpad for all of them to work off of feels like a good idea (even though, yes, received wisdom is you should only write code for your current game).

    Plus, I've had this particular physics system / shape-casting library in various stages of completion for years, so I'm also operating under a certain cussed determination to make it part of a finished product. :lol:

    My favorite musical instrument is the air-raid siren.
  • Peewi Registered User regular
    I did a graphics


    I've added a basic death animation, camera kick when shooting and a particle trail behind the bullets.

    It's still an extremely basic game, but I think this makes it a bit nicer.

  • templewulf The Team Chump USA Registered User regular
    Peewi wrote: »
    I did a graphics


    I've added a basic death animation, camera kick when shooting and a particle trail behind the bullets.

    It's still an extremely basic game, but I think this makes it a bit nicer.

    On the last game jam I was in, our game wasn't the most ambitious, but it had a start screen, game over, even credits! I was weirdly proud of how complete a product it was

    Twitch.tv/FiercePunchStudios | PSN | Steam | Discord | SFV CFN: templewulf
  • Peewi Registered User regular
    Ooooh, I should make a credits screen too.

    I signed up for the GMTK Jam even though I'm not sure I'd be able to make much in a weekend.

  • Rend Registered User regular
    Peewi wrote: »
    Ooooh, I should make a credits screen too.

    I signed up for the GMTK Jam even though I'm not sure I'd be able to make much in a weekend.

    Hey so did I!

    I’m really pumped for it tbh. It’s been a while since I stretched my game dev muscles

  • templewulf The Team Chump USA Registered User regular
    Peewi wrote: »
    Ooooh, I should make a credits screen too.

    I signed up for the GMTK Jam even though I'm not sure I'd be able to make much in a weekend.

    Honestly, game jams seem to be less about your game making skills and more about your content cutting skills!

    Twitch.tv/FiercePunchStudios | PSN | Steam | Discord | SFV CFN: templewulf
  • nerve Registered User regular
    Hi, everyone. I'm currently working on remaking my character model and once I finish I will update my character animations. I am thinking of making a few abilities like a Leap Attack and maybe a Dash Attack. When I create these animations in Blender, should I keep the character in place at the origin or should I actually animate the displacement?

  • templewulf The Team Chump USA Registered User regular
    edited July 2020
    nerve wrote: »
    Hi, everyone. I'm currently working on remaking my character model and once I finish I will update my character animations. I am thinking of making a few abilities like a Leap Attack and maybe a Dash Attack. When I create these animations in Blender, should I keep the character in place at the origin or should I actually animate the displacement?

    The general best practice is to keep them in place and move in code in the game engine.

    However, it is possible to move through animation using Root Motion in Unity or the equivalent in your engine. This lets you line up movement more naturally with, say, foot placement, but it's harder to adjust later. If you find you need to change it for game balance, it's a lot harder to test out on the fly if you have to re-export your animations.
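
    For reference, here's roughly what the two options look like in Unity (a hedged sketch; the component, field, and state names are just examples, not anything from a real project):

    using UnityEngine;

    [RequireComponent(typeof(Animator), typeof(CharacterController))]
    public class LeapAttack : MonoBehaviour
    {
        public float leapSpeed = 6f;
        public bool useRootMotion; // option B: the animation's root bone drives movement

        Animator animator;
        CharacterController controller;

        void Awake()
        {
            animator = GetComponent<Animator>();
            controller = GetComponent<CharacterController>();
            animator.applyRootMotion = useRootMotion;
        }

        void Update()
        {
            // Option A: the animation stays at the origin and displacement is applied
            // in code, so leap distance can be tuned without re-exporting from Blender.
            if (!useRootMotion && IsLeaping())
                controller.Move(transform.forward * leapSpeed * Time.deltaTime);
        }

        bool IsLeaping()
        {
            // Hypothetical check against the animator state.
            return animator.GetCurrentAnimatorStateInfo(0).IsName("LeapAttack");
        }
    }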

    Figure out whether that workload is worth it to you, and that'll be your answer.

    Edit: I should also say that I'm comfortable enough with Unity to make root motion work, but depending on engine and your familiarity, it can be a huge headache to troubleshoot

    Twitch.tv/FiercePunchStudios | PSN | Steam | Discord | SFV CFN: templewulf
  • Lilnoobs Alpha Queue Registered User regular
    edited July 2020
    Time to decompress some thoughts.

    For the last 2 weeks I've been thinking on how to create an "Aura" system, basically a status change that exists on or around an actor.

    Taking inspiration from Diablo and other similar games, I gave myself some design goals.
    1. I wanted something that allows one-time status changes (increase max hp, e.g.)
    2. and allows for 'pulsing' or continuous buffs (regen 3hp/sec, e.g.)
    3. and allows for AOE (Area of Effect) Auras
    4. and allows Auras to stack (or not)
    5. and can persist through scenes
    6. and exists within Unity's environment.

    At first, I thought, well auras are really just data. What is an aura but just modifiers? And what are modifiers but just numbers! so let's go with an all Scriptable Object approach! Wohoo, all aboard the hype train! Woo! Woo!

    This worked fine for the one-time changes, but I encountered inconveniences when it came to 'pulsing' and AOE Auras. For those that don't work within Unity, Scriptable Objects (SO) don't have a native way to access Unity's Update() method, the per-frame callback. To get what amounts to a timer, I would need access to that in some fashion, and hooking it up to a Scriptable Object would be ugly and probably defeat an SO's design purpose.

    From here, I went with a hybrid approach between SO's and Unity's prefab system (gameobjects). The Scriptable Object maintains the data of the Aura (reference to prefab, aura name, description, unique ID, etc.) and the prefab holds a class that performs the behavior of the Aura, because really Auras are more than just data! This was my big learning moment that proved especially true when implementing the AOE portion of the Auras. How can they have AOE if an instance of it doesn't exist? How can it pulse if it doesn't change over time?

    The final design ended up being 1) a Scriptable Object that defines an "Aura", 2) a Scriptable Object that handles applying/removing the Aura (let's call it the Aura Handler for now...), and 3) a Mono class that performs the Aura's basic 'behavior'. The Aura Handler ended up being a Scriptable Object because while Auras themselves might need an instance to work, I wanted the data to persist between scenes so when traveling from one to the next, it knows which Aura is currently active (and thus can re-activate it on scene load). It'll make saving states much easier for anyone wanting to create a save file.

    So, as far as I understand what I coded, the Aura Handler decides if an Aura can be applied or removed. Upon applying, it merely instantiates the prefab and tracks its existence. Upon removal, it erases its existence. The prefab itself controls the behavior of the Aura, applying its effects on Start(), using Update() for pulsing and timers. This, to me, is exciting because it feels nice.
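
    In sketch form (illustrative only, not the actual package code; every name here is a placeholder), the shape is roughly: the SO holds the aura's data, and the instantiated prefab's MonoBehaviour does the work, applying one-time effects in Start() and pulsing in Update().

    using UnityEngine;

    [CreateAssetMenu(menuName = "Auras/Aura Definition")]
    public class AuraDefinition : ScriptableObject
    {
        public string auraName;
        public GameObject behaviourPrefab; // carries an AuraBehaviour
        public float pulseInterval = 1f;
        public int healthPerPulse = 3;
    }

    public class AuraBehaviour : MonoBehaviour
    {
        public AuraDefinition definition;
        float nextPulse;

        void Start()
        {
            // One-time effects (e.g. +max HP) would be applied here.
        }

        void Update()
        {
            // Pulsing effects get their timer from Update, which an SO can't access directly.
            if (Time.time >= nextPulse)
            {
                nextPulse = Time.time + definition.pulseInterval;
                // apply healthPerPulse to valid targets in range,
                // e.g. found via Physics.OverlapSphere
            }
        }
    }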

    Now, the other problem I encountered had to do with the AOE Auras. It's pretty simple to send out checks to see if a valid target is in range, and this was accomplished using Unity's physics casts. Basically, at the defined "pulse", the Aura uses a physics cast to detect nearby objects and checks if they are valid. If so, apply the Aura's buff. But what if that other thing decides to move out of the radius of the Aura? How would the Aura know this? How could it? Should it even?

    Let me elaborate. It's not a problem for 'pulsing' Auras. Those Auras apply their effect at certain intervals over and over, but what about the one-time effects like increasing max hp? It only 'applies' once, and I need to remove its application when out of range. But things can move! So an object could move within the radius while the Aura is active, but after it's already initiated its "Apply" phase! And an object could move out of range after receiving a buff, but how would the Aura know? A conundrum!

    This led to me creating a class that in many ways acts as a simple tag. The class is added to a gameobject during the AOE phase, and it merely tracks who added it and which aura. This allowed me to create a class that checks distances, so now the thing that is affected by an AOE knows if it's out of range and can send the appropriate signals when it is. This amounts to the Aura saying 'hey, you're inside' and then the thing that is tagged replying back "hey, not anymore!". There's potential here to give that responsibility to the aura mono itself: it would save what's been tagged and check the distances instead of the object itself checking the distances. This is nice in that it potentially saves on Update calls and also removes the need for adding components at runtime; both are persuasive arguments to change the responsibility here, but I need to think on it more.

    The last thing to mention is that the Auras themselves work within their closed ecosystem, but how would others extend it so the Auras can change whatever they want in their own games? For this, as is my solution to other projects, mostly because I have an irrational disdain for events and partly because I like the color scheme of interfaces in visual studio, I created an interface people can slap on to whatever. The auras all interact with that interface, so that's the connecting link.
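
    Something along these lines (the interface name and methods are just illustrative guesses, not the package's real API): whatever the aura needs to change in the host game sits behind the interface, and the aura code only ever talks to that.

    public interface IAuraReceiver
    {
        void ApplyModifier(string stat, float amount);
        void RemoveModifier(string stat, float amount);
    }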

    And after fixing a few problems here and there, it's working. The auras stack, the auras tag or remove, they pulse, and all the rest! I haven't checked scene persistence, but maybe later. I'll think on it some more in the coming weeks as I look it over to start to write the documentation for it, which typically helps me find any redundancies or ways to simplify or change.

    Auras proved a bit more difficult than I anticipated, but I think I have a better understanding of them now. Next step will be Combat Skills, which I think will be simpler in some ways because, hey, we're just modifying numbers once the skill starts and removing those modifiers once the skill ends, right?



  • Cornucopiist Registered User regular
    Lilnoobs wrote: »
    Time to decompress some thoughts.

    The final design ended up being 1) a Scriptable Object that defines an "Aura", 2) a Scriptable Object that handles applying/removing the Aura (let's call it the Aura Handler for now...), and 3) a Mono class that performs the Aura's basic 'behavior'. The Aura Handler ended up being a Scriptable Object because while Auras themselves might need an instance to work, I wanted the data to persist between scenes so when traveling from one to the next, it knows which Aura is currently active (and thus can re-activate it on scene load). It'll make saving states much easier for anyone wanting to create a save file.

    I'd create a persistent Mono with a list of auras, each with some data (perhaps a timer) and the objects they apply to. Depending on what the rest of the game is like, other data would also be in the same Mono. Then, to keep things a bit tidy, I've started using separate Monos with the logic: in this case, on Update, perform the logic if needed (if timer == 0, check distances and buff). Buff would have its own mono pointed at a list of gameobjects (creatures) etc.
    You have to think about race conditions a bit, but otherwise I wouldn't have logic scripts on the gameobjects themselves unless they are super simple movers or such.

  • Lilnoobs Alpha Queue Registered User regular
    edited July 2020
    Cornucopiist wrote: »
    Lilnoobs wrote: »
    Time to decompress some thoughts.

    The final design ended up being 1) a Scriptable Object that defines an "Aura", 2) a Scriptable Object that handles applying/removing the Aura (let's call it the Aura Handler for now...), and 3) a Mono class that performs the Aura's basic 'behavior'. The Aura Handler ended up being a Scriptable Object because while Auras themselves might need an instance to work, I wanted the data to persist between scenes so when traveling from one to the next, it knows which Aura is currently active (and thus can re-activate it on scene load). It'll make saving states much easier for anyone wanting to create a save file.

    I'd create a persistent Mono with a list of auras, each with some data (perhaps a timer) and the objects they apply to. Depending on what the rest of the game is like, other data would also be in the same Mono. Then, to keep things a bit tidy, I've started using separate Monos with the logic: in this case, on Update, perform the logic if needed (if timer == 0, check distances and buff). Buff would have its own mono pointed at a list of gameobjects (creatures) etc.
    You have to think about race conditions a bit, but otherwise I wouldn't have logic scripts on the gameobjects themselves unless they are super simple movers or such.

    Ah, sorry! I forgot to mention, this is part of a series of "No Frills" packages I'm making, standalone Unity packages that allow you to plug in to your already existing system and work. What you mention about a singleton controlling race conditions is completely right, and something I even did in my complete ARPG Package.

    Perhaps exposing the methods run in Update in an interface and creating a script that "ticks" for the aura will show how people can add the aura ticking to their own game manager or w/e is controlling their order, so they can avoid it in Update(). or they can keep it. For this package, I'm okay with the buyers having to solve their own race conditions if that becomes a thing, instead of me solving it for them based on unknown context because these packages aren't priced high enough for me to put that much effort into it ;-) (and if they want it all solved, buy the complete thing instead of the pieces).

    I removed the script that tags others, and now the instance of the aura itself tracks what it has tagged and its distances; this means if the aura instance doesn't exist, then the buffs/debuffs don't exist and that data is also erased. That's something I'm okay with. The persistent holder still knows what aura should be on the actor (so then we can ask the dictionary which aura is on and re-make it if things go awry).

  • Peewi Registered User regular
    I've been hanging out in the GMTK Jam discord and there have been jokes about the theme being "shoot to move". For fun I converted my game into a shoot to move game.


    It was surprisingly easy and took less than 20 minutes.

  • Cornucopiist Registered User regular
    Lilnoobs wrote: »
    Ah, sorry! I forgot to mention, this is part of a series of "No Frills" packages I'm making, standalone Unity packages that allow you to plug in to your already existing system and work. What you mention about a singleton controlling race conditions is completely right, and something I even did in my complete ARPG Package.

    I'm eyeing that ARPG package for a pickup after summer (or after my FNS demo release).

  • Lilnoobs Alpha Queue Registered User regular
    Cornucopiist wrote: »
    Lilnoobs wrote: »
    Ah, sorry! I forgot to mention, this is part of a series of "No Frills" packages I'm making, standalone Unity packages that allow you to plug in to your already existing system and work. What you mention about a singleton controlling race conditions is completely right, and something I even did in my complete ARPG Package.

    I'm eyeing that ARPG package for a pickup after summer (or after my FNS demo release).

    Feel free to DM me for a voucher!

  • Cornucopiist Registered User regular
    Lilnoobs wrote: »
    Lilnoobs wrote: »
    Ah, sorry! I forgot to mention, this is part of a series of "No Frills" packages I'm making, standalone Unity packages that allow you to plug in to your already existing system and work. What you mention about a singleton controlling race conditions is completely right, and something I even did in my complete ARPG Package.

    I'm eyeing that ARPG package for a pickup after summer (or after my FNS demo release).

    Feel free to DM me for a voucher!

    Nah, I'll fork over the cash, but if I get it earlier there's no way I'll work on anything close to shipping.

    Speaking of, not sure this is allowed, but if anyone in this thread has an iPhone they can DM me for a TestFlight invite for #EndlessAzure. I'll hammer out the official mail to Tube later (later, the time when all marketing activities are planned, about when we get commercial nuclear fusion).

  • nerve Registered User regular
    edited July 2020
    Would you guys say that the graphical look of Corepunk (specifically the lighting and fog) can be achieved in URP or should I stick with HDRP ?

    Here is an example of what I'm looking at:
    https://corepunk.com/en/assets/images/screenshots/02.webp

  • Cornucopiist Registered User regular
    nerve wrote: »
    Would you guys say that the graphical look of Corepunk (specifically the lighting and fog) can be achieved in URP or should I stick with HDRP ?

    Here is an example of what I'm looking at:
    https://corepunk.com/en/assets/images/screenshots/02.webp

    What's your target platform, are you a solo dev?... lots of questions to be answered. A lot depends on how your game works.

    For FNS I was on LWRP (now URP) for a while because I target mobile. I retreated to the standard shader because there were too many issues with the LWRP I wasn't getting solutions for. (Mostly to do with big meshes and lighting, but also complicated by using entities).
    For Endless Azure I switched in the opposite direction to use ShaderGraph to get nicer water.

    I took one look at the HDRP and decided that it's not for me. I *think* you can manage what you want with the URP, but I am pretty sure you don't need the HDRP.

  • Kupi Registered User regular
    Kupi's Weekly Friday Game Dev Status Report

    After struggling with it for two (three?) weeks, I finally got my new hyper-parallelized collision detection system into a state where it could replace the old one, and put it into production in the stress test.

    ... it massively underperformed compared to the existing implementation, unable to post a stable framerate even on the "easy" setting.

    But! The story does not end there, because while I had every suspicion that it might actually have some kind of structural issue that I wouldn't be able to see until this point, I had to know why it failed.

    <Technobabble Ensues> The first thing that popped up on the CPU profiler was that the depenetration phase was taking three times as long as anything else. I had revised the broadphase system (the one that says "hey, very vaguely, you should test these volumes for collision") in such a way that it no longer actually tracked overlap pairs itself, because most of the time the moving volumes just query for potential overlaps in their own private universe. That meant that depenetration was doing a lot more work on a single thread. Rather than restore the previous functionality in order to make the depenetration phase work, I noticed that even the old implementation had the same problem that I was currently running into; as long as an interpenetration occurs, it's going to do a whole bunch of miserable work involving intersecting multiple hash-sets. So I questioned the premise and arrived at the conclusion that depenetration can be rolled into the regular work of moving volumes around-- if a volume detects that a collision occurs at t=0 in the frame, then those interactions get special treatment.

    However, even after just removing depenetration checks entirely, it still wasn't clearing the bar. The Collides() method that checks if two collision volumes collide was the culprit now, which made sense on its face but still left me suspicious. See, the new design admittedly has some duplication of effort involved-- when two dynamic (moving, updated-every-frame) volumes collide, they'll both individually perform that collision check from their own perspectives. But in the new design, the stress test doesn't actually have any dynamic-body collision in it! The "ants" collide with "blocks", but an ant never checks against an ant. So, if anything, there ought to be fewer calls to Collides(), not more. Acting on that premise, I set up a static variable that incremented every time Collides() was called, and tracked how many times it was called per frame. In the old system, there were about 600 Collides() calls per frame under the settings I was using, and in the new system, there were about 4,200 of them!

    Something was clearly amiss.

    Much flailing for answers and re-reading the new and old code later, I found the surprisingly simple solution: because of some historical reasons, the bounding box check is expected to occur before the call to Collides(). It turns out that if two volumes were even vaguely in the same area, they were performing high-precision tests. After reinstating the bounding box check, it went down to about 1,100 Collides() calls per frame. Still much higher than expected, and for that matter, the ants were bouncing strangely, like they couldn't even see particular blocks from certain angles. I suspected that removing depenetration was what had caused it, but in fact I just had a frame-of-reference error when performing the bounding box check. After fixing that, the Collides() calls per frame in the new system went down to around 170-- much less than the original system, and finally within the expected range.
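
    For anyone curious, the shape of that check is roughly this (a sketch of the idea, not the actual code; every name here is made up, and the real system sweeps shapes rather than using static boxes): a cheap bounding-box rejection runs first so only pairs whose swept boxes overlap pay for the high-precision Collides() test.

    struct Aabb
    {
        public float MinX, MinY, MaxX, MaxY;

        public bool Overlaps(Aabb other) =>
            MinX <= other.MaxX && MaxX >= other.MinX &&
            MinY <= other.MaxY && MaxY >= other.MinY;
    }

    class Volume
    {
        // Bounding box expanded along the volume's movement for the frame,
        // computed in the same frame of reference for every volume.
        public Aabb SweptBounds;
    }

    static class NarrowPhase
    {
        public static bool TestPair(Volume a, Volume b)
        {
            if (!a.SweptBounds.Overlaps(b.SweptBounds))
                return false;          // the vast majority of pairs stop here
            return Collides(a, b);     // high-precision test (stubbed below)
        }

        static bool Collides(Volume a, Volume b) => true; // stand-in for the real shape cast
    }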

    So now the new collision detection system is outperforming the old one. While it always outperforms the old one, it particularly excels when adding static volumes. The new system performs identically for my "mostly static" test cases of 20,000 static volumes and 100,000 static volumes with 100 dynamic volumes each, which demonstrates that the new system scales with dynamic objects rather than total objects. That means levels can get a lot larger in terms of terrain.

    Of course, all of my performance benchmarks right now aren't necessarily comparing apples to apples. In the old system, a volume could be both a "body" and a "trigger" at the same time, and the old system was burdened by having to check for collisions between ants. The new system is just doing body collisions between ants and tiles, so of course it's faster. So the next step is to create the trigger/hitbox collision system and add that in to see how it compares.

    And if the new system is still outperforming the old one by the time all's said and done? I ain't touching this collision detection system again. :lol:

    My favorite musical instrument is the air-raid siren.
  • nerve Registered User regular
    Cornucopiist wrote: »
    nerve wrote: »
    Would you guys say that the graphical look of Corepunk (specifically the lighting and fog) can be achieved in URP or should I stick with HDRP ?

    Here is an example of what I'm looking at:
    https://corepunk.com/en/assets/images/screenshots/02.webp

    What's your target platform, are you a solo dev?... lots of questions to be answered. A lot depends on how your game works.

    For FNS I was on LWRP (now URP) for a while because I target mobile. I retreated to the standard shader because there were too many issues with the LWRP I wasn't getting solutions for. (Mostly to do with big meshes and lighting, but also complicated by using entities).
    For Endless Azure I switched in the opposite direction to use ShaderGraph to get nicer water.

    I took one look at the HDRP and decided that it's not for me. I *think* you can manage what you want with the URP, but I am pretty sure you don't need the HDRP.

    I am a solo dev and am targeting PC and consoles, but would prefer that my game can run on lower-end PCs. My game is a topdown ARPG and I was making it in standard before recently switching to HDRP. It runs fine on my PC, but I noticed that after switching to HDRP my builds now have a low framerate when running on my MacBook, whereas they did not previously. I'm not yet sure if this is just the up-front cost of HDRP or if it is because I have not yet tweaked all the available settings.

    The commentary on URP is not totally clear to me, since it partially sounds like it was originally meant for mobile but after more recent patches may now be suitable for higher-end things as well. I switched away from standard because it is going to be phased out over time. The only thing I'm concerned with is that the example projects I have seen all have the same washed-out lighting, and I would prefer to target lighting and fog like that used in Corepunk. Is global illumination required for that?

  • halkun Registered User regular
    edited July 2020
    Ok, can someone explain to me what exactly a "Pixel Shader" is?

    So in a recent video with Rubber Ross and JaidenAnimations, Ross was explaining that Jaiden's avatar had a "Pixel Shader" applied to her eyes to make them appear in front of her hair even when they were behind her fringe (bangs). This finally made me want to go down the rabbit hole and figure out what a pixel shader was. However, when I try to get some basic concept of it, I always get some kind of strange, very small code examples (but never an actual program), or vague marketing speak on GPU websites. What I'll do is tell you guys what I know about computer graphics and maybe you can fill in the gaps.

    From what I understand by context: a pixel shader is some kind of post-processing done by the GPU to change the color of a pixel based on some sort of vague conditions (this is where it gets fuzzy).

    For simplicity, I'm going to take out the GPU gunk and explain what I know.

    On a computer there is a section of memory where writing to it puts a pixel on the screen corresponding to that memory location. (For example, in 24-bit mode, if you write 0xFFFFFF into video memory, on the next screen refresh you will find a white pixel on your screen.) This is your framebuffer. Once upon a time the framebuffer used to be a free-for-all, but under an operating system with protected memory it is now, well, protected by the kernel. Now direct access to the framebuffer is no longer available unless you ask for it like any other resource. In the framework I use, I have to call a "lock" function so I can read pixels from the framebuffer, for example; otherwise it's write-only, and when I make a pixel I have to use a putpixel() function and have my framework deal with all the kernel resource management stuff. I do have a SCREEN* pointer, but the framework *strongly* recommends that I not mess with it, as it's device dependent and monkeying around in there can corrupt the data, so it's best to let the framework handle it. (Same goes for BITMAP* pointers. They want you to just pass the pointer in the drawing function.)

    Ok, so now, adding GPUs to the mix.
    My experience with this is a little limited, as the only hardware GPU I've ever really toyed with is the GPU from the PSX, but it's super basic and I assume all other GPUs work like it.

    So what a GPU can do is draw things for you so you are not wasting CPU power drawing on the framebuffer yourself. In the PSX example, you queued up drawing commands such as "Draw a triangle" and you would give it x,y,u,v coordinates and color value for each of the three vertexes. Then next you would give a "draw a line" command and give the GPU a start point, end point, and a color for each point. All these commands were usually in a linked list that was set up and then you set the GPU to point at the top of the command list and let it rip!

    Now the PSX GPU couldn't actually do 3D. The GTE did all the 3D math and then re-ordered the table so the triangles were all drawn in the correct order, hence why it was a linked list, and also why you only got affine texture mapping as you couldn't interpolate the textures over a Z coordinate.

    I'm pretty sure that modern GPUs are all 3D, so you pretty much just pass it 3D polygons (I think even lighting now?). Anyways, it's ALL taken care of by the GPU; you just really give it the polygons in the scene and it's all textured, ordered, clipped, and lit for you (at least, that's what I gather from OpenGL commands).

    Sooo Pixel shaders.

    So I'm assuming that after you have your scene all created and ready to get rendered on the screen, there is now a mechanism where you can alter the pixels after the fact based on some kind of shader code. This is where I'm a little lost. So you can change the color of the pixels on the framebuffer to be rendered. I guess the magic is you are having the GPU do it as opposed to you doing it?

    Let's take Jaiden's eyes from the first example. (Her eyes are simple black circles.) So a shader will know that particular pixels are part of her "eye" and always color those pixels, yet not draw them on the back of her head when she is turned away from you? (How does that work?)

    This is the most basic example I think. I'm not including pixel shaders that behave like 3D particle emitters, or ones that alter the actual 3D geometry of a model. I have no idea how that would even work. By this point, calling it a "pixel shader" is a little misleading, don't you think, as it's obviously doing more than shading pixels after everything has been placed in the framebuffer?

    So, yeah, what exactly is a pixel shader? Is there a very simple example program that demonstrates one? Maybe in C? Maybe with a few triangles?



  • Peewi Registered User regular
    My understanding from having written a few very basic pixel shaders years ago:

    A shader is just a program that runs on the GPU, and therefore it can do anything you can program, though typically it is part of rendering an image.

    A pixel shader is a shader that is run for each pixel in a rendered image. At minimum it has an input coordinate for a pixel, an input image, and an output color. The results of the pixel shader having run for each pixel are put together into the rendered image. They often have additional inputs: for example, a color replacement shader will need inputs for what color to replace and what to replace it with.

    Another type of shader is a vertex shader, which is run for each vertex in a 3D model.

    When I messed around with pixel shaders I was using XNA and followed this tutorial. Of course that's not entirely ideal since XNA is deprecated, but I think it'd work in the DirectX version of MonoGame.

  • LD50 Registered User regular
    halkun wrote: »
    Ok, can someone explain to me what exactly a "Pixel Shader" is?

    [...]



    I am not an expert, but I can give you some general guidance.

    A pixel shader is a program. They are written in either GLSL for OpenGL or HLSL for DirectX. The programs are compiled by the end user's GPU drivers and/or DirectX runtime (usually when the game or level loads). When the pixel shader is invoked, it is run against each pixel independently.

    Here is a WebGL doodad that lets you play around immediately: https://www.shadertoy.com/new
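
    For a concrete taste, here is about the smallest Shadertoy-style pixel shader that does anything (GLSL, since that's what Shadertoy uses; iResolution and iTime are Shadertoy's built-in inputs, and the circle is just to show per-pixel logic -- this isn't from any particular project):

    // Runs once per pixel: takes the pixel's coordinate in, writes one colour out.
    void mainImage( out vec4 fragColor, in vec2 fragCoord )
    {
        vec2 uv = fragCoord / iResolution.xy;     // normalise to 0..1
        vec3 col = vec3(uv.x, uv.y, 0.5);         // colour depends only on position
        // a moving dark circle, purely per-pixel math
        float d = distance(uv, vec2(0.5 + 0.3 * sin(iTime), 0.5));
        if (d < 0.1) col = vec3(0.0);
        fragColor = vec4(col, 1.0);
    }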

  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    edited July 2020
    Here's an overview of the entire modern graphics pipeline: https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-graphics-pipeline-2011-index/

    A pixel shader is what determines the colour of each individual pixel. Each pixel belongs to one triangle (or more in the case of blending), which determines how it appears.

    Now, in your specific example, that's a layman's usage; anything that does any display effect is often called a "shader" of some kind. A pixel shader doesn't normally draw something in front of something else, and if it does it's not because of the shader portion. What most likely happens is that the hair is drawn second-to-last, depth writes are disabled during that, and then the eyes are drawn, so they draw over the hair as if it didn't exist.

  • halkun Registered User regular
    To this end, I have discovered an amazing resource, as well as asking around in other corners of the internet:

    https://thebookofshaders.com

    An online book that uses in-browser GLSL so you can change the code on the fly and see what it does while you read the page.
