So I nearly ragequit my schedule today, because after spending two days implementing substrings (i.e. firstmodel_thirdcolor is followed by secondmodel_firstcolor, which is followed by secondmodel_secondcolor, and so on), which was hellish, I finally fixed it.
Then I built it.
Then the button to increment the models and colors randomly didn't work.
Totally randomly, and also no error messages.
After dinner, it turns out it's the fifth time a button doesn't work because a text field (invisibly) halfway overlaps it.
Allrighty then.
The good news is I spoke to an accountant and in a few days I'll be all legit an' shnozzle.
I'm sad you didn't have a chance to get there first @LaCabra, but if you ever need inspiration to work on your project you should check out Aragami. It's a third person stealth ninja game where you can move around/control shadows. It instantly made me think of your progress in here.
So, I couldn't figure out why running instance create twice would only create one moving bullet, so I took the whole firing code, shoved it in a script, ran the script 5 times, and it works!
Also brought up an issue with the port sizing turning my round bullets into lemons.
Studied Raiden Arcade on YouTube in slow motion; will animate bullets tomorrow.
I suppose three enemy types is enough for now, gotta work on that boss some time......fuck, static enemies, those too.
He started a gofundme page to see if he can get the financial help he needs to purchase VR equipment. Donation rewards are available to get beta plug-ins, questions answered, and so forth. Seems like the kind of stuff you guys might need or find interesting.
Working on tutorial stuff. I suck at tutorials, wish I had way more time. Audio sounds good though, I think!
*snip*
That looks great. A back button might be nice in case your user misses something. In particular, the last screen feels unneeded without the ability to go back.
So I've been continuing to work on the new machine learning system that will generate music for the game that I'll start really doing major development on in the spring. (Though I am doing some basic setup stuff currently.)
This week it learned chord progressions (sort of) and harmonies (sort of). It still doesn't know rhythms yet, and its corpus is incredibly small still. It's starting to sound a bit like music now.
Wow, that is super rad. I've been programming and playing music for a long time, and I'm not even sure how to approach that topic. Do you have any links on the subject?
I ended up cutting the Q&A section due to some technical issues (mostly me being too unskilled at Final Cut to edit it together properly).
It's one of the first talks I've given so I'd love any feedback.
This is awesome.
I have a question though. Let's say I wanted to implement some AI using a behavior tree. Sounds like I'd have a component with the root node of the behavior tree in it, and then a system which executed that behavior node. The node behaviors themselves can live somewhere else, stateless, and take in all necessary parameters they need to do what they need to do.
But that AI component might require several other types of components, like a position component for when the AI should move, various types of action components for when the AI should execute other actions like attacking, etc. A node which moved the entity would require a movable component passed in. But, my initial expectation is that components should encapsulate themselves and not require other components.
Is that accurate? How do you deal with situations like AI where many components may need to be controlled or affected by a single source? Or, am I making it too complicated, and is it standard operating procedure for a component to assume or require the presence of other components, and access them?
Wow, that is super rad. I've been programming and playing music for a long time, and I'm not even sure how to approach that topic. Do you have any links on the subject?
It sort of depends where you're approaching it from. I'll plug the research lab that I'm working with, which has several links to other metacreation-related stuff. We don't just do music though; it's pretty much all generative art (including visual art, dance, etc.). Also, if you look up Eigenfeldt, Pasquier, and Conklin in scholarly papers, they do a bunch of work in the field.
The approach sort of depends on what you want to get out of the system, as some research is focused on real-time generation, often interactable (which is what I'm aiming for, in order to be integrated into a game), and some deals with, say, creating 100,000-some melodies and then analyzing them to pick the "best" one. Also, the computer scientists tend to focus more on being able to emulate a corpus, and the musicians tend to be more about trying to also maintain an amount of compositional agency over the result.
This system is currently pretty basic, so I can easily outline what it does (spoiler for length):
Basically it's in two parts. One is the analysis/machine learning part, which takes a corpus (a selection of pieces) and extracts a ton of data, then analyzes that data.
Right now it generates for each note the following features:
Pitch, duration, time until the next note (for rests), contour (in a +1/-1 sense) from the last note, contour so far in the piece, place in the measure, total time from the beginning of the piece in 16th notes, contour of the last 5 notes alone, distance from the last note, contour (in terms of actual distance and not simply up/down) leading up to that point, difference in duration and time to next note, distance from the bass, distance from the root of the chord, and distance from the melody (for harmony parts).
Then it also creates a list of the last 5 pitches, the last 5 durations, the last 5 TimeToNextNote, and the last 5 distances between the notes.
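If a data structure is easier to scan than that list: roughly, each note ends up with a feature record along these lines. This is just a Python sketch with illustrative field names, not the system's actual code, and it only covers a subset of the features above.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class NoteFeatures:
    pitch: int                      # MIDI pitch of the note
    duration: float                 # length in 16th notes
    time_to_next_note: float        # gap before the next onset (captures rests)
    contour_from_last: int          # +1 / 0 / -1 relative to the previous note
    contour_so_far: int             # running contour over the piece
    place_in_measure: float         # offset within the measure, in 16ths
    time_from_start: float          # total time from the beginning, in 16ths
    distance_from_last: int         # interval from the previous note
    distance_from_bass: int
    distance_from_chord_root: int
    distance_from_melody: Optional[int] = None                # only for harmony parts
    last_pitches: List[int] = field(default_factory=list)     # short-term history (last 5)
    last_durations: List[float] = field(default_factory=list)
```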
Then for each part it analyzes that data and creates "Vertical Slices", which describe measures and contain the following:
Location (which measure the slice is in)
Distances from the root note which are contained in the slice
Contour, distance from the bass, and distance from the melody. It has these three both at the beginning of the slice and across the slice.
Then it combines the slices for each line across the corpus, so it measures averages and extremes for all of those.
Then it also analyzes the chord root for each measure (basically taken from whichever bass note is most common in that measure).
Finally, it creates a weighted Markov chain with a max depth of 5 from the corpus.
So given a seed of, let's say C4, D4, E4, there might then be possible results of 4 F4s, 5 G4s, 3 A4s, 1 C5, 4 C4s, etc., depending on what's in the corpus.
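If it helps to picture that step, here's a tiny sketch of a weighted Markov table with a max depth of 5 and the lookup against it. Python, purely illustrative (pitches are MIDI numbers, e.g. 60 = C4); the real system's code will look different.

```python
from collections import defaultdict, Counter
import random

MAX_DEPTH = 5

def build_markov(corpus_melodies):
    """Map every context of up to the last 5 pitches to a weighted count of next pitches."""
    table = defaultdict(Counter)
    for melody in corpus_melodies:
        for i in range(1, len(melody)):
            for depth in range(1, min(MAX_DEPTH, i) + 1):
                table[tuple(melody[i - depth:i])][melody[i]] += 1
    return table

def sample_next(table, seed):
    """Use the longest matching context and sample the next pitch by weight."""
    for depth in range(min(MAX_DEPTH, len(seed)), 0, -1):
        context = tuple(seed[-depth:])
        if context in table:
            candidates = table[context]
            return random.choices(list(candidates), weights=list(candidates.values()))[0]
    return None

# Seeding with C4, D4, E4 (MIDI 60, 62, 64) returns whatever followed that
# context in the corpus, weighted by how often it occurred.
table = build_markov([[60, 62, 64, 65, 67], [60, 62, 64, 67, 65], [60, 62, 64, 65, 64]])
print(sample_next(table, [60, 62, 64]))
```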
Then it takes that data and, right now for testing purposes, spits out 100 32-bar phrases with all the parts. It always arranges the parts as 2 melody lines (one main melody, one harmonic melody, which generally sticks with the melody but can split off), 2 harmonic lines (the sort of "inner riff" parts), and a bass part.
So first it generates the bass line, using the Markov chain as the base but also forcing the bass part to stay within the acceptable root notes at each measure (so it sticks to a "progression", essentially). This is generally the closest to hard-coding it gets.
Then it generates the melodies and harmonies, by generating the main melody, then the secondary, then the main harmony, and then the secondary.
At the outset, it does a Markov look-up, so it finds the possible results and weights. It then looks at each of the possible results and changes the weights to fit in with several other parameters.
So, for instance, if the potential result would put the melody outside of the min/max ranges for the part at that measure, it will heavily weight that result down. If the potential result is pretty close to the expected contour, it'll weight that result up a fair amount.
This is also where it deals with "harmony": it super heavily weights the note to fit within the distances from the root (so basically, the distances that come from the corpus are assumed to be either within the chord or a reasonable nonharmonic tone).
Also, the secondary lines are weighted to stay within a normal distance from the melody, so that they don't, say, stick a minor 2nd below or something crazy like that (though that does show up for isolated notes sometimes).
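For a concrete picture of that re-weighting step, something in this spirit. It's only a sketch: the multipliers and parameter names are made up for illustration, not the system's real numbers.

```python
def adjust_weights(candidates, part_range, expected_contour, allowed_root_distances,
                   last_pitch, chord_root):
    """Nudge raw Markov weights toward the constraints described above.

    candidates: {pitch: weight} from the Markov lookup. All factors are illustrative.
    """
    lo, hi = part_range
    adjusted = {}
    for pitch, weight in candidates.items():
        w = float(weight)
        if pitch < lo or pitch > hi:
            w *= 0.05   # heavily weight down notes outside the part's range for this measure
        contour = (pitch > last_pitch) - (pitch < last_pitch)
        if contour == expected_contour:
            w *= 1.5    # weight up notes that match the expected contour
        if (pitch - chord_root) % 12 not in allowed_root_distances:
            w *= 0.1    # "harmony": strongly prefer distances from the root seen in the corpus
        adjusted[pitch] = w
    return adjusted

# e.g. adjust_weights({65: 4, 67: 5, 71: 1}, part_range=(60, 72), expected_contour=1,
#                     allowed_root_distances={0, 4, 7}, last_pitch=64, chord_root=60)
```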
It's pretty limited right now, since it doesn't consider some pretty major things like "rhythms" yet (I've got an idea for that, but haven't implemented it at all yet), and it doesn't do anything with larger form (some of which will probably end up fairly hard-coded).
Actually the next step is getting the harmonic lines to think in terms of "riffs", where it can repeat a measure adjusted to the current root, so that it can actually have the feeling of some repeated harmonic riff, which shows up a lot in the corpus. Then rhythm, and then hopefully larger form, though that's basically going to be almost directly stolen from the corpus and vaguely hard-coded (e.g. a 4-bar intro and AABA will just happen because that's a form that works and exists).
This is awesome.
I have a question though. Let's say I wanted to implement some AI using a behavior tree. Sounds like I'd have a component with the root node of the behavior tree in it, and then a system which executed that behavior node. The node behaviors themselves can live somewhere else, stateless, and take in all necessary parameters they need to do what they need to do.
But that AI component might require several other types of components, like a position component for when the AI should move, various types of action components for when the AI should execute other actions like attacking, etc. A node which moved the entity would require a movable component passed in. But, my initial expectation is that components should encapsulate themselves and not require other components.
Is that accurate? How do you deal with situations like AI where many components may need to be controlled or affected by a single source? Or, am I making it too complicated, and is it standard operating procedure for a component to assume or require the presence of other components, and access them?
While I think you want to be careful not to have your system use too many components, I also think it would be really difficult to build a real-world game without having a few systems that act on multiple components.
Even for something basic like the falling system I show in that demo, I am looking at the current position component and the falling component itself, which has a speed attached, and then modifying the position. It's pretty much impossible to completely encapsulate core components, like position, to a single system.
I don't think there is a hard and fast rule here, just try to make the code reasonable to maintain, test and read.
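To make the falling-system example concrete: a system that needs both the position and the falling component looks roughly like this. This is a Python stand-in for the shape of it, not the actual C#/Entitas code from the talk.

```python
from dataclasses import dataclass

# Plain-data components; they hold state and nothing else.
@dataclass
class Position:
    x: float
    y: float

@dataclass
class Falling:
    speed: float

def falling_system(entities, dt):
    """Systems hold the logic: move every entity that has both Position and Falling."""
    for components in entities:
        if "position" in components and "falling" in components:
            components["position"].y -= components["falling"].speed * dt

# Usage: here an entity is just a bag of components keyed by name.
player = {"position": Position(0.0, 10.0), "falling": Falling(speed=9.8)}
falling_system([player], dt=0.016)
print(player["position"])
```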
So I have been a bit stuck since last week, due to real-life stuff and laziness. However I did manage to advance on a side quest. Because I don't have a home office I've been devving on the dining room table since August. This is not optimal, as that table needs to be cleared for meals and also when I am not present. Lugging my screen on and off the table was complicated because the stand comes apart. And of course there's loads of cables and whatnot and chairs bumping into the screen...
I decided to build a flight case for my screen at least. And of course I had to do it the hard way, hand-chiseling the joints. This took *some* time.
So before I was finished, my existing screen got bumped by someone into something and was effectively dead.
This week my new screen arrived and I finished the basic case:
This is the back, where you can see the bungee cord suspension of the frame to which the screen is mounted. You can also see that the frame was finished in a hurry...
At the moment there's no storage. All this crap will go in the doors. The front doors will swing up and down to carry keyboard and laptop base respectively.
So that's my home office almost sorted. Very productive.
@Uselesswarrior
Your talk prompted me to download Entitas and give it a whirl, and this thing is pretty freakin neato
It's immediately clear that most if not all systems require more than one component. Even their example system, the MoveSystem, requires a position component and a velocity component. Flag components with no data are significantly more useful than I initially thought they might be. After even a few basic systems are implemented it starts to become fairly clear how things fit together. A move system which processes accel, velocity, and position, a destroy system which ensures objects get destroyed at the end of the frame, etc.
Want the bullet to despawn after a certain distance? Put that in a DestroyAfterTravelingDistance component, write up the DestroyAfterTravelingSystem, and when the data matches the way you want it to, add a DestroyComponent so the DestroyEntitySystem picks it up at the end of the frame.
It's really cool how all code you write in the ECS system is immediately reusable. My player had a position and could move around. A bullet required no new code whatsoever, with the exception of the couple of lines which spawned it, and the bullet prefab that puts it in the Unity space. Then I added the despawn component and system afterward to improve it.
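For what it's worth, the despawn-after-distance pattern looks roughly like this when sketched out. Python stand-in for the C# components and systems; all the names here are made up for illustration, not Entitas' actual API.

```python
from dataclasses import dataclass

@dataclass
class Position:
    x: float
    y: float

@dataclass
class DestroyAfterTravelingDistance:
    start_x: float
    start_y: float
    max_distance: float

def destroy_after_traveling_system(entities):
    """Flag entities that have traveled too far; a separate system reaps them later."""
    for e in entities:
        comp = e.get("destroy_after_distance")
        pos = e.get("position")
        if comp and pos:
            traveled = ((pos.x - comp.start_x) ** 2 + (pos.y - comp.start_y) ** 2) ** 0.5
            if traveled >= comp.max_distance:
                e["destroy"] = True   # a flag component: no data, its presence is the signal

def destroy_entity_system(entities):
    """Runs at the end of the frame and removes anything flagged for destruction."""
    entities[:] = [e for e in entities if not e.get("destroy")]

# Usage: the bullet below traveled 6 units, past its 5-unit limit, so it gets reaped.
bullet = {"position": Position(6.0, 0.0),
          "destroy_after_distance": DestroyAfterTravelingDistance(0.0, 0.0, 5.0)}
world = [bullet]
destroy_after_traveling_system(world)
destroy_entity_system(world)
print(world)  # []
```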
Don't really have much to contribute tonight, but felt like a little vent. Why oh why didn't I optimise my game before now? Currently working on a PS Vita port of my game, and squeezing out 1ms here and 1ms there is painfully slow (yet also quite satisfying when I shave that fraction of a second).
"Oh, well, this would be one of those circumstances that people unfamiliar with the law of large numbers would call a coincidence."
@Uselesswarrior
Your talk prompted me to download Entitas and give it a whirl, and this thing is pretty freakin neato
It's immediately clear that most if not all systems require more than one component. Even their example system, the MoveSystem, requires a position component and a velocity component. Flag components with no data are significantly more useful than I initially thought they might be. After even a few basic systems are implemented it starts to become fairly clear how things fit together. A move system which processes accel, velocity, and position, a destroy system which ensures objects get destroyed at the end of the frame, etc.
Want the bullet to despawn after a certain distance? Put that in a DestroyAfterTravelingDistance component, write up the DestroyAfterTravelingSystem, and when the data matches the way you want it to, add a DestroyComponent so the DestroyEntitySystem picks it up at the end of the frame.
It's really cool how all code you write in the ECS system is immediately reusable. My player had a position and could move around. A bullet required no new code whatsoever, with the exception of the couple of lines which spawned it, and the bullet prefab that puts it in the Unity space. Then I added the despawn component and system afterward to improve it.
This is some really cool stuff.
Awesome to hear!
Yeah, I really like that style of architecture. One of the "you dun good" moments I had was fairly late into the project, when I needed to make the player sprite move to the bottom of the screen and animate at the end of a level. At first I thought I was going to have to script something very specific, but then I realized I basically had all the functionality I needed in my existing systems; it was just a matter of adding the components to my player at the right times.
Don't really have much to contribute tonight, but felt like a little vent. Why oh why didn't I optimise my game before now? Currently working on a PS Vita port of my game, and squeezing out 1ms here and 1ms there is painfully slow (yet also quite satisfying when I shave that fraction of a second).
It's painful no matter how many times you do it. I think there is a good argument to be made that you shouldn't try optimizing until you are feature complete, because it's so time consuming (if you end up optimizing code you throw out, it's a lot of wasted cycles).
Make sure you focus your attention in the right place. I spent a day plus in my game trying to squeeze more frames on Android devices; after a lot of work and profiling I had about 2-3 frames more. Then I noticed the magical box in the Unity Android player settings that said "multithreaded renderer", and turning that on gave me about 10-12 frames for a second of work.
I'm keeping that button as a reserve. For me the big mobile gain was in joining all my geometry into one big mesh and applying one big texture with lots of clever uv mapping. That's pretty much what GPUs are optimised for...
But I didn't realize how massive the impact would be. I went from over 100 ms per frame to under 16ms in a few days of modelling.
Still, I'm going to have to add a lot of meshes once the programming is done. When those push the game over 16ms, then I push the multithread button. After that I guess the main thing is emptying out the updates... Early on I started destroying anything I instantiated.
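In case the "one big mesh" idea isn't obvious: merging is basically concatenating the vertex/UV arrays and re-offsetting the triangle indices, so everything can be drawn in one go (assuming one shared material/texture atlas). A toy Python sketch of the bookkeeping, not Unity's actual API:

```python
def merge_meshes(meshes):
    """Concatenate vertex/uv arrays and re-offset triangle indices into one mesh."""
    vertices, uvs, triangles = [], [], []
    for mesh in meshes:
        offset = len(vertices)           # indices of this mesh start after existing vertices
        vertices.extend(mesh["vertices"])
        uvs.extend(mesh["uvs"])
        triangles.extend(i + offset for i in mesh["triangles"])
    return {"vertices": vertices, "uvs": uvs, "triangles": triangles}

# Two quads sharing an atlas; the second quad's indices now start at 4.
quad_a = {"vertices": [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
          "uvs": [(0, 0), (0.5, 0), (0.5, 0.5), (0, 0.5)],
          "triangles": [0, 1, 2, 0, 2, 3]}
quad_b = {"vertices": [(2, 0, 0), (3, 0, 0), (3, 1, 0), (2, 1, 0)],
          "uvs": [(0.5, 0), (1, 0), (1, 0.5), (0.5, 0.5)],
          "triangles": [0, 1, 2, 0, 2, 3]}
print(merge_meshes([quad_a, quad_b])["triangles"])
```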
It's painful no matter how many times you do it. I think there is a good argument to be made that you shouldn't try optimizing until you are feature complete, because it's so time consuming (if you end up optimizing code you throw out, it's a lot of wasted cycles).
Make sure you focus your attention in the right place. I spent a day plus in my game trying to squeeze more frames on Android devices; after a lot of work and profiling I had about 2-3 frames more. Then I noticed the magical box in the Unity Android player settings that said "multithreaded renderer", and turning that on gave me about 10-12 frames for a second of work.
The game has been feature complete (and out on Steam) for a while now. This is specifically for the port work, where the target platforms are a lot more... constrained.
Turning on multithreading (for the components that support it) has really improved performance. In fact, that gives me an idea for the next bit!
"Oh, well, this would be one of those circumstances that people unfamiliar with the law of large numbers would call a coincidence."
I'm keeping that button as a reserve. For me the big mobile gain was in joining all my geometry into one big mesh and applying one big texture with lots of clever uv mapping. That's pretty much what GPUs are optimised for...
But I didn't realize how massive the impact would be. I went from over 100 ms per frame to under 16ms in a few days of modelling.
Still, I'm going to have to add a lot of meshes once the programming is done. When those push the game over 16ms, then I push the multithread button. After that I guess the main thing is emptying out the updates... Early on I started destroying anything I instantiated.
On Unreal this is basically a one-button job now with the Merge Actors tool. Similar in UDK with its free Simplygon integration.
On Unreal this is basically a one-button job now with the Merge Actors tool. Similar in UDK with its free Simplygon integration.
Simplygon looks neat, but at the moment I'm modelling low-poly models in Blender. Select, extrude, deform, repeat. Most of my time is spent moving around UV maps for different colored models. I might use NURBS in my next game, so I'll have a look at it then. Blender's built-in Decimate tool isn't too great at reducing poly count; my main beef is that you quickly lose the ability to do boolean operations.
Whilst I am in here, look at my sick new shaders
What's this? Too many pixels!
Lo fi is old hat, man
FARM THIS THREAD FOR SICK LEVELZ
https://www.youtube.com/watch?v=4Z_ds-aMG0c
https://www.youtube.com/watch?v=7viygcNck_Y
That looks great. A back button might be nice in case your user misses something. In particular, the last screen feels unneeded without the ability to go back.
https://www.youtube.com/watch?v=jQEXETwgPDs
I ended up cutting the Q&A section due to some technical issues (mostly me being too unskilled at Final Cut to edit it together properly).
It's one of the first talks I've given so I'd love any feedback.
I was actually advertising for level designers recently; y'all snoozed on my Twitter and lost,
but the game should support modding when it comes out, so hopefully y'all're still keen when the time comes.
I also don't wanna move to 'Straya
100% agree. Feel dumb for missing that.
Should I have a pc capable of doing so, I promise many baklava and gyro stealing levels.
that's cool urrbody works from home
Everything is still squished for some odd reason.
The shotgun effect is working fine and actually makes for challenging gameplay off the bat.
Still need to focus on building the boss this evening after happy hour with a friend.
I reached the necessary critical mass of boredom and self-loathing to spur some game development work. Rather than actually make progress, though, I decided to revisit my original Hamsterball prototype on Unity and re-create it using the techniques that I used in the Unreal version. With the Blueprint as a guide, I was able to get the hamsterball movement operating in one day, and then add some GameObject parenting to it so the ball can stick to moving platforms. Here's some footage of it bouncing around on some spinning platforms before getting frustrated and jumping off the side of the world:
https://youtu.be/xQAWYVuHTOw
As you can see, the transfer of rotational momentum isn't quite realistic yet, nor does the ball currently inherit the velocity of the platform it's attached to. However, it's more than I had at the start of the week, so hey. I'm calling it a victory.
You guys have given me a lot of feedback over the course of this development and it has been invaluable. I love these forums!
@HallowedFaith that promo image is extremely sexy! Best of luck for the launch
any of you nerds gonna be at Indiecade?
Still can't talk about it yet but.....well you get the idea.
https://www.youtube.com/watch?v=kaS_x2W3ogs