Slightly more code this week than last, but still a slow crawl to the starting line.
☑ Room Data Structure – This ended up being sort of silly. Each screen's worth of data is just a Struct with:

```csharp
public int index;
```
Seriously though, that’s all it is (with a few get and set methods to translate the 1D array into the second dimension). I decided to offload more of the work to individual systems this time, having really disliked the amount of heavy lifting ‘rooms’ were doing previously.
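For the curious, the "translate the 1D array into the second dimension" part is just the usual row-major index math. Here's a minimal sketch of what those get/set methods might look like; the `tiles` array, `Width`/`Height` fields, and method names are my hypothetical stand-ins, not the post's actual code:

```csharp
// Hypothetical sketch: a room as a flat 1D array with 2D accessors.
// The tiles/Width/Height names are assumptions, not the post's fields.
public struct Room
{
    public int index;          // the room's own index, as in the post
    public int[] tiles;        // flat 1D storage for one screen of data
    public int Width, Height;

    public Room(int width, int height)
    {
        index = 0;
        Width = width;
        Height = height;
        tiles = new int[width * height];
    }

    // Translate (x, y) into the flat array: row-major indexing.
    public int GetTile(int x, int y) => tiles[y * Width + x];
    public void SetTile(int x, int y, int value) => tiles[y * Width + x] = value;
}
```

Row-major (`y * Width + x`) keeps each row contiguous in memory, which tends to be friendly for horizontal scans across a screen.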
☐ Entity Data Structure – For right now (which I consider a data-testing phase), I'm working with every Thing being represented as a simple Struct with:

```csharp
public float2 loc;
public int dep;

public void SetLoc(int x, int y)
{
    loc.x = x;
    loc.y = y;
}
```
This is likely to change depending on how much I dislike the component handling. The index of the larger array doubles as the Thing's reference index, for both rooms and components. Dep is short for depth, just my way of looking at layers.
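The index-doubles-as-reference idea can be sketched like this; the pool class and its method names are my illustrative invention, not the post's code:

```csharp
// Hypothetical sketch: an entity pool where the array index *is* the ID
// that rooms and components use to refer back to a Thing.
public struct Thing
{
    public float x, y;   // standing in for the post's float2 loc
    public int dep;      // depth / layer
}

public class ThingPool
{
    private Thing[] things = new Thing[256];
    private int count;

    // The returned index doubles as the Thing's reference ID.
    public int Spawn(float x, float y, int dep)
    {
        things[count] = new Thing { x = x, y = y, dep = dep };
        return count++;
    }

    public ref Thing Get(int id) => ref things[id];
}
```

One consequence of this scheme worth noting: if Things are ever removed, the indices either need to be recycled or every system holding an ID has to be notified, which is part of why the component storage question below matters.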
☐ Component Data Structure – So I'm a little iffy on this one. I'm waffling between a dictionary with the entity's index as the key, or a hashset with an index int as part of the component data. The latter seems easier to use, but feels like it will lead to a ton of redundant data. I need to think about this some more.
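To make the trade-off concrete, here's a hedged sketch of the two options side by side; the `Health` component and all names below are illustrative, not from the post:

```csharp
using System.Collections.Generic;

// Option B's shape: the owning entity's index is embedded in the component.
public struct Health
{
    public int entity;  // redundant copy of the owner's index
    public int hp;
}

public static class ComponentStores
{
    // Option A: dictionary keyed by entity index. No embedded index needed
    // and lookups are O(1), but each entry carries dictionary overhead and
    // iteration order is arbitrary.
    public static Dictionary<int, int> HpByEntity = new Dictionary<int, int>();

    // Option B: a flat collection of components that each carry their owner's
    // index. Iterating all components of one type is simple, but finding a
    // specific entity's component means a scan, and the index is stored twice.
    public static List<Health> HpList = new List<Health>();

    public static int FindHp(int entity)
    {
        foreach (var h in HpList)
            if (h.entity == entity) return h.hp;
        return -1; // not found
    }
}
```

A common middle ground between the two is a sparse set: a dense array of components plus an entity-indexed lookup array, which gets fast per-entity lookup and tight iteration at once.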
☐ Brain Structure – So far I'm representing each brain as three pools of data consisting of groupings of functions, neurons, and synapses. Extremely early days with respect to this setup. I'm still pondering how I'm going to run it all in the discrete time steps of a turn-based game. Currently I'm considering only running some functions when needed, or even processing some internal functions in real time (an ongoing bodily function that might not require stimulus for early processing, for example). I suspect I will have to experiment a bunch with this framework. I'll talk about the sparse data storage, stimulus representation, and phase systems in an article next week, so stay tuned.
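Since the details are coming next week, take this as a speculative sketch of one way the three pools could be laid out and stepped once per turn; every name and shape below is my guess, not the post's actual structures:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: a brain as three flat pools, advanced one discrete
// time step per game turn.
public class Brain
{
    // Pool 1: neuron state (an activation level per neuron).
    public float[] neurons;

    // Pool 2: synapses, each a weighted edge between two neuron indices.
    public (int from, int to, float weight)[] synapses;

    // Pool 3: functions run against the neuron pool each step.
    // Some could be skipped on a given turn ("run only when needed").
    public List<Action<float[]>> functions = new List<Action<float[]>>();

    public Brain(int neuronCount, (int, int, float)[] synapses)
    {
        neurons = new float[neuronCount];
        this.synapses = synapses;
    }

    // One discrete time step of a turn-based game.
    public void Step()
    {
        var next = new float[neurons.Length];
        foreach (var (from, to, weight) in synapses)
            next[to] += neurons[from] * weight;
        neurons = next;
        foreach (var fn in functions)
            fn(neurons);
    }
}
```

Keeping all three pools as flat arrays and lists also leaves the door open to the sparse storage mentioned above, since a pool can later be swapped for a sparse structure without changing the `Step` loop's shape.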
☐ Brain Debugger – This will be a lengthy and ongoing task, but as I don't want to cheat, the only way to even begin debugging is similar to how it might be done in real life: poke the brain, or apply a specific stimulus and see what responds. Display will be interesting, but the real challenge is having some way to mirror the brain into a closed environment for testing, at will. I have some early ideas to test.
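The "poke it and see what responds" idea, including the mirror-into-a-closed-environment part, could look something like this; the probe function, its parameters, and the `float[]` stand-in for neuron state are all hypothetical:

```csharp
using System;

// Hypothetical sketch of "poke the brain": copy the state into a sandbox,
// apply a stimulus, advance both copies, and report which values diverged.
public static class BrainProbe
{
    // 'step' stands in for whatever advances a brain one turn; the float[]
    // is a stand-in for neuron state. All names here are illustrative.
    public static int[] Poke(float[] neurons, int target, float stimulus,
                             Action<float[]> step)
    {
        // Mirror the brain into a closed environment: work on copies,
        // so the real brain never sees the test stimulus.
        var sandbox = (float[])neurons.Clone();
        var baseline = (float[])neurons.Clone();

        sandbox[target] += stimulus;
        step(sandbox);
        step(baseline);

        // Report every neuron whose state diverged from the unpoked run.
        var responders = new System.Collections.Generic.List<int>();
        for (int i = 0; i < sandbox.Length; i++)
            if (Math.Abs(sandbox[i] - baseline[i]) > 1e-6f)
                responders.Add(i);
        return responders.ToArray();
    }
}
```

Running the poked copy against an unpoked baseline is what keeps this honest: anything that shows up in the diff responded to the stimulus and nothing else.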
⌚ Screen Scrolling – On hold for now. I have more important fish to fry currently, but as I described last week, this really won't be so difficult. The way I'm handling the room data actually makes this easier than my last go-round.
Thank you for reading! Please stay tuned.