Over time the engine has grown and been updated to handle many different scenarios and game types. One of the challenges of trying to support isometric maps, 2D graphics, scenegraph-based data structures etc is that as each new feature is added the engine takes a small performance hit.
One of the areas I am currently actively working to optimise is the scenegraph update and render pipeline.
At the moment the engine effectively updates and renders every item in the scenegraph every frame, and this is mostly fine except for two intensive tasks:
1) Depth-sorting 3d bounds-based isometric objects
2) Rendering (blitting) graphics to the canvas
Obviously the ideal scenario here is that the engine should only depth-sort or render to the canvas entities that are currently visible on the screen (viewport). This sounds relatively easy, but you must take into account complex entities (composite entities that are built up from other entities, whose "on screen" bounds are not just the base entity's AABB but all of the composite entity's bounds combined), multiple viewports which may be looking at different sections of the world and therefore need to render different areas, etc.
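To illustrate the composite-entity case, here is a rough sketch (the `aabb()` accessor shape and the recursion are my own assumptions for illustration, not engine API) of how a composite entity's "on screen" bounds would be the union of its own AABB and every descendant's:

```javascript
// Hypothetical sketch: a composite entity's effective on-screen bounds
// are the union of its own AABB and all of its children's bounds,
// computed recursively down the ._children tree.
function compositeBounds (entity) {
	var b = entity.aabb(), i, c,
		x1 = b.x, y1 = b.y,
		x2 = b.x + b.width, y2 = b.y + b.height;

	for (i = 0; i < entity._children.length; i++) {
		c = compositeBounds(entity._children[i]);
		x1 = Math.min(x1, c.x);
		y1 = Math.min(y1, c.y);
		x2 = Math.max(x2, c.x + c.width);
		y2 = Math.max(y2, c.y + c.height);
	}

	return {x: x1, y: y1, width: x2 - x1, height: y2 - y1};
}
```

It is this combined box, not the base entity's own AABB, that the view check would need to test against the viewport.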
To solve this problem I am proposing an update and wanted to get your thoughts and opinions before I dive into coding it.
The main issue is around how to handle view checking (is an entity "on screen") and how to de-couple the depth-sorting system from the ._children array in each object since that array is used when doing almost everything with the scenegraph.
What I am thinking of doing is creating a "broad-phase" check where every entity is added to a broad grid, with each grid section sized to match the screen / canvas (ige._geometry). As entities move around the world they are re-assigned to the correct grid section. An entity can potentially exist inside multiple grid sections at once, depending on whether its position and bounds overlap multiple sections.
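As a rough sketch of the cell assignment (the function name and the flat `x`/`y`/`width`/`height` bounds shape are my own assumptions, not existing engine API), mapping an entity's world-space AABB to the grid sections it overlaps might look like:

```javascript
// Hypothetical sketch: return the grid cell coordinates an AABB
// overlaps, where each cell is sized to the canvas (ige._geometry).
function gridCellsForBounds (bounds, cellWidth, cellHeight) {
	var minX = Math.floor(bounds.x / cellWidth),
		maxX = Math.floor((bounds.x + bounds.width - 1) / cellWidth),
		minY = Math.floor(bounds.y / cellHeight),
		maxY = Math.floor((bounds.y + bounds.height - 1) / cellHeight),
		cells = [], x, y;

	for (y = minY; y <= maxY; y++) {
		for (x = minX; x <= maxX; x++) {
			cells.push({x: x, y: y});
		}
	}

	return cells;
}
```

An entity that straddles a cell border comes back with two (or four) cells, which matches the rule above that an entity can exist inside multiple grid sections at once.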
Then what I propose is to get the grid sections that the current viewport overlaps (or is inside, if the viewport is smaller than a grid section) and then loop only the entities inside those sections to see if they are "on screen". After this "broad-phase" check, any entities found to be on screen are added to a new _depthSortAndRender array, and ONLY entities inside that array are looped and depth-sorted. Any entities that are no longer on screen are removed from it.
Finally the depth-sort will loop the _depthSortAndRender array and assign correct depths before the whole rendering sequence is processed from that array.
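The two steps above could be sketched roughly like this (a minimal illustration only; the grid shape, the `id` field used for de-duplication and the `isOnScreen` callback are assumptions, not engine API):

```javascript
// Hypothetical sketch of the broad-phase pass: gather entities from the
// grid sections the viewport overlaps, narrow-phase test each one once,
// and return the survivors as the new _depthSortAndRender contents.
function broadPhase (sceneGrid, viewportCells, isOnScreen) {
	var seen = {}, result = [], i, j, col, cell, entity;

	for (i = 0; i < viewportCells.length; i++) {
		col = sceneGrid[viewportCells[i].x];
		cell = col && col[viewportCells[i].y];
		if (!cell) { continue; }

		for (j = 0; j < cell.length; j++) {
			entity = cell[j];
			// An entity can live in multiple sections, so de-duplicate
			if (!seen[entity.id]) {
				seen[entity.id] = true;
				if (isOnScreen(entity)) { result.push(entity); }
			}
		}
	}

	return result;
}
```

The depth-sort then only ever touches this array rather than walking the full ._children tree.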
You should note two things that will effectively change because of this process:
1) While all entity update() methods will be called regardless of whether they are on screen, only entities that are on screen will have their tick() methods called. If you have logic in the tick() method that should be processed every frame, it should really be in the update() method anyway.
2) A process called "view checking" will need to run against the scenegraph using this broad-phase grid system to narrow down the checking and because of this, the grid will need to be kept up to date so that entities that move are assigned to the correct grid section arrays. The data structure for this grid will look something like this for grid section 0 x 0 (the center of the world):
Code:
ige._sceneGrid = [
[
[
{anEntityObject},
{anEntityObject},
{anEntityObject}
]
]
];
Code:
var entitiesCurrentlyIntersectingGridSection0x0 = ige._sceneGrid[0][0];
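To make point 1 above concrete, the per-frame loop would look something like this (a hypothetical sketch, not the engine's actual main loop code):

```javascript
// Hypothetical frame loop illustrating the update()/tick() split:
// update() runs for every entity, tick() only for on-screen entities.
function frame (allEntities, depthSortAndRender, ctx) {
	var i;

	for (i = 0; i < allEntities.length; i++) {
		allEntities[i].update(ctx); // game logic, called every frame
	}

	for (i = 0; i < depthSortAndRender.length; i++) {
		depthSortAndRender[i].tick(ctx); // rendering, visible entities only
	}
}
```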
The grid will need to be maintained at these points:
1) Assign grid sections on entity mount
2) Update grid sections on any entity transform (translate, rotate, scale)
3) Update grid sections on any entity bounds update (size3d, width, height)
4) Remove from all grid sections on entity unmount or destroy
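The bookkeeping behind those four points could be sketched like this (purely illustrative; the `_gridCells` property name is my own, and I've used plain objects keyed by cell coordinate rather than arrays so that negative world-space cell indices work):

```javascript
// Hypothetical sketch: move an entity between grid sections. Called on
// mount, transform and bounds change with freshly computed cells, and
// with an empty newCells array on unmount/destroy.
function updateGridSections (grid, entity, newCells) {
	var i, cell, arr, index;

	// Remove the entity from the sections it previously occupied
	for (i = 0; i < entity._gridCells.length; i++) {
		cell = entity._gridCells[i];
		arr = grid[cell.x][cell.y];
		index = arr.indexOf(entity);
		if (index > -1) { arr.splice(index, 1); }
	}

	// Add it to its new sections, creating them lazily
	for (i = 0; i < newCells.length; i++) {
		cell = newCells[i];
		grid[cell.x] = grid[cell.x] || {};
		grid[cell.x][cell.y] = grid[cell.x][cell.y] || [];
		grid[cell.x][cell.y].push(entity);
	}

	entity._gridCells = newCells;
}
```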
The only concern I have with this system is that items like particle emitters generate and destroy a lot of entities very quickly, and adding overhead to them slows performance. Particles are also transforming constantly, so they will require constant grid-section re-calculation. This all adds to memory usage as well, as different bits of data are cached to speed up comparisons etc.
Hope that makes sense.
If anyone has any suggestions or ideas please raise them now! I could do with a sounding board on this one as it is complex and potentially a very VERY useful update to the engine.
P.S. Anyone who has seen my tech talk at the OnGameStart conference might find this proposal familiar - a similar system used to exist in the engine before it became scenegraph-based, and was used to clear distinct sections of the canvas instead of the whole thing so that only sections that had changed could be redrawn - before browsers had built-in GPU acceleration that made the process redundant.