At some levels there can be thousands of enemies on screen.

Defender's Quest: Valley of the Forgotten DX has always had long-standing performance problems, and I finally managed to solve them. The main incentive for a massive speed increase was our PlayStation Vita port. The game had already been released on PC and ran well, if not perfectly, on Xbox One and PS4. But without serious improvements we would never have been able to launch it on Vita.
When a game slows down, commenters on the Internet usually blame the programming language or the engine. It is true that languages like C# and Java are more costly than C and C++, and that tools like Unity carry unavoidable overhead such as garbage collection. In reality, people reach for these explanations because the language and engine are the most visible properties of the software. But the true performance killers can be stupid tiny details that have nothing to do with architecture.
0. Profiling tools
There is only one real way to make a game faster: profile it. Figure out where the computer is spending too much time and make it spend less time there, or better yet, make it not spend that time at all.
The simplest profiling tool is the standard Windows performance monitor:

It is actually quite a flexible tool and very easy to work with. Just press Ctrl + Alt + Delete, open the Task Manager, and click the "Performance" tab. Do not run too many other programs while you do this. If you look closely, you can easily spot spikes in CPU usage and even memory leaks. It is a blunt instrument, but it can be the first step in tracking down slow spots.
Defender's Quest is written in the high-level Haxe language, which compiles to other languages (my main target was C++). This means that any tool capable of profiling C++ can also profile my generated Haxe C++ code. So when I wanted to understand the causes of the problems, I launched Performance Explorer from Visual Studio:
In addition, the various consoles have their own profiling tools, which is very convenient, but because of NDAs I can't tell you anything about them. If you have access to them, though, be sure to use them!
Instead of writing a terrible tutorial on how to use profiling tools like Performance Explorer, I'll just leave a link to the official documentation and get to the main topic: the surprising things that produced a huge performance increase, and how I managed to find them!
1. Problem detection
A game's performance is not just raw speed but also how that speed is perceived. Defender's Quest is a tower defense game rendered at 60 FPS, but with a variable gameplay speed ranging from 1/4x to 16x. Regardless of game speed, the simulation uses a fixed timestep of 60 updates per second of 1x simulation time. That means if you run the game at 16x, the update logic effectively runs at 960 updates per second. Honestly, that is asking a lot of the game! But I'm the one who created this mode, and if it turns out to be slow, players will definitely notice.
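To make the fixed-timestep idea concrete, here is a minimal sketch of how such a loop can work. This is hypothetical code, not the game's actual loop; all names are made up.

```haxe
class Simulation {
    // One 1x simulation step is always 1/60th of a second.
    static inline var STEP:Float = 1 / 60;

    var accumulator:Float = 0;

    public function new() {}

    // speed = 0.25 ... 16; called once per rendered frame.
    public function advance(elapsed:Float, speed:Float):Void {
        accumulator += elapsed * speed;
        // At 16x, sixty rendered frames trigger ~960 update() calls.
        while (accumulator >= STEP) {
            update();
            accumulator -= STEP;
        }
    }

    function update():Void {
        // one fixed 1x simulation step
    }
}
```

The rendering rate stays at 60 FPS; only the number of simulation steps per frame changes with the speed multiplier.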
And the game has this level:
This is the final bonus battle, "Endless 2", also known as "my personal nightmare." The screenshot was taken in New Game+ mode, in which enemies are not only much stronger but also have perks such as health regeneration. The players' favorite strategy here is to max out the dragons' Roar skill (an AOE attack that stuns enemies) and place a row of knights behind them with Knockback maxed out, to push anyone who slips past the dragons back into their area of effect. The cumulative effect is that a huge crowd of monsters lingers endlessly in one place, far longer than if the players actually killed them. Since players need to outlast the waves, not kill them, to earn the rewards and achievements, this strategy is perfectly effective and brilliant: it is exactly the behavior I incentivized.
Unfortunately, it is also a pathological case for performance, especially when players want to run it at 8x or 16x. Of course, only the most hardcore players will chase the "100th Wave" achievement in New Game+ on Endless 2, but they are precisely the players who talk about the game the loudest, so I wanted them to be happy.
It's just a 2D game with a bunch of sprites; what could possibly be expensive about it? Quite a lot, actually. Let's figure it out.
2. Collision resolution
Take a look at this screenshot:
See this donut around the ranger? That is her attack area. Note that it also has a dead zone, in which she cannot hit targets. Each class has its own attack area shape, and each defender's area has a different size depending on boost level and personal stats. In theory, every defender can target any enemy within reach, and the same is true for some enemy types. There can be up to 36 defenders on the map (not counting the protagonist, Azra), and there is no upper limit on the number of enemies. Each defender and enemy keeps a list of possible targets, built from range-query calls at every update step (minus logical culling of those who cannot attack at the moment, and so on).
GPUs these days are very fast; if you don't do anything too crazy, they can push almost any number of polygons. But even the fastest CPUs can choke on simple routines, especially ones whose cost grows quadratically. That is why a 2D game can run slower than a far prettier 3D game: not necessarily because the programmer failed (though in my case that too), but because logic can sometimes be more expensive than drawing! The question is not how many objects are on screen, but what they are doing.
Let's explore and speed up collision detection. For comparison: before optimization, collision detection took up to ~50% of the CPU time in the main battle loop. After optimization, less than 5%.
It's all about quad trees
The standard solution to slow collision detection is spatial partitioning, and from the very beginning we used a solid implementation of a quadtree. It efficiently divides up space so that you can skip many unnecessary collision checks.

Every frame, we update the entire quadtree (QuadTree) to track each object's position, and whenever an enemy or defender wants to target someone, it asks the QuadTree for a list of nearby objects. But the profiler told us that both of these operations were much slower than they should be.
What is wrong here?
As it turned out - a lot.
String typing
Since I kept both enemies and defenders in a single quadtree, I needed to indicate what I was searching for, which was done like this:

var things:Array<XY> = _qtree.queryRange(zone.bounds, "e"); // "e" is for "enemy"
In programmer jargon this is called "stringly typed" code, and among its other sins, string comparison is always slower than integer comparison.
I quickly threw together some integer constants and replaced the code with this:
var things:Array<XY> = _qtree.queryRange(zone.bounds, QuadTree.ENEMY);
(Yes, it probably should have been an enum abstract for maximum type safety, but I was in a hurry, and shipping came first.)
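For the record, a hypothetical enum abstract version might look something like this. This is a sketch only, with made-up names, not the game's actual code.

```haxe
// Each value is a plain Int at runtime, so comparisons stay as fast
// as integer constants, but the compiler now rejects arbitrary
// integers (or strings) passed as a filter.
enum abstract QueryFilter(Int) {
    var ENEMY = 0;
    var DEFENDER = 1;
    var ANY = 2;
}

// usage: _qtree.queryRange(zone.bounds, QueryFilter.ENEMY);
```

You get the speed of the integer version with the type safety of the string version.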
This change alone made a huge difference, because this function is called constantly and recursively, every time anyone needs a fresh list of targets.
Array vs vector
Look at this:
var things:Array<XY>
Haxe arrays are very similar to ActionScript and JS arrays in that they are resizable collections of objects, but in Haxe they are strongly typed. However, there is another data structure that performs better on static targets such as cpp, namely haxe.ds.Vector. Haxe vectors are essentially the same as arrays, except that they are given a fixed size when created. Since my quadtree nodes already had a fixed capacity, I replaced the arrays with vectors for a noticeable speedup.
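The change is mechanical; here is a sketch of what a fixed-capacity node might look like. The capacity constant and class names are hypothetical.

```haxe
import haxe.ds.Vector;

class XY {
    public var x:Float;
    public var y:Float;
    public var int_id:Int;

    public function new(x:Float, y:Float, id:Int) {
        this.x = x;
        this.y = y;
        this.int_id = id;
    }
}

class Node {
    static inline var CAPACITY:Int = 8; // hypothetical per-node limit

    // Before: var points:Array<XY> = [];
    // After: the size is fixed at creation, so no resize machinery:
    var points:Vector<XY>;
    var count:Int = 0;

    public function new() {
        points = new Vector<XY>(CAPACITY);
    }

    public function add(p:XY):Bool {
        if (count >= CAPACITY) return false; // full: caller subdivides
        points[count++] = p;
        return true;
    }
}
```

A Vector never reallocates, which matters when nodes are created and filled thousands of times per frame.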
Request only what you need
Previously, my queryRange function returned a list of XY instances. They contained the x/y coordinates of the referenced game object and its unique integer identifier (an index into the master array). The querying game object received these XYs, extracted the integer identifier to look up its target, and ignored everything else.

So why was I passing all these XY object references around for every QuadTree node, recursively, up to 960 times per second? All I need to return is a list of integer identifiers.
PRO TIP: integers are much cheaper to pass around than almost any other data type!

Compared to the other fixes this one was simple, but the performance gain was still noticeable, because this inner loop runs extremely hot.
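The idea can be sketched like this: the query fills a plain Array<Int> of ids instead of allocating and returning object references. Again, names here are hypothetical and the real function recurses through tree nodes rather than scanning a flat list.

```haxe
typedef Pt = { x:Float, y:Float, int_id:Int };

class Query {
    // Fill `result` with integer ids only; the caller already owns
    // the master array and can resolve ids to creatures itself.
    public static function queryRange(points:Array<Pt>,
            xMin:Float, yMin:Float, xMax:Float, yMax:Float,
            result:Array<Int>):Array<Int> {
        for (p in points) {
            if (p.x >= xMin && p.x <= xMax && p.y >= yMin && p.y <= yMax) {
                result.push(p.int_id); // ids are cheap to pass around
            }
        }
        return result;
    }
}
```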
Tail-call optimization

There is an elegant thing called tail-call optimization. It is hard to explain, so I'd better just show an example.
It was:
nw.queryRange(Range, -1, result);
ne.queryRange(Range, -1, result);
sw.queryRange(Range, -1, result);
se.queryRange(Range, -1, result);
return result;
It became:
return se.queryRange(Range, filter, sw.queryRange(Range, filter, ne.queryRange(Range, filter, nw.queryRange(Range, filter, result))));
The code returns the same logical results, but according to the profiler the second version is faster, at least when compiling to cpp. Both examples perform exactly the same logic: they mutate the "result" data structure and pass it along to the next function before returning. When we do this recursively, the compiler can avoid generating temporary references, because it can simply return the result of the last call directly instead of holding onto it for an extra step. Or something like that. I don't fully understand how it works, so read the post linked above.
(As far as I know, the current version of the Haxe compiler does not itself perform tail-call optimization, so this is probably the C++ compiler's doing. Don't be surprised if this trick does nothing when targeting something other than cpp.)
Object pooling
If I want accurate results, I have to tear down and rebuild the QuadTree on every update call. Creating new QuadTree instances is cheap enough on its own, but the large numbers of new AABB and XY objects they depended on caused serious memory churn. Since these are very simple objects, the logical move is to allocate a bunch of them up front and simply keep reusing them. This is called an object pool.
I used to do something like this:
nw = new QuadTree( new AABB( cx - hs2x, cy - hs2y, hs2x, hs2y) );
ne = new QuadTree( new AABB( cx + hs2x, cy - hs2y, hs2x, hs2y) );
sw = new QuadTree( new AABB( cx - hs2x, cy + hs2y, hs2x, hs2y) );
se = new QuadTree( new AABB( cx + hs2x, cy + hs2y, hs2x, hs2y) );
But then I replaced the code with this:
nw = new QuadTree( AABB.get( cx - hs2x, cy - hs2y, hs2x, hs2y) );
ne = new QuadTree( AABB.get( cx + hs2x, cy - hs2y, hs2x, hs2y) );
sw = new QuadTree( AABB.get( cx - hs2x, cy + hs2y, hs2x, hs2y) );
se = new QuadTree( AABB.get( cx + hs2x, cy + hs2y, hs2x, hs2y) );
We use the open-source HaxeFlixel framework, so we implemented this with HaxeFlixel's FlxPool class. For highly specialized optimizations like these I often replace some of Flixel's basic features (collision detection, for example) with my own implementations (as I did with QuadTrees), but FlxPool is better than anything I would have written myself, and it does exactly what is needed.
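For readers who haven't seen one, the core of an object pool is tiny. This is a minimal generic sketch of the pattern, not FlxPool's actual implementation, which is more featureful.

```haxe
class Pool<T> {
    var items:Array<T> = [];
    var create:Void->T;

    public function new(create:Void->T) {
        this.create = create;
    }

    // Reuse a spare instance if we have one, otherwise allocate.
    public function get():T {
        return items.length > 0 ? items.pop() : create();
    }

    // Hand the instance back instead of letting the GC eat it.
    public function put(item:T):Void {
        items.push(item);
    }
}
```

The win is that steady-state gameplay performs zero allocations for these objects: every AABB.get() call is satisfied from the recycled list.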
Specialize when necessary
The XY object is a simple class with x, y, and int_id fields. Since it was used in a heavily trafficked inner loop, I could save a lot of allocations and instructions by moving all this data into a special data structure providing the same functionality as Vector<XY>. I called the new class XYVector, and the result can be seen here. It is an extremely specialized use case and not at all flexible, but it bought us a measurable speedup.
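The idea is essentially "structure of arrays": one object holding three flat vectors instead of a vector of objects. This is a simplified hypothetical sketch, not the real XYVector linked above.

```haxe
import haxe.ds.Vector;

class XYVector {
    // Three flat vectors instead of a Vector of XY objects:
    // no per-element allocations and better cache locality.
    public var x:Vector<Float>;
    public var y:Vector<Float>;
    public var intId:Vector<Int>;
    public var length(default, null):Int;

    public function new(size:Int) {
        x = new Vector<Float>(size);
        y = new Vector<Float>(size);
        intId = new Vector<Int>(size);
        length = 0;
    }

    public inline function push(px:Float, py:Float, id:Int):Void {
        x[length] = px;
        y[length] = py;
        intId[length] = id;
        length++;
    }
}
```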
Inline functions
Now that the broad phase of collision detection is done, we need a lot of narrow checks to find out which objects actually collide. Where possible I try to compare points against shapes rather than shapes against shapes, but sometimes the latter is unavoidable. Either way, each case needs its own specialized check:
private static function _collide_circleCircle(a:Zone, b:Zone):Bool
{
    var dx:Float = a.centerX - b.centerX;
    var dy:Float = a.centerY - b.centerY;
    var d2:Float = (dx * dx) + (dy * dy);
    var r2:Float = (a.radius2) + (b.radius2);
    return d2 < r2;
}
All of this can be improved with a single inline keyword:
private static inline function _collide_circleCircle(a:Zone, b:Zone):Bool
{
    var dx:Float = a.centerX - b.centerX;
    var dy:Float = a.centerY - b.centerY;
    var d2:Float = (dx * dx) + (dy * dy);
    var r2:Float = (a.radius2) + (b.radius2);
    return d2 < r2;
}
When we add "inline" to a function, we tell the compiler to copy and paste its code, substituting the variables, wherever it is used, rather than making an external call to a separate function, which carries overhead. Inlining is not always appropriate (it bloats the code size, for example), but it is ideal for situations like this, where small functions are called again and again.
Wrapping up collisions
The real lesson here is that real-world optimization is rarely just one kind of thing. These fixes are a mix of advanced techniques, cheap hacks, sensible best practices, and the elimination of stupid mistakes. All of it together adds up to a real performance win.
But still: measure twice, cut once! Two hours of pedantic optimization of a function that is called every sixth frame and takes 0.001 ms is not worth the effort, no matter how ugly and stupid the code is.
3. Sorting everything
This was actually one of my last improvements, but it turned out to be so profitable that it deserves its own section. It was also the simplest, and it paid off repeatedly. The profiler pointed me at a routine I could not seem to improve: the main draw() loop was taking too much time. The culprit was the function that sorted all the screen elements before rendering. Sorting the sprites took much longer than drawing them!

If you look at the screenshots from the game, you will see that all the enemies and defenders are sorted first by y and then by x, so that elements overlap back to front and left to right as you move from the top left toward the bottom right of the screen.
One way to cheat is to simply skip the render sort every other frame. That is a useful trick for some expensive functions, but here it immediately produced very noticeable visual bugs, so it would not do.
In the end, the answer came from one of the HaxeFlixel maintainers, Jens Fischer. He asked: "Are you sure you're using a sorting algorithm that's fast for nearly sorted arrays?"

Nope! It turned out I was not. I was using the array sort from the Haxe standard library (merge sort, I believe), which is a good choice for the general case. But I had a very special case: from frame to frame, only a very small number of sprites change their sort position, even when there are a lot of them. So I replaced the old sort call with an insertion sort, and boom! Instantly faster.
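For reference, insertion sort is only a few lines. Its worst case is O(n²), but on an almost-sorted array it approaches O(n), which is exactly the per-frame draw-order situation described above. A generic sketch:

```haxe
class Sorting {
    // Fast for nearly sorted input: each element only shifts
    // past the few neighbors that actually moved this frame.
    public static function insertionSort<T>(a:Array<T>, cmp:T->T->Int):Void {
        for (i in 1...a.length) {
            var item = a[i];
            var j = i - 1;
            while (j >= 0 && cmp(a[j], item) > 0) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = item;
        }
    }
}

// usage: Sorting.insertionSort(sprites, byYThenX);
```

The comparison function (here a hypothetical byYThenX) would order first by y, then by x, matching the game's draw order.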
4. Other technical issues
Collision detection and sorting were the big wins in the update() and draw() logic, but many other pitfalls were hiding in hot inner loops.
Std.is () and cast
In various "hot" inner loops I had code like this:
if (Std.is(something, Type))
{
    var typed:Type = cast(something, Type);
}
In Haxe, Std.is() tells us whether an object is of a given Type or Class, and cast attempts to convert it to a given type at runtime.
There are safe and unsafe versions of cast; a safe cast costs performance, an unsafe one does not.
Safe:
cast(something, Type);
Unsafe:
var typed:Type = cast something;
When an unsafe cast fails, we get null, whereas a safe cast throws an exception. But if we are not going to catch the exception, what is the point of a safe cast? Without a catch, the operation fails either way; it just fails more slowly.
Besides, I was already preceding the safe cast with an Std.is() check. The only reason to use a safe cast is the guaranteed exception, but if we check the type before casting, we have already guaranteed that the cast cannot fail!
So I could speed things up a bit by doing an unsafe cast after the Std.is() check. But why keep writing the same pattern over and over if I do not need a class type check at all?
Suppose I have a CreatureSprite that might be an instance of the subclass DefenderSprite or EnemySprite. Instead of calling Std.is(this, DefenderSprite), I can add an integer field to CreatureSprite with values like CreatureType.DEFENDER or CreatureType.ENEMY, which is even faster to check.
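A sketch of that pattern (class and constant names here are hypothetical):

```haxe
// Plain Int tags replace runtime type inspection.
class CreatureType {
    public static inline var DEFENDER:Int = 0;
    public static inline var ENEMY:Int = 1;
}

class CreatureSprite {
    public var creatureType:Int;

    public function new(type:Int) {
        creatureType = type;
    }
}

class DefenderSprite extends CreatureSprite {
    public function new() {
        super(CreatureType.DEFENDER);
    }
}

// hot loop: if (sprite.creatureType == CreatureType.ENEMY) { ... }
// instead of: if (Std.is(sprite, EnemySprite)) { ... }
```

The tag is assigned once in the constructor, so the hot loop pays only for a single integer comparison.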
Again, this is only worth fixing in places where the profiler clearly shows a significant slowdown.
By the way, you can read more about safe and unsafe casts in the Haxe manual.
Serialization / deserialization of the universe
It was annoying to find places like this in the code:
function copy():SomeClass
{
    return SomeClass.fromXML(this.toXML());
}
Yep. To copy an object, we serialize it to XML, then parse all of that XML, then immediately throw the XML away and return a new object. This is probably the slowest possible way to copy an object, and it hammers memory as well. I had originally written the XML calls for saving to and loading from disk, and apparently I was too lazy to write proper copy routines.
It might have been fine if the function were rarely used, but these calls were happening in the worst possible place: the middle of gameplay. So I sat down and wrote, and tested, proper copy functions.
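A direct copy just duplicates fields, with no serialization, no parsing, and no throwaway XML objects. A hypothetical sketch of what such a copy() might look like:

```haxe
class SomeClass {
    public var name:String;
    public var hp:Int;
    public var skills:Array<String>;

    public function new(name:String, hp:Int, skills:Array<String>) {
        this.name = name;
        this.hp = hp;
        this.skills = skills;
    }

    // Copy fields directly; clone mutable containers so the
    // copy does not share state with the original.
    public function copy():SomeClass {
        return new SomeClass(name, hp, skills.copy());
    }
}
```

The tedious part is remembering to deep-copy nested objects, which is exactly why the lazy XML round-trip existed in the first place.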
Say no to Null
Null checks are quite common, but when compiling Haxe to cpp, a nullable object carries overhead that does not exist when the compiler can assume the object will never be null. This is especially true for basic types like Int: on static targets, Haxe implements their nullability by "boxing" them, which happens not only for variables explicitly declared nullable (var myVar:Null<Int>) but also for things like optional parameters (?myParam:Int). On top of that, the null checks themselves add waste.
I was able to eliminate some of these problems just by reading the code and thinking about alternatives. Can I perform a simpler check that is always true when the object is null? Can I catch null much earlier in the chain of function calls and pass a simple integer or boolean flag down to the child calls? Can I structure things so the value is guaranteed never to become null? And so on. We cannot eliminate every null check and nullable value, but scrubbing them out of hot functions helped a lot.
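One common trick is replacing a nullable Int with a sentinel value, which keeps the Int unboxed on static targets. A hypothetical sketch:

```haxe
class Cooldown {
    // Before (hypothetical): var nextAttack:Null<Int>, which is
    // boxed on the cpp target, and every caller had to null-check it.
    // After: a sentinel value keeps the Int a plain, unboxed Int.
    public static inline var NEVER:Int = -1;

    public var nextAttack:Int = NEVER;

    public function new() {}

    public inline function canAttack(now:Int):Bool {
        // one integer comparison replaces a null check plus unboxing
        return nextAttack != NEVER && now >= nextAttack;
    }
}
```

This only works when a sentinel value can never collide with a legitimate value, so it needs a comment at the declaration site.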
5. Load times
On PSVita we had particularly serious problems with load times in certain scenes. Profiling traced the causes mostly to text rasterization, unnecessary software rendering, expensive button rendering, and similar things.
Text
HaxeFlixel is built on OpenFL, which has an awesome and reliable TextField. But I was using FlxText objects in a suboptimal way. Each FlxText wraps an internal OpenFL text field that gets rasterized. It turned out I did not need most of those fancy text features, and because of the dumb way my UI system was set up, the text fields had to be rendered before everything else could be placed. This caused small but noticeable hitches, for example when opening a popup window.

I made three fixes here. First, I replaced as much text as possible with bitmap fonts. Flixel has built-in support for several bitmap font formats, including AngelCode's BMFont, which makes Unicode, styling, and kerning easy to work with. The bitmap text API differs slightly from the regular text API, though, so I wrote a small wrapper class to ease the transition. (I gave it the fitting name FlxUITextHack.)
This helped somewhat, since bitmap fonts render very fast, but it added a little complexity: I had to prepare separate character sets and add logic to switch between them depending on the locale, instead of just configuring a text field that did all the work for me.
The second fix was to create a new UI object that was a simple placeholder for text but exposed all the same public properties as a text field. I called it a "text region" and added a class for it to my UI library, so that my UI system could lay these regions out exactly like real text fields without rendering anything, until the sizes and positions of everything else were computed. Then, once the scene was prepared, I ran a pass that swapped the text regions for real text fields (or bitmap font text fields).
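The placeholder itself can be almost nothing. A hypothetical sketch of the idea:

```haxe
// Same public surface as a text field for layout purposes,
// but it rasterizes nothing.
class TextRegion {
    public var x:Float = 0;
    public var y:Float = 0;
    public var width:Float = 0;
    public var height:Float = 0;
    public var text:String = "";

    public function new() {}
}

// Layout pass: position TextRegions like any other widget.
// Finalize pass (after everything is measured), roughly:
//   var tf = makeRealTextField(region.text); // hypothetical factory
//   tf.x = region.x;
//   tf.y = region.y;
```

All the expensive rasterization is deferred to one batch at the end, instead of being interleaved with layout.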
The third fix was about perception. If there is a pause of even half a second between input and response, the player perceives it as lag. So I tried to find every scene with a delay between input and the next transition, and added either a translucent "Loading..." overlay or just a plain overlay without text. This simple fix greatly improved the perceived responsiveness of the game, because something happens the moment the player touches the controls, even if the menu takes a little while to appear.
Software rendering
Most menus use a combination of software scaling and 9-slice compositing. This came about because the PC version is UI-resolution-independent: it can run at both 4:3 and 16:9 aspect ratios, scaling accordingly. But on PSVita we already know the resolution, which means we do not need all those oversized assets and runtime scaling algorithms. We can simply pre-render the assets at the exact resolution and place them on screen.
First, I added Vita conditionals to the UI markup that switched the game to a parallel set of assets. Then I needed to create those fixed-resolution assets. The HaxeFlixel debugger turned out to be very useful here: I added my own command to it that simply flushes the bitmap cache to disk. Then I created a special Windows build configuration imitating the Vita's resolution, opened every menu in the game in turn, switched to the debugger, and ran the command to export the scaled versions of the assets as ready-made PNGs. Then I just renamed them and used them as the Vita assets.
Button rendering
My UI system had a real problem with buttons. When created, a button rendered with a default set of assets; a moment later the UI loading code resized it (rendering it again); and sometimes it was rendered a third time before the whole UI finished loading. I solved this by adding parameters that defer button rendering until the final step.
Unnecessary text scans
The journal loaded particularly slowly. At first I thought the problem was the text fields, but no. Journal text can contain links to other pages, indicated by special characters embedded in the raw text itself. These characters are later stripped out and used to calculate where the links go.

It turned out that I was scanning every text field to find and replace these characters with properly formatted links, without even checking first whether the field contained the special character at all! Worse, by design, links were used only on the table of contents page, but I was checking every text field on every page.

I bypassed all those checks with a simple if of the form "does this text field use links at all?" The answer was usually no. Finally, the page that took longest to load was the table of contents itself. Since it never changes within the journal menu, why not cache it?
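The early-out is a one-liner before the expensive scan. A hypothetical sketch (the actual marker character in the game is not specified here, so "$" is made up):

```haxe
class LinkScan {
    static inline var LINK_CHAR:String = "$"; // hypothetical marker

    // Before: every field on every page ran the full find-and-replace.
    // After: a cheap check skips fields that contain no links at all.
    public static function process(text:String):String {
        if (text.indexOf(LINK_CHAR) == -1) return text; // usual case
        return formatLinks(text);
    }

    static function formatLinks(text:String):String {
        // expensive scan/replace, only reached when actually needed
        return text;
    }
}
```

Since the answer to "does this field contain links?" is usually no, the expensive path almost never runs.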
6. Memory profiling
Speed is not just about the CPU. Memory can also be a problem, especially on weak platforms like the Vita. And even once you have squashed the last memory leak, you can still have trouble with sawtooth memory usage in a garbage-collected environment.

What is "sawtooth memory usage"? A garbage collector works like this: data and objects you are no longer using pile up over time and are periodically cleaned out. But you have no precise control over when that happens, so the memory usage graph looks like a saw:

Taking out the trash
On PC the sawtooth barely matters: there is plenty of memory, and the collection pauses are short. On a constrained platform like the Vita, though, each peak of the saw risks hitting the memory ceiling, and each collection risks a visible hitch, so you want the teeth of the saw to stay as small as possible.

Fortunately, Haxe is open source, so unlike with a black box such as Unity, you can actually see what the garbage collector is doing, and the hxcpp runtime even exposes a GC API!

For example, you can trigger a collection manually:
cpp.vm.Gc.run(false); // false = minor collection, true = major collection
By running small collections on a schedule you control, such as at the end of a frame, garbage never piles up very high, so each individual collection stays quick and the saw's teeth stay small, instead of rare, long pauses at unpredictable moments.
7. Design optimizations
All these performance improvements were more than enough to optimize the game for PC, but we were also trying to ship a PSVita version, and we had long-range plans for the Nintendo Switch, so we had to squeeze everything we could out of the code. But "tunnel vision" sets in easily when you focus only on technical hacks and forget that a simple design change can vastly improve the situation.

Throttling effects at high speed
At 16x, particle effects spawn at a furious rate; the dragons' stunning AOE roar alone kicks off a burst of effects, and at high speed those bursts pile on top of one another faster than anyone can follow. So instead of trying to render all of it faster, we simply spawn less of it: at 16x the game emits effects as if it were running at 8x, and at 8x as if it were running at 4x. On a level like Endless Battle 2 the difference is almost impossible to see, but the savings are substantial.
We also applied platform-specific restrictions. On Vita, we skip the lightning effect when Azra casts a spell or boosts a character, and we used other similar tricks.

Hiding bodies
And what about that huge pile of enemies in the bottom right corner of Endless Battle 2? There are literally hundreds or even thousands of enemies drawn on top of one another. Why not simply skip rendering the ones we cannot even see?

This is a design-level trick that requires some cunning programming, because we need a reasonably smart algorithm to determine which objects are hidden. Like most games of this kind, we draw using the painter's algorithm: objects earlier in the draw list are overlapped by everything that comes after them. So I divide the play field into a coarse grid and walk the draw list from the topmost sprite down, stamping each fully opaque enemy's footprint into the grid cells it covers. For each enemy, that gives an estimate of how much of it will be covered by the sprites drawn on top of it.
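A sketch of how such an occlusion grid could work. This is a hypothetical simplification, not the game's actual code; the cell granularity and names are made up.

```haxe
class OcclusionGrid {
    var cols:Int;
    var rows:Int;
    var hits:Array<Int>;

    public function new(cols:Int, rows:Int) {
        this.cols = cols;
        this.rows = rows;
        hits = [for (i in 0...cols * rows) 0];
    }

    public function clear():Void {
        for (i in 0...hits.length) hits[i] = 0;
    }

    // Called while walking the draw list from the topmost sprite down:
    // returns how many sprites already cover this cell, then stamps it.
    public function stamp(cellX:Int, cellY:Int):Int {
        var i = cellY * cols + cellX;
        var covered = hits[i];
        hits[i]++;
        return covered;
    }
}
```

Summing the returned coverage over a sprite's cells gives the predicted overdraw used for the "buried" decision.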
If the predicted amount of overdraw is high enough, I mark the enemy as "buried," with two thresholds: completely buried, meaning not rendered at all, or partially buried, meaning it is rendered but without its health bar. (By the way, here is the overdraw-checking function.)

For this to work correctly, the resolution of the occlusion map has to be tuned properly. If it is too fine, we end up doing a pile of extra bookkeeping; if it is too coarse, we hide objects too aggressively and get visual bugs. Tuned correctly, the effect is barely visible, but the speedup is very noticeable: there is no faster way to draw something than to not draw it at all!

Better to preload than to stutter
In the middle of battles I noticed frequent hitches which I was sure were caused by garbage collection pauses. Profiling showed otherwise. Further testing revealed that the hitch happened whenever a wave of enemies started to spawn, and later I discovered it only happened for waves of enemy types that had not appeared yet. Obviously the problem was in some enemy setup code, and sure enough, profiling revealed a hot function in the graphics setup. I started building an elaborate multithreaded loading scheme, but then realized I could just shove all the enemy graphics loading into the battle's preload step. Individually these were tiny loads; even on the slowest platforms they added less than a second to the battle's total load time, but they eliminated very noticeable hitches during gameplay.

Reserve for later
If you work in a memory-constrained environment, there is an ancient industry trick: allocate a big chunk of memory for no reason at the start of the project, then forget about it until the end. At the end of the project, when you have spent your entire memory budget, this "nest egg" saves you. We were in exactly that situation: we were just a dozen bytes short of shipping the PSVita build, but damn it, we had forgotten the trick and were stuck! The only remaining option seemed to be a week of desperate and painful surgery on the code. But wait!
One of my earlier (failed) optimizations was loading as many assets as possible and keeping them in memory permanently, because I had wrongly assumed that long load times were caused by reading assets off disk at runtime. It turned out that was not the case, so almost all of those unnecessary preload-and-hold-forever calls could simply be removed, and suddenly I had free memory!

Getting rid of things we don't use
While working on the PSVita build, we realized there was a lot we simply did not need. At the Vita's low resolution the original graphics mode and the HD graphics mode were indistinguishable, so we used the original graphics for all sprites. We also improved the palette-swapping feature with a dedicated pixel shader (previously it used a software rendering routine).

Another example was the battle map itself. On PC and home consoles we layer a bunch of tilemaps on top of each other to build a multi-layered map. But since the map never changes, on Vita we could just bake everything into one ready-made image drawn with a single draw call.

Besides the extra assets, the game was full of extra calls. For example, defenders and enemies sent a regeneration signal every frame, even when they had no ability to regenerate, and if a UI was open for such a creature, it was redrawn every frame. There were another half dozen examples of little algorithms computing something inside a hot function and never returning the results anywhere. Usually these were leftovers from structures built early in development, so we just cut them out.

NaNopocalypse
This one was funny. The profiler reported that a lot of time was being spent computing angles. Here is the generated Haxe C++ code in the profiler:

This is one of those functions that takes a value like -90 and converts it to 270. Sometimes it receives a value like -724, which after a few loop iterations comes down to 4. For some reason, though, this function was being passed the value -2147483648. Stepping -2147483648 upward by 360 at a time takes 5,965,233 iterations to bring it into range, and this was happening inside update()! So where was that value coming from?
The culprit was NaN, floating point's "Not a Number" value, which comes from operations like dividing by zero; on the cpp target, casting NaN to an Int yields -2147483648. As a quick fix I added a Math.isNaN() check that reset the angle whenever this (rather rare, but inevitable) event occurred. Meanwhile I kept hunting for the root cause, found it, and the hitching disappeared completely. It turns out that skipping 6 million pointless iterations buys you a big speed boost! (A fix for this bug was merged into HaxeFlixel itself.)

Don't outsmart yourself
OpenFL and HaxeFlixel are actively developed, and some of the clever custom replacements I had written years ago had long since been overtaken by improvements in the frameworks themselves. In several places my hand-rolled "optimizations" were now slower than the stock code paths, so the fix was simply to delete my cleverness and go back to the framework. The lesson: re-measure your old hacks from time to time, because an "optimization" that once helped can quietly turn into a pessimization.
8. Sorry, Endless Battle 2
Yes, it is great that we implemented all these little speed tricks. Honestly, we did not even notice most of the problems until we started porting the game to less powerful systems, where on some levels they became absolutely intolerable. I am glad we managed to speed things up in the end, but I also believe we should have avoided pathological level design in the first place. Endless Battle 2 puts far more load on the system than any other level in the game, especially in New Game+, and in the end we had to make platform-specific decisions about how it behaves on PSVita compared to the PS4 and Xbox One. An "endurance" level like this is inherently a worst case; even on PC, Endless Battle 2 was always the heaviest thing in the game.
This is something we are keeping firmly in mind for Defender's Quest II! If the "optimal" strategy in a tower defense is to pile up an unbounded number of enemies and stall them forever, the simulation cost grows without limit, so the design itself needs to put a ceiling on the madness rather than reward infinite stockpiling.
9. Conclusion
The takeaway is that the real performance killers are rarely the language or the engine; they are the small, dumb details buried in hot loops, and the only way to find them is to profile. Measure first, fix what the profiler actually shows you, then measure again. Most of the fixes above were not rocket science, just the patient elimination of waste.
I am glad that we get to carry all these improvements into the development of Defender's Quest II. Honestly, if we had not done the PSVita port, I probably would not have attempted half of these optimizations. So even if you never buy the game on PSVita, you can thank that little console for making Defender's Quest significantly faster.