October 12, 2017

The Rewrite

There has been a very clear lack of posts this year, but not because I haven't been busting my butt. In my last post, I outlined what I wanted to accomplish and the time frame I was hoping to do it in. For the fun of it, I haven't looked at that post since I published it in December 2016, so I feel like I can take a guess at where I wanted to be by this point.

I believe the main thing I was hoping for was getting a custom physics engine up and running, and I had debated reworking the graphics engine. I think I was aiming to get a prototype going by the end of summer (something I certainly haven't been able to do). Now I'm going to take a look.

Well, I certainly promised a few other things that I haven't finished yet, and I dramatically deviated from my outline. Maybe the best approach is to outline what I have done over the last almost ten months.

Physics
This one was a big one. I knew going into it that I would be going deep into a very technically complex topic; there is a huge difference between knowing how physics engines work and actually writing one. It was a challenge, which I took on, and I am very happy with the result. I have a very unique physics engine that should be able to do everything I want. It is extremely robust and has no issues handling high-speed collisions. I also have a nice automated test suite to make sure all the little parts tick correctly. I still haven't built out the part that I originally intended for my game, but it isn't a high priority because I have solved most of the difficult technical challenges.
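I won't go into my engine's internals here, but to give a flavor of why high-speed collisions are hard: a discrete engine only checks for overlap at the end of each timestep, so a fast object can pass straight through a thin obstacle. One standard answer is continuous collision detection, computing the time of impact within the step. Here is a minimal, purely illustrative sketch (not my actual code; the shapes and numbers are made up) for a sphere moving toward a plane:

```java
// Hypothetical sketch: continuous collision detection for a sphere moving
// toward a static plane along one axis. Computing the time of impact within
// a timestep, instead of checking overlap only at the end, is how engines
// avoid tunneling at high speeds. Not my engine's actual implementation.
public class SweptSphere {
    /**
     * Returns the fraction t in [0, 1] of the timestep at which a sphere
     * moving from startX toward startX + velocityX*dt first touches the
     * plane x = planeX, or -1 if it does not hit during this step.
     */
    static double timeOfImpact(double startX, double velocityX, double dt,
                               double radius, double planeX) {
        double distance = planeX - startX - radius; // gap from sphere surface to plane
        double travel = velocityX * dt;             // displacement this step
        if (travel <= 0) return -1;                 // moving away or stationary
        if (distance < 0) return 0;                 // already touching/penetrating
        if (travel < distance) return -1;           // does not reach the plane this step
        return distance / travel;                   // normalized time of impact
    }

    public static void main(String[] args) {
        // Sphere of radius 0.5 at x=0, moving at 100 units/s, plane at x=10.
        // A discrete overlap check at dt=1s would see it teleport past the plane.
        double t = timeOfImpact(0.0, 100.0, 1.0, 0.5, 10.0);
        System.out.println(t); // first contact at t = 9.5/100 = 0.095
    }
}
```

A real engine then advances the object to that impact time and resolves the contact, rather than letting it pass through.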

Graphics
As I had expected, I wasn't going to be doing a lot with client-side graphics. However, what I ended up doing was completely ripping out my old graphics code and rewriting it as an engine from scratch. I was able to come up with something I believe can do high-quality real-time shadows. Time will tell if that truly ends up being the case, but it would be wonderful if it works out; based on a few prototypes, I suspect it will. I'm also using a completely different rendering technique that I haven't seen used by anyone else.

The motivating factor for writing my own graphics engine is the art style I've decided to go with: primitive object graphics. That doesn't mean low-polygon, though it might have that look at times. I basically want to mix high-quality lighting with basic shapes to give the game a very unique feel.

Interface
I was planning to upgrade my graphics; what I wasn't planning on was reworking my integration with CEF. I ended up reworking my build system and dramatically simplifying it. I am now able to make use of bleeding-edge web technologies and can easily write automated tests anywhere in my software stack. I have also opened up the ability to debug every layer, which is going to be extremely useful for development. Effectively there is no guess-and-run; everything is testable and debuggable. The build system is straightforward and very easy to work with. Most of my frustrations with having a complex build process were figuring out what went wrong and how to diagnose issues. Now everything is easy to debug and test.

Network
A while ago, I was using a number of libraries for networking, but they lacked customization and seemed to carry a decent amount of overhead. Seeing how I basically reworked most of the engine/game, I decided to rip out the network libraries and roll my own solution. Instead of going fully low-level, I settled on using Netty as the backbone for my implementations. I was able to whip up a TCP client/server, an HTTP server, and a WebSocket server in a few days, each customized to match the needs of the engine. I now have a proper authentication layer and security around the client/server, as well as considerable control over how everything operates. The best part is that I basically tossed four libraries and now use just one, which does everything I need and more.
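To give a concrete taste of the kind of problem a custom TCP layer has to solve: TCP delivers a byte stream, not messages, so a protocol needs explicit frame boundaries. A common scheme (the one Netty automates with its LengthFieldBasedFrameDecoder) is a length prefix before each message. This is a self-contained illustrative sketch, not my actual protocol code:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of length-prefixed framing over a byte stream.
// encode() prepends a 4-byte length; decode() extracts every complete
// frame from a buffer and tolerates partial reads, as required when
// TCP hands you arbitrary chunks. Not my actual engine protocol.
public class FrameCodec {
    static byte[] encode(String message) {
        byte[] payload = message.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(4 + payload.length);
        buf.putInt(payload.length); // 4-byte length prefix
        buf.put(payload);
        return buf.array();
    }

    /** Decodes every complete frame in buf; leftover partial bytes stay in buf. */
    static List<String> decode(ByteBuffer buf) {
        List<String> frames = new ArrayList<>();
        buf.flip(); // switch the buffer from writing to reading
        while (buf.remaining() >= 4) {
            buf.mark();
            int length = buf.getInt();
            if (buf.remaining() < length) { buf.reset(); break; } // partial frame
            byte[] payload = new byte[length];
            buf.get(payload);
            frames.add(new String(payload, StandardCharsets.UTF_8));
        }
        buf.compact(); // keep any partial frame for the next network read
        return frames;
    }

    public static void main(String[] args) {
        ByteBuffer inbound = ByteBuffer.allocate(1024);
        inbound.put(encode("hello"));
        inbound.put(encode("world"));
        System.out.println(decode(inbound)); // [hello, world]
    }
}
```

With Netty you register a decoder like this once in the channel pipeline and then work entirely in terms of whole messages, which is a big part of why the servers came together so quickly.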

Math
Slightly unrelated, but I ended up extending my own linear algebra library because I was sick of dealing with a terrible API and not being able to distinguish between immutable and mutable data structures. I even included a few useful types to deal with special cases and to make the type system do correctness checks for me when performing calculations. All of that dramatically simplified my work on the physics engine and made writing unit tests a breeze.

I also wrote an arbitrary-precision fixed-point library. This lets me work at whatever precision I need and use fixed-point for consistent number handling where it makes sense. I'm still using floating point for the internals of the physics engine and for the graphics engine, simply because it is faster. However, I can now store and operate on numbers of any scale when I need to, which is kind of important when working at solar-system scales. Some calculations take up to three times longer than their floating-point equivalents, while others are just as fast. That is enough that I won't use fixed-point for core calculations, but I certainly can use it for intermediate values when transitioning between coordinate systems. It was something I had been trying to figure out previously; now that I can scale to whatever precision I want while still running at fairly high performance, there isn't any concern.
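For readers unfamiliar with the idea: arbitrary-precision fixed-point stores a value as an unbounded integer count of some fixed fraction of a unit, so arithmetic is exact and reproducible across machines. Here is a minimal sketch of the concept (the names, the choice of 32 fraction bits, and the API are all made up for illustration; this is not my library):

```java
import java.math.BigInteger;

// Hypothetical sketch of arbitrary-precision fixed-point: a value is stored
// as an unbounded integer count of 2^-32 units, so addition and multiplication
// are exact and bit-for-bit reproducible (the "consistent number handling"
// property), at the cost of being slower than hardware floats.
public class Fixed {
    static final int FRACTION_BITS = 32;
    final BigInteger raw; // the value times 2^FRACTION_BITS

    Fixed(BigInteger raw) { this.raw = raw; }

    static Fixed fromLong(long v) {
        return new Fixed(BigInteger.valueOf(v).shiftLeft(FRACTION_BITS));
    }

    Fixed add(Fixed o) { return new Fixed(raw.add(o.raw)); }

    Fixed multiply(Fixed o) {
        // The product carries 2^64 scaling; shift right to restore 2^32.
        return new Fixed(raw.multiply(o.raw).shiftRight(FRACTION_BITS));
    }

    double toDouble() {
        return raw.doubleValue() / Math.pow(2, FRACTION_BITS);
    }

    public static void main(String[] args) {
        // A solar-system-scale coordinate plus a tiny offset: the offset is
        // preserved exactly, where a 64-bit float would start losing it.
        Fixed au = fromLong(149_597_870_700L); // roughly 1 AU in metres
        Fixed shifted = au.add(fromLong(1));
        System.out.println(shifted.raw.subtract(au.raw).shiftRight(FRACTION_BITS)); // 1
    }
}
```

The immutability here (every operation returns a new value) is the same property I wanted the type system to enforce in the linear algebra library.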

Gameplay
After all the previous work I've finally been able to get back to working on the game itself. I spent a ton of time working out the engine design and am now able to crank out features related to the game. It is extremely easy, and I've come up with an architecture that looks to be robust and extensible.

Administration
I am now also using my own wiki for keeping track of architectural and development information (such as how to build the application). I wasn't sure how I wanted to keep track of it, but figured it is better to use a real wiki with markdown than text files in the repo. I'm also recording and planning out my designs and thoughts in OneNote. It seems to work well, but I'm split on whether Google Drive would simply work better. I'm also using a proper issue tracker now and organizing my backlog into cards. It really is just a glorified TODO list at this point, but it helps me stay focused and provides a nice way to show progress.

Based on my outline from before, it looks like I'm roughly on track. I'm behind on a few things, but I also pulled in a lot more than I was expecting. I will continue working towards a playable demo (technically I have one, but it's nothing special), and as I do I'll be continually improving the physics and graphics engines. To the end of the backlog I go, one card at a time.


December 27, 2016

Looking back at 2016

I figure with the end of 2016 coming closer, it is time to look back at the year. One of the major differences this year has been my ongoing project. Not only has it taken up a large portion of my time, but it has also been a huge learning experience and a refresher on things I had forgotten from my undergraduate engineering courses. I have also come to realize that frequent updates to this blog are just not going to happen. It isn't because I don't want to post updates; I just tend to prefer working on my project over posting about it.

Near the end of 2015, I wanted to get back into working on my own games. Shortly after finishing my master's program I was mentally and psychologically exhausted. I had spent a significant amount of time playing games with a few friends and started to get an itch for making my own. I was originally thinking of making a mobile game I would want to play while taking transit: something I could easily pause, but also complicated and requiring a good amount of problem-solving. At the time I had spent many hours playing Factorio, so I wanted to create a tower defense that required you to collect resources, manage power/communications, and fend off attacks from a moderately intelligent adversary.

I got a very basic prototype done after a few weeks, but I started to envision a much bigger project. I was also no longer taking transit, simply because driving took about a fifth of the time. Together those two factors made me change my mind. I began looking into technologies and libraries I could use to make it happen. Most of my development had focused on 2D, though I had done some basic 3D work. In the end, I decided it would be worth my time to evaluate Unreal 4 against Unity 5, as my experience with both engines had shown they had extremely robust rendering engines that would easily outdo anything I could create. They also had the nice feature of supporting multiple platforms. The end result was my decision to go with Unreal 4, as its performance as my rendering engine exceeded what Unity 5 could deliver.

After a few weeks of hacking together the general concept for what I wanted to create, I threw together a basic prototype in April 2016, which you can take a look at below:

It was very simple but gave me the foundation for what I was going to build. As time went on I slowly built up a robust engine which delegates the rendering to Unreal. I still use that approach today, and surprisingly it allows for rapid iteration and excellent utilization of the hardware.

I had a short and bitter fight with Unreal's GUI frameworks in May of 2016 and decided not to bother fighting with them. Instead, I decided to stick a web overlay on top of the game canvas and use web technologies for my interface. On the surface it seems more complicated, but it turns out to be significantly simpler and more productive. It also means I can create really solid-looking user interfaces.

Everything in my game is built to be networked and scalable. Over the summer I worked on pushing towards planetary-sized worlds: optimizing and reworking terrain generation, and pushing the limits of what can be done with today's technology. Eventually I got terrain generation to a point where I could very quickly create terrain on the fly and travel at high speeds with fairly minimal lag. This is when I started to notice the limitations of my physics engine, but more on that later.

I then started working on making the terrain modifiable in September of 2016. I was originally planning to use an embedded database engine, but I quickly came to realize that a database just isn't fast enough. That isn't to say the databases I tested were not high quality, or that they couldn't scale; it was mostly that I needed them to store and retrieve data with extremely low latency. I also didn't need to do a significant amount of querying, so most of the advantages of a database were not helpful. I ended up writing my own storage system, which handles high levels of concurrency with low latency by keeping fragments of data in memory and using an asynchronous, event-based approach. It worked really well and was straightforward when it came to implementing replication across clients. I only partially implemented that, enough to prove it would work, but then something else caught my attention.
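To sketch the shape of that design (purely illustrative; the names and structure here are invented, not my actual storage code): fragments live in an in-memory map, reads are answered immediately, and writes flow through a single-threaded event queue so the write path needs no locking. Replication and persistence can hang off the same queue.

```java
import java.util.Map;
import java.util.concurrent.*;

// Hypothetical sketch of an in-memory, event-based fragment store.
// Reads come straight from memory; writes are queued as events and
// applied by one writer thread, which keeps latency low and makes
// fanning the same events out to replicas straightforward.
public class FragmentStore {
    private final Map<Long, byte[]> fragments = new ConcurrentHashMap<>();
    private final ExecutorService events = Executors.newSingleThreadExecutor();

    /** Low-latency read straight from memory. */
    byte[] read(long fragmentId) {
        return fragments.get(fragmentId);
    }

    /** Asynchronous write: queued as an event, applied by the single writer. */
    CompletableFuture<Void> write(long fragmentId, byte[] data) {
        return CompletableFuture.runAsync(() -> {
            fragments.put(fragmentId, data);
            // A real system would also append to a log and notify replicas here.
        }, events);
    }

    void shutdown() { events.shutdown(); }

    public static void main(String[] args) {
        FragmentStore store = new FragmentStore();
        store.write(42L, new byte[] {1, 2, 3}).join(); // wait for the event to apply
        System.out.println(store.read(42L).length); // 3
        store.shutdown();
    }
}
```

The appeal over a database is that there is no query planner, no serialization boundary, and no disk in the hot path; the trade-off is that you give up general querying, which, as noted above, I didn't need.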

The physics engine was causing most of my headaches. Again, this was not the fault of the physics engine; it was doing a great job for what it was designed for. It just wasn't scaling well, and it caused a massive amount of complexity in my engine because I had to translate the game structures into something the physics engine could work with. I was also suffering from issues at high speeds and the occasional terrible collision response. I even had to write my own 'fix' for tunneling, because the solutions provided by the physics engine only worked most of the time; I could consistently break them.

I had written my own physics engine for a 2D game many years ago. I was able to handle 17,000 colliding objects at 60 fps using a few strategies I developed, strategies whose names I only discovered a few years later. Then it sort of hit me: there are no 2D physics engines that could handle that many objects, and I have doubts any will be able to in the near future. Part of the reason is that they are generalized solutions and can't make certain classes of optimizations in the problem space. Their generality makes them inefficient, which isn't a bad thing; it just means they are limited to micro-optimizations such as SIMD instructions, GPU acceleration, newer hierarchy structures, or newer algorithms. Whereas if I wrote my own, I could tailor it to my application and make the physics engine do only the work that it actually needs to. I would also be able to deal with high-speed collisions and tunneling without having to fight against the physics engine.
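I won't name the exact strategies here, but one standard technique in this family, and a plausible reason thousands of objects become tractable, is a broad-phase pass that avoids testing all O(n²) pairs. A spatial hash buckets each object by the grid cells its bounding box overlaps, and only objects sharing a cell become candidate pairs. A minimal sketch (illustrative only, not necessarily one of my strategies):

```java
import java.util.*;

// Hypothetical sketch of a spatial-hash broad phase. Each object is
// inserted into the grid cell(s) its AABB overlaps; only objects that
// share a cell are emitted as candidate pairs for the narrow phase,
// cutting the pair count from O(n^2) to roughly O(n) for sparse scenes.
public class SpatialHash {
    final double cellSize;
    final Map<Long, List<Integer>> cells = new HashMap<>();

    SpatialHash(double cellSize) { this.cellSize = cellSize; }

    static long key(int cx, int cy) {
        return (((long) cx) << 32) ^ (cy & 0xffffffffL); // pack cell coords into one key
    }

    /** Insert object id with an axis-aligned bounding box. */
    void insert(int id, double minX, double minY, double maxX, double maxY) {
        for (int cx = (int) Math.floor(minX / cellSize); cx <= (int) Math.floor(maxX / cellSize); cx++)
            for (int cy = (int) Math.floor(minY / cellSize); cy <= (int) Math.floor(maxY / cellSize); cy++)
                cells.computeIfAbsent(key(cx, cy), k -> new ArrayList<>()).add(id);
    }

    /** Candidate pairs: only objects that share at least one cell. */
    Set<Long> candidatePairs() {
        Set<Long> pairs = new HashSet<>();
        for (List<Integer> bucket : cells.values())
            for (int i = 0; i < bucket.size(); i++)
                for (int j = i + 1; j < bucket.size(); j++) {
                    int a = Math.min(bucket.get(i), bucket.get(j));
                    int b = Math.max(bucket.get(i), bucket.get(j));
                    pairs.add((((long) a) << 32) | b); // dedupe pair across shared cells
                }
        return pairs;
    }

    public static void main(String[] args) {
        SpatialHash grid = new SpatialHash(10.0);
        grid.insert(0, 0, 0, 1, 1);         // near the origin
        grid.insert(1, 2, 2, 3, 3);         // same cell as object 0: candidate pair
        grid.insert(2, 100, 100, 101, 101); // far away: never tested
        System.out.println(grid.candidatePairs().size()); // 1
    }
}
```

This is exactly the kind of place where a tailored engine wins: if you know your objects' sizes and distribution, you can pick the cell size and data layout to match, which a generalized engine cannot.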

After reading a number of books on the subject, a few research papers, a thesis, and looking at various open-source physics engines, I started thinking about how I could integrate a custom physics engine into my game engine. It was now December of 2016, but I wasn't ready to hook it up and swap out my existing physics engine interface; I wanted to start by building a prototype. Very quickly I realized that I would need a visual debugger to help me develop the engine and come up with test cases to verify correctness as I switched from naive implementations to heavily optimized solutions. I slapped together a quick visual debugger and was able to see my simulations, and I made sure I could easily step forward through time to confirm everything was behaving correctly.
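The key enabler for that workflow is a simulation that advances by an explicit, fixed timestep on demand, so a test (or a human at the debugger) can inspect state frame by frame and compare a naive implementation against an optimized one. A toy sketch of the idea (invented names, not my debugger's code):

```java
// Hypothetical sketch of on-demand fixed-timestep stepping: the debugger's
// "step" button and the automated tests both call step(), so runs are
// deterministic and two implementations can be compared frame by frame.
public class SteppedSimulation {
    double position = 0, velocity = 1;   // toy one-body state
    static final double DT = 1.0 / 60.0; // fixed timestep keeps runs reproducible

    /** Advance exactly one frame. */
    void step() {
        position += velocity * DT;
    }

    public static void main(String[] args) {
        SteppedSimulation sim = new SteppedSimulation();
        for (int frame = 0; frame < 60; frame++) sim.step(); // one simulated second
        System.out.println(sim.position); // approximately 1.0
    }
}
```

Because the timestep never varies, a recorded scenario replays identically every run, which is what makes "step until it misbehaves" a viable debugging strategy.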

It sort of bothers me. I originally wanted to use a bunch of frameworks and libraries to make my life easier and let me develop the game faster. Instead, I kept having to deal with their limitations and find workarounds, and I noticed my productivity decreased. I do have the fortunate benefit of not having a deadline, so I can take the hit of writing my own solutions where appropriate.

This year has been a huge learning process. A year ago I would never have dreamed of writing my own physics engine, or of writing my own storage instead of using a database. I certainly would have thought that by this point I'd have something playable instead of an early tech demo. I guess that is the problem when you have to build your own technologies to solve problems others have yet to solve.

Well enough about the past, where am I going next?
Short-term (Q1 2017)

  • [Physics] Expand capabilities of the physics engine, such as supporting more shapes
  • [Physics] Deal with high-speed interactions and tunneling
  • [Engine] Switch from existing physics engine to custom physics engine
  • [Graphics] Finish implementing client-side terrain updates

Medium-term (Q2-Q3 2017)

  • [Graphics] Determine what type of art style to use
  • [Physics] Optimize custom physics engine including special handling for game
  • [Gameplay] Start working on core gameplay
  • [Interface] Revisit UI with updated technologies

Long-term (Q4 2017+)

  • [Gameplay] Get playable demo
  • [Graphics] Polish rough areas

September 11, 2016

Summer 2016 and some thoughts

It feels like it was only a few weeks ago that I posted something here. However, the calendar tells me it has been more than two months; so much for my biweekly posting plan. I've been pretty busy making more progress on my project, but I've also been feeling a bit of burnout and needing some time to relax. Most of the last two months has been dedicated to solving some of the harder problems in my project. Some of the challenges involved sketching out pages of design for weeks at a time without writing any code. Others were solved by writing tests or small prototypes.

I had hit a complexity wall. Sometimes you get to a stage in a project where you know what you want to do but cannot wrap your head around the complexity of it. On paper it seemed straightforward, but once I started to code it I had to fight to make sure it was doing exactly what I wanted. I was entering territory where writing tests was simply not sufficient: as the design evolved and changed, any tests I wrote quickly became obsolete and meaningless.

It felt like I was back to when I was first learning programming and trying to create a text-based game. I was about 10 years old and had been coding for only a few months, but I had created a game that was about 1,000 lines long. It was my first experience of the complexity wall and my first realization that writing software is hard and time-consuming. Years later I hit it again while working on a fairly complex game inside of Warcraft 3; I was getting close to 10k lines of JASS2 (the scripting language for Warcraft 3).

Then came a realization: software is developed on a spectrum. On one end you can write code which is fragile and small but gets the job done quickly. On the other, you can write code which is resilient and large but takes a long time to get the job done. I have noticed over the years how fragile-and-small code is a common theme among programming contests and many successful projects. It is really attractive because it takes the least amount of time to get something done. In programming contests it gives you more time to think about solving the problem and lets you finish in less time; in successful projects, it allows you to get the product out and start making money sooner.

I've also seen the dark side of fragile-and-small code, one that is often ignored. I've seen many games start out well, push out features quickly, and start making money. Then a few years go by and the project ends up abandoned because progress slows to a grinding halt, bugs constantly plague the game, and the money runs dry, which forces the developer to stop supporting it. You frequently hear the words 'rewriting', 'bug fixing', 'new systems', etc. If you don't believe me, look at Steam and the vast majority of indie games: the ones you only heard small things about and the ones that haven't turned into wildly popular games (the ones that after a few years are still under 100k copies sold). It makes me wonder if we are going to have a repeat of the video game crash of 1983; mobile apps have not helped with this. Writing resilient-and-large code is not ideal when it comes to games either. It simply takes too long to get the product out the door; most game projects from start to 'release' take a few years or less. Early access has helped, but fragile-and-small software can take advantage of early access too.

So what is ideal? Well, the short answer is nobody really knows. The long answer depends on what type of software you are producing. If you are writing flight software for an aircraft, or writing software for autonomous vehicles, or safety systems to support industrial applications, then writing resilient software with minimal bugs and high reliability is critical. If you are writing a mobile game, or a simple app for a device where bugs are not critical and the market is constantly shifting to the next hot thing, then getting your app out as quickly as possible is critical. You cannot escape the project management triangle:

Indie developers really are stuck with 'Cheap', so there are really only two best-case options:
- Cheap & Fast, but not Good
- Cheap & Good, but not Fast
When I say 'good' I am not referring to the quality of the game, but to the quality of the software produced for the game. When I say 'fast' I'm not referring to the game running well on low-end machines or getting high frame rates, but to the game getting released sooner rather than later. Those with a keen eye will realize that when a game gets delayed, it is because the developers are trying to go from Cheap & Fast to Cheap & Good. As far as I know, in software there is no feasible way to do that. You can go from Cheap & Fast to Fast & Good, but that seems to kill most indie games.

So where am I going with all this? Well, with my current project I find Cheap & Fast to be unacceptable. I generally stick to Cheap & Good, which typically means it takes a long time for me to finish a project. It is also what turned me off coding contests and competitions: the realization that they promote writing terrible-to-maintain code and reward those who can make something work quickly, even if the code itself is throwaway. The problem is that few want to invest the time required for a contest about creating a robust, scalable, high-quality system. However, the real world is that kind of competition, and the success of a project almost never rests on the quality of the software itself, but on the quality of the overall product in the market. The issue is that low-quality products can become popular and high-quality products can end up never being profitable. When I think about it, though, high-quality products which do become successful tend to stay popular over the long term, and I cannot think of any low-quality products which have stood the test of time. I believe that is why I tend to prefer the idea of a Cheap & Good project over a Cheap & Fast one.

I've found a sort of way to cheat on the fast part, with two simple things: technology and architecture. My speed is limited by the technologies I choose, which is why I'm continuously looking for better frameworks/libraries/languages to use. If I don't, my development speed stays constant, which really limits the types of projects I can take on. The other part is architecture, something you must learn and experience to really understand. Part of the reason I am always working on side projects is that architecture takes a long time to improve on if you don't keep building your understanding of the ways you can solve problems. It doesn't mean I am breaking the project management triangle; it just means I'm improving my overall resource capabilities. A diagram to visualize:
Essentially, you can never get into the fast area, but you can certainly get closer to it by improving your total resource capabilities. In the diagram this is going from the black circle to the white circle. Obviously getting to the white circle is never going to happen, but you move away from the black one every time you improve your toolset, gain knowledge which helps you solve a problem, or find a better approach to the overall problem. Yes, every time you do one of those you will pay a little bit in the cost area, but the gains are higher productivity while maintaining high quality and keeping the overall cost of the project low.

A real-world example would be game engines. You could write your own, or you could use Unity, Unreal, or another engine. Writing your own has a cost: the time it will take you to write it. Unity and Unreal usually come with a price tag or take a slice of your profits. The decision you need to make is whether the cost is worth the benefits. Sometimes using a framework adds development cost with no long-term benefits; other times a very popular, well-maintained open-source framework comes with minimal development cost and significant long-term benefits. It is a hard problem knowing when to write your own, when to stop using a framework, and when to pick up a new one. The same applies to other aspects of software development: languages, design tools, development environments, deployment tools, build tools, source control, etc.

Sometimes I write my own, but usually for smaller components, and I typically look into how others before me have solved the problem. Building on the knowledge and lessons of others tends to be better than pretending you are an expert and writing your own solution in isolation; you can learn from mistakes without having to experience them first-hand. Don't reinvent the wheel, but when you do need some special custom wheel to solve your exact problem, make sure you understand why the wheel shouldn't be square. Don't use a hammer on a screw. Picking a framework or library just because you are familiar with it doesn't mean it will help you solve your problem any faster. Maybe during the early stages you will see lots of benefits, but over time you will start to notice the pains of the decision. You, or the people who continue with the project, will have to live with it.

After a summer of low tangible productivity on my project, I still feel a very strong sense of accomplishment. I am working towards fleshing out some of the bigger and more complex features of my project, and I am writing them in a way that makes it very unlikely I will ever have to rewrite them. That is a fairly bold statement, but one I strongly believe in. I feel that if you spend the time understanding the problem and prototyping various solutions, you will eventually arrive at one which does everything you want and is extremely easy to expand or reuse. It doesn't need to be generalized, but it does need to be flexible enough that there is a straightforward way to get to your end goal(s). I think that is the advantage of being both the designer and the developer of my project: I know what I want, and I am the one making it a reality.

A minor side note: it is a bit scary to think that a single large, complex feature in my project took about two months to complete, maybe a couple of weeks less if I discount vacation and general downtime.