3D Programming with JavaScript


This article is based on talks I gave at JSinSA 2014 and Entelect DevDay 2014.


I needed to develop a 3D interface for simulating a two player version of PacMan. I also enjoy experimenting with different tools for game development, and so I decided to look into developing 3D applications for the web.

This article explains some of the concepts that will be useful when learning 3D Programming with JavaScript. It also contains some information about popular JavaScript based 3D and physics APIs that can ease the development process and let the developer work on ideas and not technical plumbing.

Modern Browsers


Modern browsers have evolved drastically. Browsers were initially only capable of rendering HTML and submitting forms; they were later enhanced to let developers load parts of the page asynchronously. The demand for more interactive applications on the web grew and led to the development and use of browser plugins like Flash. As the demand grew for interactive applications that also worked across platforms like mobile devices, browsers needed something native that would give developers the freedom to build applications for many platforms. The result was the creation of <canvas>.

The canvas element is simply an HTML element that allows shapes to be drawn on the page.

To accommodate different drawing needs, the canvas provides two different contexts:

2D Context

The 2D context allows for drawing simple shapes in a step by step manner. Drawing shapes involves setting the brush properties before drawing a specific shape. The 2D context makes it easy to draw text, lines, arcs, rectangles, etc.
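As a minimal sketch (assuming a canvas element with the hypothetical id "scene" exists on the page), drawing with the 2D context looks like this:

```javascript
// Look up the canvas element and grab its 2D drawing context
var canvas = document.getElementById('scene');
var ctx = canvas.getContext('2d');

// Set the brush properties, then draw: a filled rectangle...
ctx.fillStyle = 'steelblue';
ctx.fillRect(10, 10, 120, 80);

// ...an arc (a full circle here)...
ctx.strokeStyle = 'black';
ctx.beginPath();
ctx.arc(200, 50, 40, 0, 2 * Math.PI);
ctx.stroke();

// ...and some text.
ctx.font = '16px sans-serif';
ctx.fillText('Hello, canvas!', 10, 120);
```

Each shape is drawn in a step-by-step manner: configure the brush, then issue the drawing call.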

3D Context

The 3D context is in the form of the WebGL API. WebGL allows for drawing any 3D geometry from any perspective or point of view. This allows for more complex shapes and geometries to be drawn.
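Getting hold of the 3D context follows the same pattern as the 2D context (again assuming a canvas with the hypothetical id "scene"):

```javascript
var canvas = document.getElementById('scene');

// 'experimental-webgl' is the prefixed name some browsers used
// before WebGL support was finalised
var gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');

if (!gl) {
  console.log('WebGL is not supported in this browser');
}
```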

Web Browsers

Most modern browsers, as well as their mobile counterparts, support WebGL. These include Google Chrome, Safari, Firefox, Opera, and even the latest versions of Internet Explorer. This is great for diverse distribution of your application.

I don’t know about you, but I like my browser like I do the rims on my car…Chrome.


WebGL

WebGL stands for “Web Graphics Library”. WebGL uses the canvas element to render drawings, just like the 2D context. WebGL is exposed as a JavaScript API, which is great because it works across a variety of operating systems and browsers. WebGL is based on the OpenGL ES (Embedded Systems) 2.0 specification, which means it is supported by common mobile device hardware. Best of all, WebGL is royalty free; no licence is required to use the API.

Hardware Acceleration

WebGL leverages the GPU (Graphics Processing Unit) for hardware acceleration. GPUs are geared towards graphics calculations and rendering. The aim is to calculate and render complex graphics as efficiently as possible – this is not a job for the CPU.

What’s in the box?

3D rendering consists of two main concepts, the Vertex Shader, and the Fragment Shader. This might sound complicated, but in essence:

  • The Vertex Shader is a position calculator. It handles the mathematics and calculations for converting points so that they are positioned correctly.
  • The Fragment Shader is a colour chooser. It determines what colour different elements in the 3D space should be.
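As a rough sketch, a minimal shader pair could look like the following. The shaders themselves are written in GLSL, but in WebGL they are typically stored as JavaScript strings and handed to the API for compilation:

```javascript
// Vertex shader: the "position calculator". It transforms each vertex
// into clip space using a combined model-view-projection matrix.
var vertexShaderSource = [
  'attribute vec3 aPosition;',
  'uniform mat4 uMVPMatrix;',
  'void main(void) {',
  '  gl_Position = uMVPMatrix * vec4(aPosition, 1.0);',
  '}'
].join('\n');

// Fragment shader: the "colour chooser". Here every fragment is
// simply painted solid red.
var fragmentShaderSource = [
  'precision mediump float;',
  'void main(void) {',
  '  gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);',
  '}'
].join('\n');

// In a real program these sources are compiled via the WebGL API, e.g.:
// gl.shaderSource(shader, vertexShaderSource); gl.compileShader(shader);
```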

Coding with Raw WebGL

WebGL is a low-level API that has brought amazing capabilities to modern browsers. As with any raw 3D development, there are some pains.

  • WebGL has many settings and configurations. One needs to understand what settings are required, how they are used, and what the correct setting should be. These are typically settings and configurations for the Vertex Shader and Fragment Shader.
  • Rendering simple 3D shapes with WebGL can be cumbersome due to the amount of code required. WebGL expects the developer to provide all the vertices of geometries (Creating a cube involves approximately 112 lines of code). Developing and debugging this can potentially be a nightmare.
  • Too much plumbing and mathematics, not enough fun. We all want to see our ideas come to life, and not be bogged down by details.

These pains can be avoided by using a 3D library like three.js, but before jumping straight into it, there should be a way to create and maintain a standard JavaScript project.

Standardised JavaScript Projects


When starting out with using a new technology, learning a new language, or building any project in general, it’s good to have a basic template to work from. Usually the template will take care of any settings that the project requires to run, managing dependencies, managing builds, etc.

When working with JavaScript, there are an abundance of tools that will be useful in developing a project.


Yeoman is a great tool for generating a standard JavaScript project. Yeoman includes generators that will provide you with a boilerplate project for most popular JavaScript frameworks like Node.js, Angular.js, and many more. Yeoman generates directory and file structures with default Grunt and Bower configurations for the respective project.


Grunt is a task runner used to build your project and check it for correctness in terms of syntax and the general semantics of JavaScript. Grunt can watch your JavaScript files and notify you of any issues with your code as they happen. Grunt also has a nifty lightweight HTTP server that can be used to live deploy projects and test them during development.
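As a rough sketch of what such a configuration looks like (the task names and paths here are illustrative; the Yeoman generator produces a fuller Gruntfile for you):

```javascript
// Gruntfile.js - a minimal, illustrative configuration
module.exports = function (grunt) {
  grunt.initConfig({
    // Lint these JavaScript files for syntax and semantic issues
    jshint: {
      all: ['Gruntfile.js', 'app/scripts/*.js']
    },
    // Lightweight HTTP server for testing during development
    connect: {
      server: { options: { port: 9000, base: 'app' } }
    },
    // Re-run the lint task whenever a script changes
    watch: {
      scripts: { files: ['app/scripts/*.js'], tasks: ['jshint'] }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-connect');
  grunt.loadNpmTasks('grunt-contrib-watch');

  grunt.registerTask('serve', ['connect', 'watch']);
};
```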


Bower is a dependency management tool: it allows dependencies to be imported and included in a project automatically. With Bower, there is no need to download third-party dependencies manually.

Setting Up a Three.js Project

The following commands will work on any UNIX-like system with Node.js and npm installed.

Install Yeoman

npm install -g yo

Install the Three.js generator

npm install -g generator-threejs

Make a new directory for your project

mkdir threejs-project

Navigate to your new project directory

cd threejs-project

Generate a Three.js project with Yeoman

yo threejs

Deploy your project with Grunt

grunt serve


Three.js

Three.js is a cross-browser JavaScript library for 3D programming. It allows developers to create 3D scenes and applications with ease. Three.js simplifies 3D programming by providing simple operations for common tasks, and handles all the mathematics and basic 3D setup and configuration for the developer.

With raw WebGL, it takes around 112 lines of code to create a simple cube. Three.js allows for this to be done in a single line of code.
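For example, a sketch of creating a cube (assuming three.js is loaded and a THREE.Scene named `scene` already exists):

```javascript
// One line of Three.js for the cube's shape (width, height, depth)
var geometry = new THREE.BoxGeometry(1, 1, 1);

// Wrap the geometry in a material and a mesh so it can be added to a scene
var material = new THREE.MeshBasicMaterial({ color: 0x00ff00 });
var cube = new THREE.Mesh(geometry, material);
scene.add(cube);
```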

There are a few concepts related to Three.js and 3D Programming in general that should be understood before embarking on a project.



The Scene

The scene is a container for 3D objects: it holds everything that exists in the 3D world.



The Camera

The camera is an object that cannot be seen by the user. The scene is displayed to the user from the perspective of the camera.



Controls

Controls could be a mouse, keyboard, touch events, gamepads, etc. Controls can be used to move and manipulate the camera or any other object in the scene.
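As a minimal sketch of keyboard controls, the key-handling logic can be kept as a plain function (the step size and key bindings here are just examples):

```javascript
// Map WASD key presses to camera movement deltas on the X/Z plane
function keyToDelta(key, step) {
  switch (key) {
    case 'w': return { x: 0, z: -step };  // forward
    case 's': return { x: 0, z: step };   // backward
    case 'a': return { x: -step, z: 0 };  // left
    case 'd': return { x: step, z: 0 };   // right
    default:  return null;                // ignore other keys
  }
}

// In the browser, wire it up to a Three.js camera (assumes a `camera`
// object is in scope):
// document.addEventListener('keydown', function (event) {
//   var delta = keyToDelta(event.key, 0.5);
//   if (delta) { camera.position.x += delta.x; camera.position.z += delta.z; }
// });
```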



Objects

Objects are 3D entities in the scene. These can be anything from something as simple as a cube to something as complex as 3D humanoids, vehicles, or buildings. It’s great that we can create objects, but what are they made up of?


  • Object Geometry: The geometry is the shape of the object. This includes all the points that make up the object as well as their positions relative to each other.
  • Object Texture: The texture is typically an image that is overlaid over the geometry. It gives the shape the aesthetics and effects that are required. It is the skin for the geometry.


The Renderer

The renderer is responsible for assembling the scene and its objects, and displaying them to the user from the perspective of the camera.
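Putting the pieces together, a minimal Three.js program could look like the following sketch (it assumes three.js has been included on the page):

```javascript
// Scene: the container for everything in the 3D world
var scene = new THREE.Scene();

// Camera: the scene is rendered from this point of view
// (field of view, aspect ratio, near and far clipping planes)
var camera = new THREE.PerspectiveCamera(
  75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 5;

// Renderer: assembles the scene and draws it onto a canvas
var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Object: a cube built from a geometry and a material
var cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshBasicMaterial({ color: 0x00ff00 })
);
scene.add(cube);

// Render loop: rotate the cube and redraw on every frame
function animate() {
  requestAnimationFrame(animate);
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
}
animate();
```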

Physics – Cannon.js

When developing 3D applications, more often than not, there is a requirement for physics, gravity, or collision detection.

It is tedious and difficult to write custom collision detection and physics code; again, we want more fun and less mathematics and plumbing.

Cannon.js is a useful physics API that is compatible with Three.js. Cannon allows developers to bind to existing Three.js 3D objects and perform physics calculations and manipulations on them. This is useful for simulating gravity and creating worlds where collisions actually have an effect, instead of objects passing through each other.
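A minimal sketch of this binding, assuming cannon.js is loaded and a Three.js mesh named `cube` already exists:

```javascript
// Set up a physics world with Earth-like gravity
var world = new CANNON.World();
world.gravity.set(0, -9.82, 0);

// A physics body shaped like the cube (half-extents of a 1x1x1 box)
var body = new CANNON.Body({
  mass: 1,
  shape: new CANNON.Box(new CANNON.Vec3(0.5, 0.5, 0.5))
});
world.addBody(body);

// Each frame: step the simulation, then copy the physics body's
// position and orientation onto the Three.js mesh
function updatePhysics() {
  world.step(1 / 60);
  cube.position.copy(body.position);
  cube.quaternion.copy(body.quaternion);
}
```

Calling `updatePhysics()` from the render loop keeps the visible mesh in sync with the simulated body.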


3D programming with JavaScript has many uses; here are just a few.

  • Great for game development
  • Interactive applications for marketing
  • Simulations
  • Write once, run almost anywhere


Demo

The demo is a 3D world that renders a Twitter hashtag or search term. The application listens for tweets containing the mentioned hashtag and populates the 3D world with the Twitter handles of the users who tweeted. The aim is to display the 3D world at events and watch as the world becomes more populated as the event goes on and more people tweet about it.

You can check it out here: http://www.prolificidea.com/tweetd.html

It works best on desktop, as WASD and the mouse are used to control the camera within the world.



Prototyping & Unity3D


I’ve been involved in quite a number of projects that required the development of prototypes in a very short period of time. These prototypes were meant to evaluate the viability of ideas, and sometimes to serve as proof that the application could be developed given the constraints. I recently gave a talk on rapid prototyping and Unity 3D at the Entelect DevDay. The talk comprised some basic theory of prototyping, as well as an overview of Unity 3D. The twist was that the presentation itself was a prototype I built with Unity 3D, and the content of the prototype illustrated the topics on prototyping that I spoke about. This article will highlight some of the theory and concepts of prototyping, as well as showcase the Unity 3D game that illustrated those concepts.


In this section of the talk, I cover the concepts around software prototyping, and some general theory and strategies in tackling the development of a prototype.

What is Software Prototyping?

Prototyping is the process of creating incomplete versions of the proposed software. That sounds bad – it’s incomplete! Well, the aim of prototyping is to simulate only a few key aspects of the solution to evaluate the viability of ideas in terms of cost and complexity. Often projects are taken on without completely understanding the effort behind achieving the requirements – prototyping can assist in the analysis of these requirements and in learning more about the problem domain.

Why Prototype?

  • Early user acceptance testing: Users get a chance to use and experience the product early in development. This results in early feedback from the user base and thus allows for changes to be implemented earlier rather than later. It’s well known that the cost of a change in a project increases significantly in the later phases of development.
  • Realise requirements and constraints that were not previously considered: By simulating some of the functionality for a product, the developer may realise side effects, constraints, or additional requirements that were not thought about. This assists in achieving a more complete and robust solution.
  • Better cost, time, and complexity estimates: By realising additional requirements and constraints early, as well as receiving user feedback early, one can make better complexity and time estimates – this overall results in better costing estimates.
  • Slaying the dragon: In software development, we speak about slaying the dragon – where a single team of heroes attempts to slay a large project/dragon. With software prototyping, we try to make slaying the dragon more like shooting fish in a barrel, by tackling the smaller pieces or the most complex features first.

The Process of Prototyping

  1. Identify Core Requirements: These are the requirements for the product.
  2. Develop Initial Prototype: Prototype the features that are important depending on the goal of the prototype. If complicated features with unknown possibilities exist, then tackle these first. If there are many simple features, try to simulate an experience across all these features without delving into the complexity of each.
  3. Evaluate and Review the Prototype: The developed prototype should be reviewed with the target user group. The performance of the features and usability should be evaluated and measured.
  4. Revise and Enhance the Prototype: Given the feedback from reviewing the prototype, enhancements and changes can be made.
  5. Repeat: If time permits, or unknowns still exist, the above process should be repeated.

Dimensions of Prototyping

Horizontal Prototyping

  • The aim is to provide a broad view of the entire system.
  • There will be little complexity in individual features.
  • This approach is good for websites and instances where a general feel for the product needs to be achieved. Typically these are applications targeted at the public, or applications that require intensive usability testing.

Vertical Prototyping

  • The prototype will focus on a small set of features, even one or two features.
  • The chosen features are explored and researched completely.
  • This approach is good for products where an obscure algorithm is used or something unusual or unorthodox is attempted. This is useful for applications where complex logic and processing is required.

Types of Prototyping

  • Throwaway Prototyping: This is also known as close-ended or rapid prototyping. It involves creating a working model of various parts of the system at a very early stage of development, after a relatively short investigation. This kind of prototyping is useful for showing users what a feature will look like; however, the code base or project is not necessarily used for the production version of the application.
  • Evolutionary Prototyping: The main goal when using Evolutionary Prototyping is to build a very robust prototype in a structured manner and constantly refine it. The prototype forms the heart of the production application and additional features are added to it.
  • Incremental Prototyping: In incremental prototyping, parts of the system are developed as separate prototypes and plugged together to form a complete application. It is important to develop the interfaces for the separate components early, as integration may turn out to be a nightmare.
  • Extreme Prototyping: Extreme Prototyping is employed mainly for web applications, usually in three phases.
  1. Static HTML is created – this gives users an instant tangible feel for the product.
  2. Thereafter, the service layer is simulated – this includes business rules and logic.
  3. Lastly, the actual service layer is developed – this involves creating a data layer as well as plugging into the front end HTML views.

This gives users an early view of the application without actual functionality behind it, as the backend gradually comes together as the process moves along.

Advantages of Prototyping

  • Reduced time and costs: By exploring the requirements and constraints, effort is better estimated.
  • Improved and increased user involvement: User involvement is important, and prototypes clear up misconceptions and expectations, as well as assist in gathering user feedback from the early stages of development.
  • Realise oversights, additional requirements, and constraints.

Disadvantages of Prototyping

  • Insufficient analysis: Confidence in a prototype could result in further analysis of features being abandoned. This could result in part of the system being well defined whilst the remaining parts are vague and incomplete. This can be controlled through correct processes in requirements analysis.
  • User confusion between the prototype and the finished system: If the final system is completely different to the prototype, users may be confused in how the application operates. This can be avoided by following the correct prototyping principles.
  • Expenses of implementing prototyping: Although prototyping saves cost in the actual development phase, there will be costs involved in implementing a prototyping phase. A prototyping phase should only be included in projects where it makes sense.

Unity 3D

Unity 3D is a cross-platform game engine with a built-in IDE and designer. Unity 3D allows for the development of games and interactive multimedia-rich applications. The approach is write once, deploy everywhere! Unity 3D allows for applications to be deployed to platforms such as iOS, Android, Windows, OSX, and even consoles. Unity 3D also has great support for running your application within a browser using its custom plugin.


Unity employs typical object oriented concepts when building applications. Objects are represented by some image/sprite, 3D model, or sound. Objects also have components with different behaviour attached to them – by plugging components into objects, we can achieve any behaviour we want. Unity 3D comes bundled with predefined logic for typical game related functionality such as physics, player controls, and effects – but what about custom functionality?


Unity allows you to write custom functionality in C# or JavaScript, using the MonoDevelop environment. MonoDevelop is a cross-platform IDE, which means that you can write Unity applications on a range of platforms such as Windows, Linux, and OSX.

What’s the point if we don’t develop games?

Unity can be useful for prototyping a basic idea and testing viability across different platforms, e.g. mobile devices. Screens can be easily mocked up, and small pieces of page navigation etc. can be added to receive early user feedback. Although Unity is a game engine, you can develop almost any kind of interactive application with it, fast…

…and of course, Unity 3D can be used to develop games! Almost everyone that’s a dev now had a dream in their youth to develop the next Super Mario or Duke Nukem.


During my talk, I then delved into the practical aspects of creating a new Unity 3D project, scenes, and objects. I also explained the use of components, assets, and creating custom components using C# and JavaScript.