This project showcases a tool aimed at level designers and artists. It allows them to create large-scale 3D game worlds using user-created assets. The designer can set a multitude of rules to customise the generation process, and worlds can be generated in seconds at the click of a button. This project was designed and programmed in Unity Engine using C#.
Developers can set custom rules and values to fine-tune their level generation process. This includes defining parameters to be taken into account, such as the colour of an object's connectors, the number of pins on each connector, and the connector's shape.
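To make this concrete, here is a minimal sketch of how such connector rules might be represented in C#. The type and field names are hypothetical illustrations, not the tool's actual API:

```csharp
using UnityEngine;

// Hypothetical sketch of connector matching rules.
// Names and fields are illustrative, not the tool's real API.
public enum ConnectorColour { Red, Green, Blue, Yellow }
public enum ConnectorShape { Square, Circle, Triangle }

[System.Serializable]
public class ConnectorRule
{
    public ConnectorColour colour;     // only matching colours may snap together
    [Range(1, 8)] public int pinCount; // pin counts must agree to connect
    public ConnectorShape shape;       // shapes must be compatible

    // Two connectors can snap when all three parameters agree.
    public bool Matches(ConnectorRule other) =>
        colour == other.colour &&
        pinCount == other.pinCount &&
        shape == other.shape;
}
```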
There is a selection of generation models which, when selected, will generate differently shaped levels. Each is suited to specific problems, be it labyrinthine, linear, or open-world areas.
This tool includes the ability to create large databases to house the assets required for generating a level. This ensures that assets specific to different aesthetics can be separated and organised efficiently.
For each object, the user can create a custom data structure at the click of a button so they can store the object in their databases. Each custom data structure gives the ability to set further rules related to that object, including the frequency of its appearances in generated levels, whether it is allowed to be chosen as the initial piece placed in the world, and even whether it requires a collider.
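As a rough illustration only (the names below are hypothetical, not the tool's real data structure), such an entry could look like a Unity ScriptableObject:

```csharp
using UnityEngine;

// Hypothetical sketch of a per-object database entry.
// Field names are illustrative, not the tool's actual data structure.
[CreateAssetMenu(menuName = "Level Generation/Piece Entry")]
public class PieceEntry : ScriptableObject
{
    public GameObject prefab;               // the asset this entry wraps
    [Range(0f, 1f)] public float frequency; // relative appearance rate in generated levels
    public bool canBeStartingPiece;         // eligible as the first piece placed in the world
    public bool requiresCollider;           // whether a collider should be added on placement
}
```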
An incredible addition to the generation process is the ability to paint connectors directly onto objects, which lessens the need to position them manually with the mouse. It is a powerful feature that can further streamline workflows and increase productivity. The user can change the size of the paintbrush and the density of connectors painted at a location, along with the parameters governing how the connector will look.
Another important feature is the save functionality. The user can save levels in multiple ways, such as generating a level and immediately saving it as a JSON file, which allows for quick regeneration at a later point. Alternatively, the user can generate a level and create a prefab object from it. Lastly, the user can use a seed value to recreate the same level any number of times, negating the need for saving.
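A minimal sketch of how the JSON and seed approaches could work in Unity, assuming a hypothetical LevelData type and generation hooks:

```csharp
using System.IO;
using UnityEngine;

// Sketch of two of the save strategies described above.
// LevelData and the generation loop are hypothetical stand-ins.
[System.Serializable]
public class LevelData
{
    public int seed;
    public string[] placedPieceIds;
}

public static class LevelSaver
{
    // Strategy 1: serialise the generated level to JSON for quick regeneration later.
    public static void SaveAsJson(LevelData level, string path) =>
        File.WriteAllText(path, JsonUtility.ToJson(level, prettyPrint: true));

    // Strategy 3: re-seed the random number generator so the same
    // sequence of placement decisions reproduces the same level.
    public static void RegenerateFromSeed(int seed)
    {
        Random.InitState(seed);
        // ...run the normal generation loop here...
    }
}
```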
In this video I go over the basic functionality of the tool and what it can create. Feel free to give it a watch and leave a comment on what you think. Any suggestions are welcome!
This tool shows scalability for creating large-scale levels in little time. The level of detail a generated level possesses depends almost entirely on the user. For example, each object can be set up with the relevant particle effects and animations, so produced levels will not be static.
Users can procedurally generate a small area, implement the interactivity, and then use this as a single entry in a database. Doing this can dramatically increase the size of a world, ensure the playability is already there, and allow more focus to be placed on granular details such as lighting or player mechanics.
Functionality for checking whether NPCs and players can fully navigate the generated space will be implemented. The ultimate focus of this tool is to retain high-quality level design and potentially enhance creativity while keeping resource expenditure, such as development time, down. Fully fleshed-out games are always in high demand, and now we could have another tool that helps with this.
I've further developed the idea of 'Snappable Meshes' by adding new features and improving its user-friendliness. I aim to continue making this a useful and scalable tool for developing new game worlds. This will certainly be helpful for my company and hopefully for other indie development teams!
Check me out on LinkedIn, where you can keep up to date on what I'm up to and where this tool is at. I hope to release it on the Unity Asset Store very soon.
Thank you to the Lusófona University team that created the tool my work is inspired by and based on. Please check out their paper, which explores the concept of 'Snappable Meshes', here:
https://ieeexplore.ieee.org/document/9760462
You can download their project here and see how it works:
https://github.com/VideojogosLusofona/snappable-meshes-pcg
I would also like to give a shoutout to my supervisor, Naman Merchant, for constantly challenging me on my methods and making sure I understood what I was talking about.
This is a simple Augmented Reality (AR) project which enables the user to destroy structures with projectiles. A rather complex mechanic integrated into the game flow is an explosive projectile activated by the player's audio input. To the side is an extremely basic demo of the game. Below is a breakdown of the mechanics and code snippets.
Developed in Unreal Engine 5 for Android mobile devices using C++.
I am currently updating my projects, so check back in a day or two; I will be updating as I go. If you're really excited and can't wait to read more about my work, then send me an email and I will get back to you to elaborate on any of my projects! Thank you :)
Where you can reach me: calcifer1996@outlook.com
This is an audio project which focuses on how audio and ambience are represented in a digital format and how humans navigate using audio stimuli. The Void demonstrates various components that are utilised in game contexts. Watch the walkthrough for the full effect.
This was written in C# and built in Unity Engine. There is a link at the
end if you would like to download the build and try it out yourself. Headphones
are almost a must for this.
Facial recognition is becoming an increasingly important area of development in various industries, including healthcare, entertainment, and marketing. If companies had the opportunity to capture and detect the real-time emotions of a consumer based on image or video capture, they would be able to make informed decisions regarding the success of their products.
On a more interesting note, there are medical benefits, such as being able to detect and extract data from features that correspond to identifying various cancer types and tumours.
Albeit working with cartoon character faces, this project was an interesting one to develop. It focused on the supervised learning of a model using the OpenCV library to identify the emotions present in facial imagery (otherwise known as image classification). The efficacy of the project was tested by occluding some of the image features to see how this affected model performance, while also drawing comparisons between the accuracy of utilising feature engineering versus using a list of detected facial landmarks.
Please read on if this has piqued your interest. A link to the code and my report is included at the end.
This project was written using Python.
This project makes use of the SVM algorithm. It is a binary classification algorithm which aims to find an appropriate decision boundary (hyperplane) in a high-dimensional feature space that separates different classes of data points, e.g. 'anger' and 'joy'. Its ability to handle a large number of features and its effectiveness in binary classification made it a good fit for this project. The model was trained on extracted facial features and the corresponding emotion labels to learn the underlying patterns and make predictions on unseen data.
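For reference, this is the standard soft-margin SVM formulation from the literature (shown for context, not lifted from my code):

```latex
% Soft-margin SVM: find the hyperplane (w, b) that maximises the margin,
% with slack variables \xi_i allowing some misclassification, traded off by C.
\min_{w,\,b,\,\xi} \; \tfrac{1}{2}\lVert w \rVert^{2} + C \sum_{i=1}^{n} \xi_{i}
\quad \text{subject to} \quad
y_{i}\,(w \cdot x_{i} + b) \ge 1 - \xi_{i}, \qquad \xi_{i} \ge 0
```

Here each x_i is an extracted feature vector, y_i ∈ {−1, +1} is the label for the pair of emotions being separated, and prediction is simply sign(w · x + b).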
The model was trained using a set of images portraying cartoon faces of varying emotions: anger, disgust, fear, joy, neutral, sadness, and surprise. To train the model properly, the problem of overfitting had to be mitigated. Overfitting is where the model memorises the training data rather than learning the underlying patterns or features, which can lead to exaggerated accuracy outputs.
The image data was divided into 2 categories: Training and Validation. Each set contained subfolders pertaining to each emotion previously mentioned for 2 characters: Malcolm and Merry. The Training set was used to train the model to learn the underlying patterns and gauge which extracted data related to specific features located on the face. During training, the model would extract the data and compare it to the emotion label. Essentially, we were giving the model the answers.
Once the model had undergone training, it was exposed to the Validation data, this time without the labels, to ensure that it had in fact learned how to identify emotions based on the extracted features. It was very successful, keeping in mind these are cartoon faces.
As can be seen in the image to the right above, the output showcases the locations of the facial structure used to determine landmark features such as the eyes, nose, upper lip, and so on. As would be expected, extracting data for more features leads to higher accuracy when determining emotions. To test this, the model underwent the same training and validation from scratch.
Overall, this was an extremely challenging yet interesting project to have worked on, and it can get rather detailed. If you're still interested, feel free to read my report. It's nothing fancy but will showcase my process.
I have also attached my project if you want to have a look at my code.
There are a few dependencies required to get it running, but if there's interest, give the Run-through doc a try. I go through the code and how to run it.
As this was my first time using Unreal, I wanted to explore the various functionality it presents to the user. This system consists of several components and is a decent example of a simple power ability that allows the player to interact with enemies. The components include a power ability, a stamina gauge, and 8-directional movement complete with animations.
This was written in C++ and built in Unreal Engine 5. There is a link at the end if you would like to have a look at the code, and a quick demo to the left.
This project explores the concept of DDA (Dynamic Difficulty Adjustment). Specifically, the game world responds to the player by altering parameters, resulting in an increase or decrease in difficulty. To achieve this, a reward system was programmed using ML-Agents in Unity Engine so an agent could be taught how to navigate an obstacle course. The player would then race against the agent, which would tailor itself to the player's performance.
This project utilises an open-source toolkit which presents developers with a way to train agents using Deep Reinforcement Learning. I used the behavioural component to create and train a neural network to do simple tasks such as moving from A to B and avoiding obstacles.
To train the agent, a reward system had to be integrated. Albeit simple, it became apparent that by rewarding and punishing the agent using arbitrary values, it could do some interesting things. However, by only adding and subtracting reward values, the agent could unlearn specific behaviours (see epsilon-greedy below).
The epsilon-greedy method is simple to implement but gives good results fast. By occasionally forcing random actions, the agent could be quickly pushed into exploring its immediate environment, taking risks for high reward or punishment. This stops the agent from becoming stationary or unlearning the behaviour of moving towards the goal; without it, if the agent realises that staying still or only inching towards the goal maximises its reward, it will settle for that behaviour.
In accompaniment with the epsilon-greedy method, I implemented functionality where the agent gets no reward if it stays put and a lesser reward if it moves away from the goal. All of this together quickly motivated the agent to learn how to move properly to the goal, even if it risks a loss.
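Below is a minimal sketch of what this reward shaping can look like in an ML-Agents Agent subclass. The reward values and distance checks are illustrative, not the exact ones used in the project:

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using UnityEngine;

// Illustrative reward shaping using the ML-Agents Agent API.
// Reward magnitudes and thresholds are hypothetical.
public class CourseAgent : Agent
{
    public Transform goal;
    private float previousDistance;

    public override void OnEpisodeBegin()
    {
        previousDistance = Vector3.Distance(transform.position, goal.position);
    }

    public override void OnActionReceived(ActionBuffers actions)
    {
        // ...apply movement from the action buffers here...
        float distance = Vector3.Distance(transform.position, goal.position);
        float progress = previousDistance - distance;

        if (Mathf.Approximately(progress, 0f))
            AddReward(0f);       // staying put earns nothing
        else if (progress < 0f)
            AddReward(-0.01f);   // moving away from the goal is penalised
        else
            AddReward(0.01f);    // progress towards the goal is rewarded

        previousDistance = distance;

        if (distance < 1f)
        {
            AddReward(1f);       // reaching the goal ends the episode
            EndEpisode();
        }
    }
}
```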
The agent can interact with the player by receiving information about them. As the player races against the agent, their performance is monitored. This ensures that the agent can be tailored to the player's ability level to maintain a challenging yet fair experience. The agent responds to the player by slowing down or by becoming less intelligent and making mistakes.
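As a hypothetical illustration of that feedback loop (names and values are mine, not the project's exact implementation), the adjustment could be driven by a smoothed performance score:

```csharp
using UnityEngine;

// Hypothetical sketch of the DDA feedback loop: a rolling measure of
// player performance scales the agent's speed and error rate.
public class DifficultyAdjuster : MonoBehaviour
{
    [Range(0f, 1f)] public float playerPerformance; // 0 = struggling, 1 = dominating
    public float minAgentSpeed = 2f;
    public float maxAgentSpeed = 6f;

    // Called at each checkpoint with how far ahead (or behind) the player is.
    public void UpdatePerformance(float playerLead)
    {
        // Smooth the signal so one mistake doesn't swing the difficulty.
        float target = Mathf.Clamp01(0.5f + playerLead);
        playerPerformance = Mathf.Lerp(playerPerformance, target, 0.1f);
    }

    // A struggling player faces a slower, more error-prone agent.
    public float AgentSpeed =>
        Mathf.Lerp(minAgentSpeed, maxAgentSpeed, playerPerformance);

    public float AgentMistakeChance =>
        Mathf.Lerp(0.3f, 0f, playerPerformance); // chance per decision to act suboptimally
}
```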
This premise of the agent adapting to the player is great because it doesn't only apply to games such as Hellblade: Senua's Sacrifice, where enemies respond to the player's abilities. Educational platforms that aim to teach skills and subjects to a wide range of people could integrate adaptable agents that dynamically change the level of difficulty based on the learner's level of knowledge or expertise.
If you would like to try your luck against the agent then you can download
the git repository using the link to the right. Keep in mind this is not a polished
gaming experience. It is an exploration of difficulty in games and how training an agent
can have an impact on the experience.
The repository is approximately 3 GB. I have included the build of the application so you can quickly run it and play, as well as the full project files for Unity.
To run the build, navigate to DDA->ProjectBuild2, double-click on DDA_Project, and you should be set to play. Use the WASD keys to move and the mouse to move the camera.
To open the project in Unity, navigate to DDA->DDA_Project from Unity Hub. Make sure to have Unity version 2022.2.2f1 installed. The code is all included, so you could view it on GitHub without downloading if you wish.
Email me if you have any problems or if you would simply like to get in touch. Make sure to connect on LinkedIn.
Al-Munya (Cordoba Journey) is a historical exploration game prototype in which the player solves environmental puzzles while searching for medieval Islamic artefacts from 12th-century Cordoba.
As the Team Coordinator of a multi-disciplinary team, I collaborated with an industry partner at Edinburgh University (Digital Lab for Islamic Culture & Collections) to create an educational prototype that would appeal to a global audience with a love of, or curiosity about, history.
This is a trailer for the game, but feel free to download the executable to have an explore and see what we worked on. This project was showcased at ReSIA (Research Seminar in Islamic Art) to continue illustrating the point that games are fundamental to the learning of today and an exceptional way to explore humanity's past.
As a Team Coordinator, I was involved in many aspects of development, including:
Have a go at the prototype! There is a link below to all of the source code, and another link to the build, where you can play through and explore what we made from scratch in such a short time (4 months).
The project was completed in Unity using C#.
There is a ReadMe file which instructs on how to run the game. Simply put, download the repository and double-click the Islamic_Villa_Munya.exe file.
As the first game I made by myself, it marks the beginning of me showcasing my ability to create a game using C++ and the SFML library. The process involved manipulating 2D graphics and spritesheets, as well as creating simple yet enjoyable game mechanics.
There is a link at the end to my GitHub repository, so feel free to download the build and give it a go!
Download the source code to have a look at my scripts or simply download the build of the
game to play. It is short and sweet!
This game was created using SFML and C++ in Visual Studio.
© 2025 Cal Gillies' Portfolio. All rights reserved.