
Recently I’ve pressed the ‘reset button’ on my plans for the future. I’ve seen, and heard, a ton of interesting things, which led to me needing this recent break.

Without dwelling on how I’ve finally started to defend myself, I’d rather focus on the positives. The short and sweet version is that I’m improving myself for the future again. Part of that is registering for new courses, to earn certifications beyond my matric, and part of it is feeling better about my personal projects again.

Today’s Goal

What I chose for step 1 is something minimal. The course I picked, to feel better in general, is giving me a crash course in Godot 4.3. I used to use Unity for my games, swapped to MonoGame to try reliving the XNA days I started out in, and even got as far as rendering without textures in Vulkan from scratch. I always want to “create more” and “go deeper” into every stack, and I decided, amongst the other changes, that’s what I’m changing.

As part of my rules, I’ll finally stick to a single project for a while. For this reason, with the first course – Complete Godot 3D: Develop Your Own 3D Games Using Godot 4 – I’m closing in on the first of its three games as a refresher. While I will post the game when I finish the first section, I also need a fun, arbitrary project here to get back to regular posts.

Introducing the research step for the GPT model I’m starting to create and train: Godot 4.3 GPT2. I had to revamp my plans after some hardware issues, but with the required fixes done, I decided this morning to dive into my own training. To keep it super minimal, I decided not to finish off my investigation into using Llama 3.2, but rather to step back and use a smaller model for starters.

Using 311_fine_tuning_GPT2.ipynb, I adjusted the minimal steps to fit my use case, and, as per usual, my training runs offline. While slower, it’s wonderful to experiment on my own hardware.
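To give a feel for the kind of minimal steps involved, here’s a rough sketch of a GPT-2 fine-tuning setup using the Hugging Face Trainer; the file path godot_docs.txt and all hyperparameters are placeholders, and the actual 311_fine_tuning_GPT2.ipynb notebook may differ in its details:

```python
# Minimal GPT-2 fine-tuning sketch with Hugging Face transformers.
# godot_docs.txt is a placeholder for the documentation text corpus.
from transformers import (
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    TextDataset,  # deprecated, but still common in older notebooks
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Chunk the raw documentation text into fixed-length training blocks.
dataset = TextDataset(
    tokenizer=tokenizer,
    file_path="godot_docs.txt",  # placeholder path
    block_size=128,
)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="godot-4.3-gpt2",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    save_steps=500,
)

Trainer(
    model=model,
    args=args,
    data_collator=collator,
    train_dataset=dataset,
).train()
```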

Today’s training run took around 4h 55m: taking the text from the Godot 4.3 documentation (links and info below) and training the model on it. For the testing questions, I used a few sample ideas I got from Copilot and a few I thought of myself. While this doesn’t describe a lot, below is my ipynb output, and at the end I share my review and thoughts. The source shared is cleaner, and results may vary depending on your computer’s hardware.
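Running the testing questions against the fine-tuned model can be as simple as a small generation loop; the questions below are invented examples to show the shape of it, not the ones I actually used:

```python
# Sketch of running test questions against the fine-tuned checkpoint.
# The questions here are invented examples, not the notebook's actual set.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("godot-4.3-gpt2")  # local output dir
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

questions = [
    "How do I add a child node in Godot 4.3?",
    "What does the _process function do?",
]
for q in questions:
    inputs = tokenizer(q, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=80,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```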

The final result, while not great at all, is on HuggingFace: edg3/godot-4.3-gpt2. My intention is to take it further in the near future with better training; I’m just comfortable with the progress so far, as I went in slightly blind.
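Since it’s a GPT-2 fine-tune, the standard text-generation pipeline should be able to pull the checkpoint straight from the Hub if you want to poke at it yourself:

```python
# Load the published checkpoint from the Hugging Face Hub and generate.
from transformers import pipeline

generator = pipeline("text-generation", model="edg3/godot-4.3-gpt2")
result = generator("How do I create a RigidBody3D?", max_new_tokens=60)
print(result[0]["generated_text"])
```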

[Screenshot: 01.train-gpt2 after mid-way]

Steps

On the first experiment (01.train-gpt2.ipynb), you’ll see I feel I could give tiny points for some of the answers. It wasn’t that they were accurate answers to the questions; rather, the small points were for fragments of the training data the model had learned. The notebook went from a messy version, to a slightly cleaner version, to jumping straight into a manual version to achieve the best results as a proof of concept.

I was happy with the proof of concept’s tiny-training-data-style answers, so I decided I needed to work out a better training dataset. Moving from 01.train-gpt2.ipynb to 02 (shown below), I saw vast improvements, as well as far more excessive hallucinations, though I believe that’s alright. I tried to keep the structure readable and understandable.
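The exact dataset structure I used isn’t shown here, but a readable question/answer layout for GPT-2 training text could look something like this hypothetical sketch, where the pairs, the file name, and the Question/Answer labels are all assumptions for illustration:

```python
# Hypothetical sketch of restructuring docs into a Q&A-style training file;
# the actual format used in 02.train-gpt2 may differ.
pairs = [
    ("What is a Node3D?", "Node3D is the base class for all 3D nodes in Godot."),
    ("How do I attach a script?", "Select the node and use the attach-script button."),
]

with open("godot_qa.txt", "w", encoding="utf-8") as f:
    for question, answer in pairs:
        # One readable block per sample, separated by GPT-2's EOS marker.
        f.write(f"Question: {question}\nAnswer: {answer}\n<|endoftext|>\n")
```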

[Screenshot: mostly using the GPU for training]

Note: I removed the warnings about deprecated features and things not runnable on my system, as you don’t need them to see what was done for this experiment.
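If you’d rather keep that noise out of your own notebook output programmatically, rather than trimming it by hand, one general approach (not what my notebook does) is:

```python
# One way to keep deprecation noise out of notebook output;
# a general approach, not what 02.train-gpt2 actually does.
import warnings
from transformers.utils import logging

warnings.filterwarnings("ignore", category=FutureWarning)
logging.set_verbosity_error()  # only show actual errors from transformers
```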

So, as you can tell from the experiment so far, I have an interesting idea that could lead to a useful game-development GPT2 for Godot 4.3. I’ll likely do more training soon, once I work out the smaller kinks in the idea, and possibly move it to a more versatile GPT model.