
It was wonderful to see my old colleagues from my time at Microsoft again, and to catch up with my friends from Geekulcha; it's always great to attend one of their hackathons. I've been meaning to post this since Friday the 15th, oops.

For safety, I used my ancient laptop at the venue on event day. I know their events are super safe, but I'll always have the paranoid mindset. The unfortunate result: I couldn't show the project working to the judges. That's alright; I'll likely have better hardware next time so I can compete properly. I'm still proud that the first usable step came together in under 3 days, with my teammate contributing very little.

The key feature they showed off was Copilot Workspace, which they said attendees would get free for a year. I'll do another post in a day or two, alongside my NaNoWriMo-esque month, to show off Copilot Workspace.

Copilot Workspace

Copilot Workspace looks like it will be a fabulous tool for quickly getting prototype projects together from scratch, and then for fine-tuning code projects up to release status. You can see videos and GIFs of it in action on the GitHub Next page.

My struggle, with my disability, is remembering the names of things. Over time I've managed to rebuild my memory of people's names, and functions and classes in code are named entities too. I'm slowly regaining traction by using LM Studio with a few models I swap between (Qwen 2.5, Llama 3.2, and some larger code-oriented models). It has been tremendous for bringing my years of know-how back within reach so I can build big solutions faster. Hence, I'm looking forward to when I can play with the Workspace.
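As a rough illustration of that workflow (the port, model name, and prompt wording are all my assumptions here, not something from my actual setup): LM Studio's local server speaks an OpenAI-compatible chat-completions API, so a quick "what was that function called again?" helper can be a tiny script.

```python
import json
from urllib import request

# LM Studio's local server defaults to port 1234 and exposes an
# OpenAI-compatible chat-completions endpoint. Adjust to your setup.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_payload(question: str, model: str = "qwen2.5-1.5b-instruct") -> dict:
    """Build a chat-completions request body for a locally hosted model."""
    return {
        "model": model,
        "messages": [
            {
                "role": "system",
                "content": (
                    "You are a coding assistant. When asked, recall the names "
                    "of functions and classes and show a short usage example."
                ),
            },
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,
    }

def ask(question: str) -> str:
    """POST the question to the local LM Studio server and return the reply."""
    req = request.Request(
        LMSTUDIO_URL,
        data=json.dumps(build_payload(question)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Nothing fancy: the point is that the model runs locally, so the "what was X called?" questions never leave the machine.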

Copilot in VSCode, choosing models like 4o

My key interest in the next Copilot stages has been through early access to the mixed-models approach for Copilot, for code specifically, through VSCode. Swapping between the 4 available models has been tremendous for working out better ways to code, with more data-safety-oriented solutions. It helped me put together my full plan for the hackathon.

As for the hackathon, there was 1 goal: create an app that helps people, with Copilot. I tried to join a team without much luck, so I fell back on the ideas I had suggested to others. You'll see in the commits on ActiveRecoveryAssist, which is at a very pre-alpha status, that I did it mostly solo. My teammate was a nice person, just not ready for full-stack dev. In around 2 to 2.5 days of odd hours of work, roughly 2 hours at a time, I got the proof of concept shown in the image below. The last step that got it all working together unfortunately landed just after the judges had voted on which projects would be shown off to pick the winner. I'll be doing way more solo dev the next time I'm invited to an event like this.

I ask other devs who join hackathon teams: please install what you're asked to before the event, like Visual Studio 2022 with the 3 requested features selected. It would have been useful to finalise the project and compete properly. I did try to join other teams, but everyone just went dead quiet. Yes, my capable personal laptop is no longer usable, and I personally never want to use my work laptop at public events if I can avoid it. Next time I'll just do more myself, from day 1, instead of trying to join a team to help others.

On the evening of Tuesday the 12th, I gave up waiting and started. First came the ipynb for the model. Then, over the evening of the 13th and the morning of the 14th, I got everything in place as best I could. I accidentally didn't save the compiled APK anywhere I could access it remotely, and my personal laptop struggles to build APKs, so we had nothing to show or make a video of. I had other issues during the event where I had to reinstall a ton of things, and had it almost feature-ready again by the time we had to submit videos.

It took only 3 small commits on the 15th, and then I called it quits. It still needs adjustments and fixes, but they didn't fit in the hackathon time; otherwise I could have shown a more robust solution, as planned. I like to help fellow hackathon attendees learn more when I can. I'm just sad I thought I could join another team; next time I'll dive in solo, for the first time in ages.

Using Qwen/Qwen2.5-1.5B, without training, only added prompting

The test questions and answers have been decent. It took swapping between several models, and finding a smaller model that can be trained further later, but it works well enough to prove the concept with prompting alone.
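For anyone curious what "without training, only added prompting" can look like in practice, here's a minimal sketch. Qwen2.5 instruct models use the ChatML layout, so prompting-only means wrapping each question in a fixed system prompt before generation. The system prompt text below is purely hypothetical, not the one from my project.

```python
# Hypothetical system prompt; the real project's prompt differs.
SYSTEM_PROMPT = (
    "You are a supportive recovery assistant. Answer briefly, and always "
    "remind the user to consult a professional for medical decisions."
)

def to_chatml(question: str) -> str:
    """Format a single-turn conversation in the ChatML layout Qwen2.5 expects.

    The trailing assistant header tells the model where to start generating.
    """
    return (
        f"<|im_start|>system\n{SYSTEM_PROMPT}<|im_end|>\n"
        f"<|im_start|>user\n{question}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )
```

Because all the behaviour lives in the prompt string, swapping models during testing only means checking each one's chat template, with no retraining in between.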

Not that you can tell, but this wasn't using my WSL2 setup with CUDA enabled. I didn't feel like fixing, just for this blog post, the one pip install requirement that gave me errors in the 03 ipynb, which I use for training.

With all that said, it was mostly a super fun experience getting to know the new Copilot development features! I've also got the full-stack proof of concept together that all my personal projects can build on: run a model somewhere, with my own methods for sending it messages in my own style of prompt magic. I'll just hang in there while working out how to get my better hardware again.