# LLM code assistance
We will set up LLM code assistance in our editor so that the LLM can interact with our code directly. You are probably already using an LLM agent; integrating it into the editor eases the interaction and saves you copy & paste time.
To use LLMs directly in our editor, we have two options:
Running a local LLM on our computer
Using cloud services from companies like Mistral, OpenAI, Anthropic, etc.
Without a powerful graphics card or neural-network accelerator, the first option will probably be very slow. On the other hand, most cloud services are not free to use, including OpenAI's. Even though ChatGPT is free to use in the browser, using it as a remote cloud application, i.e., through their API (application programming interface), is not free.
Groq, an American AI company, has a free tier that allows about a thousand requests per day for some models, e.g., openai/gpt-oss-120b, which should be sufficient for an educational setting. If you run out of credits, you can try another provider or simply keep using the free LLM that you have already been using through your web browser, e.g., Mistral, ChatGPT, etc.
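To make "using their API" concrete: behind the scenes, an API request to a cloud LLM is just an HTTP call that carries your API key in a header. The sketch below shows such a call in C using libcurl against Groq's OpenAI-compatible chat endpoint. You do not need this for the editor setup that follows; it only illustrates what an extension does for you under the hood. Treat the endpoint and model name as assumptions to check against Groq's current documentation.

```c
/* Minimal sketch of a raw cloud-LLM API call with libcurl.
 * Assumptions: Groq's OpenAI-compatible endpoint and the model name below.
 * Replace YOUR_API_KEY with your own key (created in the next section).
 * Build with: gcc groq_demo.c -lcurl -o groq_demo */
#include <stdio.h>
#include <curl/curl.h>

int main(void) {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    /* The API key travels in an Authorization header. */
    struct curl_slist *headers = NULL;
    headers = curl_slist_append(headers, "Content-Type: application/json");
    headers = curl_slist_append(headers, "Authorization: Bearer YOUR_API_KEY");

    /* A chat request: a model name plus a list of messages. */
    const char *body =
        "{\"model\": \"openai/gpt-oss-120b\","
        " \"messages\": [{\"role\": \"user\","
        " \"content\": \"Say hello in one short sentence.\"}]}";

    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://api.groq.com/openai/v1/chat/completions");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

    /* Without a write callback, libcurl prints the JSON reply to stdout. */
    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        fprintf(stderr, "Request failed: %s\n", curl_easy_strerror(res));

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```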
Another alternative is GitHub Copilot, which is free for education after you apply for a license through GitHub Education. Note that the Copilot extension can only be installed in VS Code, not in VSCodium, so you would have to switch to VS Code if you go for Copilot.
In this tutorial, I used a plugin called Continue. It lets you connect to lots of different LLM providers, which makes the whole setup feel more open and flexible. The disadvantage is that it might not be as reliable as the official AI assistant GitHub Copilot.
Tip
If you don’t like the user experience of Continue, then try the GitHub Copilot extension. I have not spent enough time to compare Copilot and Continue.
Let us first get an API key from Groq and then use it in the LLM extension Continue.
## Creating a Groq API key
Go to https://console.groq.com.
Create an account. You will be logged in to the dashboard after creating an account.
Click on Create API key. You will immediately be shown an API key. Copy the API key.
## Installation of Continue
In Code, install the extension Continue - open-source AI code assistant. It may take about 1 minute.
After the installation, you will see a settings icon to the right of Auto Update on the extension's tab and also on the Continue entry in the extension list. You will also notice the new Continue icon on the activity bar.
## Configuration of Continue
First, move the Continue icon from the activity bar to the secondary sidebar on the right, as shown here.
We will now configure the plugin.
(Optional) If you want to disable sending usage data to Continue:
Click on one of the settings icons that belong to Continue. A menu will pop up.
Click on Settings. The Settings tab will open up, filtered for the settings of Continue.
Go to Telemetry Enabled and opt out of telemetry if you desire.
Close Settings.
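If you prefer text-based settings, the same checkbox corresponds to an entry in the editor's settings.json. The key name below is an assumption on my part; verify the exact name via Copy Setting ID in the setting's context menu before relying on it.

```jsonc
// Assumed setting ID for Continue's telemetry checkbox - verify it in your
// installation via "Copy Setting ID" in the Settings tab.
{
  "continue.telemetryEnabled": false
}
```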
Now we will configure which LLM we want to use:
In the Continue window, click on Or, configure your own models. A configuration window will open up.
Right below the Connect button, click on Click here to view more providers. The Add Chat Model window will pop up.
Select the provider Groq.
In the model drop-down menu, select Autodetect.
Paste the API key that you copied earlier into the corresponding field.
Click Connect. The tutorial file continue_tutorial.py and the text-based configuration file config.yaml will open up in new tabs. You can close these two files.
Additionally, the Models window will be open. Select openai/gpt-oss-120b for Chat, Apply, and Edit.
Click on Models to close the model settings.
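For orientation, the generated config.yaml may look roughly like the sketch below. This is an illustration of the idea, not the exact schema: field names and structure vary between Continue versions, so compare it with the file that Continue actually generated for you.

```yaml
# Rough sketch of a Continue config.yaml after the steps above (assumed
# field names - compare with your generated file).
name: my-assistant
version: 1.0.0
models:
  - name: Groq gpt-oss
    provider: groq
    model: openai/gpt-oss-120b
    apiKey: YOUR_GROQ_API_KEY
    roles:
      - chat
      - edit
      - apply
```

Keeping the configuration in a plain text file like this has a nice side effect: you can read, version, and share your assistant setup like any other code.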
## Usage examples: explanation and formatting
We are going to try the agent on our previous code. So click back to the tab with the source file you have written before – the pi estimator.
On the Continue tab, if a subwindow called Create Your Own Assistant is open, you can close it.
We will ask the agent to explain the code for us. Before you do, take one minute to go through the code line by line and try to guess how the program creates its output. It is completely acceptable if you don’t understand most of the lines. You will gradually improve. Continue after your try.
Write explain line by line in the prompt field and press Alt+Enter. Without Alt, Continue does not send your code to the agent. Compare your explanation with the agent’s.
Warning
I tried the following with another model. The output may be different for you.
Now write format code. The LLM should reply with formatted code. I got the following:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main() {
    int total = 1000000;
    int inside = 0;

    srand(time(0));

    for (int i = 0; i < total; i++) {
        double x = (double)rand() / RAND_MAX;
        double y = (double)rand() / RAND_MAX;
        if (x * x + y * y <= 1)
            inside++;
    }

    double pi = 4.0 * inside / total;
    printf("Estimated pi: %f\n", pi);

    return 0;
}
```
Even though I only prompted format code, the model also added the line return 0;, which is not required (since C99, reaching the end of main implicitly returns 0) but can be part of an explicit code style. When I tried another model, llama-4-maverick, it did not include return 0;. So pay attention: review the changes block by block and ask different models when in doubt.

You will notice that the formatting looks different from that of the language server we used. The LLM is able to recognize blocks that logically belong together and insert blank lines accordingly, which is more advanced than the clangd formatter we used. Nevertheless, the clangd formatter should be just fine for daily work. I used this example only to showcase how an LLM differs from a formatter.
To apply the changes: in the top right corner of the code reply, click on Apply. Accept | Reject blocks will appear in your code. As recommended before, accept or reject block by block – especially if you are in the process of getting to know a model.
You probably already know that ChatGPT can keep a memory of your preferences and that you can customize it with custom instructions like prefer being direct over being nice when giving feedback. You can also customize your code assistant’s traits. To see how to do that in Continue, click here for details.
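As a taste of what such customization looks like, Continue's text-based configuration can carry short instruction strings, roughly like the sketch below. The key name rules is an assumption based on the config.yaml format at the time of writing; follow the link above for the authoritative description.

```yaml
# Assumed excerpt of config.yaml: short custom instructions for the assistant.
rules:
  - Prefer being direct over being nice when giving feedback.
  - Explain C code at a beginner level and avoid advanced idioms.
```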
## LLM-based inline code suggestions can affect your learning negatively
One of the learning goals of this course is to be able to identify and explain core programming concepts without an LLM, so that you can criticize the output of an LLM later when you become more proficient. For learning programming concepts, you have to face some difficulty and not use an autopilot. My recommendations for reaching the learning goals of this course are:
First write the programs yourself and ask the LLM only as a second step, for feedback, improvements, or explanations.
If the extension you use provides code suggestions while you write code – called inline code suggestions – deactivate this feature. Compared to typical code completion by the IDE, LLM-based suggestions are long and detailed. This feature can be too much help for learning in the beginning. After you become confident, e.g., once you are able to write programs yourself, you can carefully activate this feature again.
From what I have heard, GitHub Copilot shows inline code suggestions as ghost text by default. Search the settings for copilot disable inline suggestions to disable it.
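If you cannot find a Copilot-specific switch, VS Code also has a general setting that turns off ghost-text suggestions editor-wide; a minimal settings.json entry is sketched below. editor.inlineSuggest.enabled is a standard VS Code setting, but whether you prefer the editor-wide or a Copilot-specific toggle is up to you.

```jsonc
// In settings.json: disables inline (ghost-text) suggestions editor-wide.
{
  "editor.inlineSuggest.enabled": false
}
```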
Continue’s inline suggestions work only with particular LLMs; in our configuration, they are not activated. Continue suggests code changes only after you ask in the chat window, and these changes must be applied with mouse clicks. First try to understand the changes, and then type them manually until you become more confident.