AI has taken the software development world by storm. Spend any time on Twitter/X, and you’ll see opinions ranging from “AI is ruining coding” to “AI is the future and will replace developers entirely.” The truth, as always, is more nuanced than either extreme.
I first became fascinated with AI technology when I stumbled upon “Jarvis” (now “Jasper”) back in early 2021. At the time, I was building websites with the then-beta Nuxt 3, and the hardest part for me was adding copy to a website. Copywriting wasn’t my strong suit, so when I saw that AI could fill that gap, I was hooked. It became an obsession.
I started hunting for tools that could do the same for me in code, paying close attention to the AI space. One of the first AI-powered tools I tried was TabNine. I was a bit disappointed with it. I tried a few more and eventually got onto GitHub Copilot, but the release of ChatGPT was a game-changer.
The first thing I had it help me build was a Datepicker component in Vue. It wasn’t perfect, but it did the job at the time and it was something I wouldn’t have attempted on my own before. For me, this was magic. This was what I was looking for; though somewhat primitive, I could see the potential and the future, and it was AI.
Over the past two years, I’ve been refining how I integrate AI into my coding workflow and learning the best ways to weave it into business processes and applications. Along the way, I’ve seen many posts from people saying, “AI is trash,” or “Look at this code AI gave me; it’s no good at all.” I kept thinking to myself: You just don’t know how to use it correctly.
In this blog post, I'll share my tried-and-tested workflow for integrating AI into your coding process, helping you unlock its full potential. Whether you're a seasoned developer or just starting out, these insights will guide you in using the power of AI effectively.
I see AI like a trusted guide on a hike: it helps me find the path when I’m unsure, filling in gaps and pointing me in the right direction. But I still need to know the basics—how to walk the trail and climb the hills.
While this post won’t focus on tools like Copilot, we’ll first explore the basics of how LLMs work, then how to properly prompt them, and finally some common pitfalls to help you effectively use AI in your Vue coding workflow. Although the focus is on Vue, these principles can be applied to any coding environment.
Now that we understand the role AI can play as a supportive guide, let's discuss when and why to use AI in your workflow, and how to strike the right balance between AI assistance and manual coding.
When I’m building something, I usually have a clear idea of what I want to achieve. Whether it’s adding a new feature or building a component, having a clear vision from the start is crucial. AI can be incredibly helpful in these situations, but it works best when you have a defined plan.
AI is most effective when you already know what you're building or adding to a project. If you're less certain or just brainstorming, AI can still be useful—especially for generating ideas or researching possible directions. Just like humans, LLMs require context. If they lack the right information, they will try to infer what you want, often leading to poor quality outputs and user frustration.
AI is great for providing quick answers or code snippets when you know exactly what you need. However, if you’re inexperienced, relying too much on AI can be risky. When you don’t fully understand the generated code, it can lead to a messy and unmanageable codebase.
In my own Python learning journey, AI was crucial in helping me grasp the basics. When I needed a quick answer or a snippet to solve a small problem, AI was excellent. However, relying too much on AI when dealing with more complex problems often caused frustration. Bugs, outdated syntax, and other issues that linters couldn’t catch led me to spend more time debugging than learning.
AI-generated code can sometimes be convoluted, filled with unnecessary try blocks, custom error handling, or redundant checks. This experience taught me that while AI can help accelerate learning, it’s essential to balance AI use with traditional coding practice to truly master a language.
AI can be a powerful learning tool, but you need to actively engage with it. For example, if AI writes composable functions for your Vue project that you’re unfamiliar with, take a moment to ask the AI to explain them: when to use them, where to use them, and why. This approach allows you to gain deeper insight quickly.
I’m not suggesting avoiding AI altogether while learning—just use it strategically. Once you grasp the fundamentals, it becomes easier to leverage AI effectively, and it transitions from being a crutch to being a valuable assistant.
Now that we've discussed when and why to use AI, let’s explore how LLMs actually work. Understanding their mechanics will help you make the most out of AI while avoiding common pitfalls.
To use AI effectively, especially in coding, it’s crucial to understand how Large Language Models (LLMs) work. At their core, LLMs are sophisticated token prediction machines: they predict the next word or token based on the context you provide. Not many people focus on this point, but context makes all the difference in the quality of the outputs you get.
To better understand why prompting and context are important, let's use the following example:
Imagine you’re steering a boat through a canal. The canal represents all the possible responses the AI can generate based on your prompt. If your prompts are clear and specific, you’re keeping the boat centered, narrowing the range of possible paths the AI might take next. But if your prompt is vague, the canal widens, and the boat can drift toward less relevant or incorrect paths.
This is how an LLM works—it predicts the next token based on probabilities. The more specific and context-rich your prompt, the narrower the range of possible tokens, which improves the quality of the output. If your instructions are unclear or ambiguous, the model has more "freedom" to choose from a wide range of possibilities, increasing the chances of an irrelevant or inaccurate response.
This also explains why LLMs can "go off the rails" if the generated output starts to stray from the correct path. Once the wrong path is taken, the model keeps predicting based on the new (wrong) context, often resulting in a spiral of bad code. The solution is to reset with a fresh context—clear the slate and steer the boat back into the center of the canal with a precise, well-structured prompt.
To make the concept of LLMs as token predictors even clearer, think of it like autocomplete on your phone. When you start typing a sentence, your phone predicts the next word based on what you've typed so far. If you type “I’m going to the,” your phone might predict “store” or “park” because those are common next words. But if you type "I’m going to the under,” your phone gets confused, offering strange completions because it's less sure of what you're trying to say.
LLMs operate on a much grander scale, predicting not just words but smaller units of meaning, called tokens, that make up text. When you give an AI a prompt, it starts narrowing down all possible responses by predicting each token one at a time, just like autocomplete. But if your prompt is vague or lacks context, the model has more room to make guesses, increasing the chance of irrelevant or incorrect predictions.
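The narrowing effect described above can be sketched in code. The toy “model” below picks the most likely next token from a hand-written probability table; the contexts, tokens, and probabilities are invented purely for illustration and bear no resemblance to how a real LLM stores its weights.

```typescript
// Toy next-token tables: for a given context, the probability of each
// candidate continuation. All values here are made up for illustration.
const nextTokenProbs: Record<string, Record<string, number>> = {
  // Vague context: probability mass is spread out (a "wide canal").
  "I'm going to the": { store: 0.3, park: 0.25, gym: 0.2, moon: 0.25 },
  // Specific context: almost all the mass sits on one continuation.
  "I'm going to the grocery": { store: 0.9, aisle: 0.05, list: 0.05 },
};

// Greedy decoding: always pick the highest-probability next token.
function predictNext(context: string): string {
  const probs = nextTokenProbs[context];
  return Object.entries(probs).reduce((best, cur) =>
    cur[1] > best[1] ? cur : best
  )[0];
}
```

With the vague context, the top choice barely beats the alternatives, so a real model sampling from that distribution would often wander; with the more specific context, nearly all the probability sits on `store`. That is what adding context to a prompt does: it reshapes the distribution so the likely tokens are the ones you actually want.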
Think of the outputs from an LLM as a reflection of your knowledge. The less you know about a subject, the harder it is to get good responses. If you’ve never written Vue code before and you naively ask for a random component, it may generate code for use with a script tag, or it may give you Vue 2, Options API, or some other syntax that you don’t want in your project. Without foundational knowledge, “garbage in equals garbage out.”
Having a fundamental understanding of how an LLM works will go a long way in getting better outputs. While I’m not going to dive deep into that in this post, I highly recommend watching some videos by Andrej Karpathy on how LLMs work. His teaching style is fantastic, and you’ll certainly become better at utilizing AI after watching some of his content.
Now that we understand the basics of how LLMs function and their strengths and limitations, let's dive into how we can effectively integrate AI into our coding workflow. By following a structured approach, we can maximize the benefits of AI while avoiding common pitfalls.
When it comes to prompting, you need to be clear about what you want. Even with the most advanced models, if you are ambiguous about what you are looking for, your outputs are going to be subpar. The better you understand how to prompt the model, the better your outputs will be.

The prompt is vital to getting usable outputs, whether you are using AI as an assistant while coding or for any other task. Even the frontier models need a good prompt.
Another key factor is the system prompt. This is information that is sent to the model on every request; in theory, it carries a bit more weight than the user prompt and sets the stage for the output you receive. In ChatGPT, the system prompt takes the form of custom instructions, and not all tools give you access to it.
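In tools that do expose it, such as the OpenAI chat API, the system prompt is simply the first message sent with every request. Here is a minimal sketch of how such a request might be shaped; the instruction text and the `buildRequest` helper are illustrative, not part of any SDK.

```typescript
// Shape of a chat-style request: the system message rides along with
// every call and frames how the user message is interpreted.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

function buildRequest(systemPrompt: string, userPrompt: string) {
  const messages: ChatMessage[] = [
    { role: 'system', content: systemPrompt }, // always sent, sets the stage
    { role: 'user', content: userPrompt },     // the per-request ask
  ];
  return { model: 'gpt-4o', messages };
}

const request = buildRequest(
  'You write Vue 3 components using <script setup> and TypeScript.',
  'Generate me a Button for my Vue project.'
);
```

Because the system message is repeated on every request, it is the natural home for standing instructions like code style, framework version, and formatting rules, leaving the user prompt free to describe just the task at hand.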
Now let's get into the practicality of all of this with some examples.
Here is an example of a bad user prompt with no system prompt, using gpt-4o-2024-08-06:

```
Generate me a Button for my Vue project.
```

Here is the result from that prompt:

To create a button component for your Vue.js project, you can define a simple Vue component. Below is an example of how you might create a reusable button component.

First, create a new file for your button component, for example, `MyButton.vue`.
```vue
<template>
  <button :class="buttonClass" @click="handleClick">
    <slot></slot>
  </button>
</template>

<script>
export default {
  name: 'MyButton',
  props: {
    type: {
      type: String,
      default: 'button',
    },
    variant: {
      type: String,
      default: 'primary',
    },
    disabled: {
      type: Boolean,
      default: false,
    },
  },
  computed: {
    buttonClass() {
      return {
        'btn': true,
        [`btn-${this.variant}`]: true,
        'disabled': this.disabled,
      };
    },
  },
  methods: {
    handleClick(event) {
      if (!this.disabled) {
        this.$emit('click', event);
      }
    },
  },
};
</script>

<style scoped>
.btn {
  padding: 10px 20px;
  border: none;
  border-radius: 4px;
  cursor: pointer;
  font-size: 16px;
  transition: background-color 0.3s;
}

.btn-primary {
  background-color: #007bff;
  color: white;
}

.btn-secondary {
  background-color: #6c757d;
  color: white;
}

.btn-success {
  background-color: #28a745;
  color: white;
}

.btn-danger {
  background-color: #dc3545;
  color: white;
}

.btn:disabled,
.btn.disabled {
  background-color: #cccccc;
  cursor: not-allowed;
}
</style>
```
You can now use this button component in your Vue application. Import it into the parent component where you want to use it.
```vue
<template>
  <div>
    <MyButton variant="primary" @click="handleButtonClick">Primary Button</MyButton>
    <MyButton variant="secondary" @click="handleButtonClick">Secondary Button</MyButton>
    <MyButton variant="success" @click="handleButtonClick">Success Button</MyButton>
    <MyButton variant="danger" disabled @click="handleButtonClick">Disabled Button</MyButton>
  </div>
</template>

<script>
import MyButton from './MyButton.vue';

export default {
  components: {
    MyButton,
  },
  methods: {
    handleButtonClick() {
      alert('Button clicked!');
    },
  },
};
</script>
```
The `MyButton` component accepts `type`, `variant`, and `disabled` as props. The `variant` prop is used to determine the button's style. `<slot></slot>` allows you to pass content (like text or icons) into the button. The `handleClick` method emits a `click` event unless the button is disabled. This setup allows you to have a flexible and reusable button component in your Vue.js application.
If we look closely at this code, we will notice that it is using the Options API. You may think it should be using the latest and greatest, but the model has been exposed to tons of Options API Vue code and will naturally generate that code style.
Does this mean that the model is no good or that it can't be helpful? No, it doesn’t. It means the user didn’t supply enough context to the model.
If you asked someone to get you some soda from the store and they brought you Coke when you really wanted Pepsi, should you get mad at them? No, you shouldn't, because you asked for soda, not Pepsi.
The solution to these kinds of problems is to include clear instructions in the system prompt, specifying exactly what you want. Recently, OpenAI released a meta prompt in their docs. A meta prompt is a prompt (or set of prompts) used to generate your final prompt. It comes in handy because it takes much of the work out of structuring your prompt: you still need to give it your task and a list of requirements, but it will generate something like the next prompt below.
Now let's look at another example with a better prompt that attempts to solve the issue of generating older code styles.
Generate the Vue component requested using the script setup syntax and TypeScript with the Composition API. Include default values using `withDefaults` and define interfaces for the props. Place the script block at the top and use Tailwind CSS for styling. Favor the use of props over slots, and use `ref` over `reactive` in accordance with Vue's latest best practices.

Steps:

- Use `withDefaults` to set default values.
- Use `ref` over `reactive`.
- Place the script block above the `<template>` section.

Output Format:

A complete Vue component with the script block at the top, using TypeScript, the Composition API, and styling with Tailwind CSS. Ensure the component layout is clear and follows best practices for readability and maintainability.

Example Input:

A button component with a label and an emitting event. Props: `label` (string), `type` (string, default 'button'), `disabled` (boolean, default false).

Example Output:
```vue
<script setup lang="ts">
interface ButtonProps {
  label: string;
  type?: string;
  disabled?: boolean;
}

const props = withDefaults(defineProps<ButtonProps>(), {
  type: 'button',
  disabled: false,
});

const emit = defineEmits<{ (e: 'click'): void }>();

const handleClick = () => {
  emit('click');
};
</script>

<template>
  <button :class="['btn', 'btn-primary']" :type="type" :disabled="disabled" @click="handleClick">
    {{ label }}
  </button>
</template>

<style scoped>
/* Add any additional CSS styles here if necessary */
</style>
```
This system prompt has a clear task for the LLM and will steer it in the right direction. It also specifies, both in text and with examples, the style of code the user is looking for. The prompt is generic enough that it could be added either to the system prompt or at the top of the user prompt on each Vue component generation. Now let's take a look at how this prompt performs compared to the first prompt.
Here is the user prompt and the generated component, using gpt-4o-2024-08-06:

```
Create me a reusable button component that has a default, outline, ghost, and link variant along with a color prop, a slot for adding icons, and a loading state with a spinner, and that changes to a router-link when you add the `to` property.
```
```vue
<script setup lang="ts">
import { defineProps, withDefaults, computed, ref } from 'vue';
import { RouterLink } from 'vue-router';

interface ButtonProps {
  variant?: 'default' | 'outline' | 'ghost' | 'link';
  color?: 'primary' | 'secondary' | 'success' | 'danger';
  loading?: boolean;
  to?: string;
}

const props = withDefaults(defineProps<ButtonProps>(), {
  variant: 'default',
  color: 'primary',
  loading: false,
});

const buttonClasses = computed(() => {
  const baseClasses = 'inline-flex items-center justify-center font-medium rounded-md transition';
  const variantClasses = {
    default: `bg-${props.color}-500 text-white hover:bg-${props.color}-600`,
    outline: `border border-${props.color}-500 text-${props.color}-500 hover:bg-${props.color}-100`,
    ghost: `text-${props.color}-500 hover:bg-${props.color}-100`,
    link: `text-${props.color}-500 underline hover:text-${props.color}-600`,
  };
  const loadingClasses = props.loading ? 'opacity-50 cursor-not-allowed' : '';
  return `${baseClasses} ${variantClasses[props.variant]} ${loadingClasses}`;
});
</script>

<template>
  <component
    :is="to ? RouterLink : 'button'"
    :to="to"
    :class="buttonClasses"
    :disabled="loading"
  >
    <slot name="icon" />
    <span v-if="loading" class="loader mr-2"></span>
    <slot />
  </component>
</template>

<style scoped>
.loader {
  border: 2px solid transparent;
  border-top-color: currentColor;
  border-radius: 50%;
  width: 1em;
  height: 1em;
  animation: spin 0.75s linear infinite;
}

@keyframes spin {
  to {
    transform: rotate(360deg);
  }
}
</style>
```
The component has `default`, `outline`, `ghost`, and `link` variants. When `loading` is true, a spinner is shown and the button is disabled. When the `to` prop is provided, the button becomes a `router-link`.

This code now has the proper code style we wanted, and it has more features on the button component. This example shows the importance of specificity in the prompt.
Remember, LLMs are good at generating code but not so great at troubleshooting it. Often, you’ll find small mistakes—syntax errors or unnecessary wrappers—that can slip through. Tools like linters can catch these issues, and AI models are improving at fixing them as well.
When an LLM starts producing code with excessive if checks or convoluted logic, it might be a sign that it’s taken a wrong turn. This happens because once the model starts down the wrong path, every subsequent prediction is based on that initial mistake.
A common tactic I’ve found useful is to refresh the context. If you sense the model has strayed, it’s often best to stop, reset, and begin with a fresh prompt rather than continuing down the wrong path.
By understanding how to effectively prompt AI and knowing the strengths and limitations of LLMs, you can significantly enhance your coding workflow. This structured approach helps you maintain control over the development process while benefiting from AI's capabilities.
In Part 2, we'll explore specific AI tools and how to seamlessly integrate them into various stages of your project. We'll discuss which tools excel at different tasks and how to choose the right one for each phase of development. Stay tuned for practical insights on optimizing your AI-enhanced coding workflow!
© All rights reserved. Made with ❤️ by BitterBrains, Inc.