Unlocking the Power of AI in Your Vue Coding Workflow

David Robertson
October 28th, 2024

AI's Growing Influence in Software Development

AI has taken the software development world by storm. Spend any time on Twitter/X, and you’ll see opinions ranging from “AI is ruining coding” to “AI is the future and will replace developers entirely.” The reality, as always, lies somewhere in the middle.

The Polarizing Debate Around AI

While some believe AI will render developers obsolete, others think it's detrimental to the craft of coding. But perhaps the truth is more nuanced than that.

My Journey Into AI and Coding

Discovering AI Tools for Copywriting

I first became fascinated with AI technology when I stumbled upon “Jarvis” (now “Jasper”) back in early 2021. At the time, I was building websites with the then-beta Nuxt 3, and the hardest part for me was adding copy to a website. Copywriting wasn’t my strong suit, so when I saw that AI could fill that gap, I was hooked. It became an obsession.

Exploring AI Tools for Code

From TabNine to GitHub Copilot

I started hunting for tools that could do the same for me in code, paying close attention to the AI space. One of the first AI-powered tools I tried was TabNine, but I found it a bit disappointing. I tried a few more and eventually settled on GitHub Copilot, but the release of ChatGPT was the real game-changer.

The ChatGPT Revolution

The first thing I had it help me build was a Datepicker component in Vue. It wasn't perfect, but it did the job at the time, and it was something I wouldn't have attempted on my own before. For me, this was magic. This was what I had been looking for: though somewhat primitive, I could see the potential, and the future was AI.

Perfecting AI Integration in My Workflow

Overcoming Skepticism and Misconceptions

Over the past two years, I've been refining how I use AI in my coding workflow and learning the best ways to integrate it into business processes and applications. Along the way, I've seen many posts from people saying, "AI is trash," or "Look at this code AI gave me; it's no good at all." I kept thinking to myself: you just don't know how to use it correctly.

What This Blog Post Will Cover

In this blog post, I'll share my tried-and-tested workflow for integrating AI into your coding process, helping you unlock its full potential. Whether you're a seasoned developer or just starting out, these insights will guide you in using the power of AI effectively.

Table of Contents

  1. The Role of AI as a Coding Companion
  2. When and Why to Use AI in Your Workflow
  3. AI vs. Traditional Coding
  4. Understanding LLMs: How AI "Thinks"
  5. Mastering Effective Prompts
  6. Avoiding Pitfalls
  7. Additional Tips and Improvements
  8. Conclusion

The Role of AI as a Coding Companion

AI as Your Trusted Guide

I see AI like a trusted guide on a hike: it helps me find the path when I’m unsure, filling in gaps and pointing me in the right direction. But I still need to know the basics—how to walk the trail and climb the hills.

Foundations Before Tools

While this post won’t focus on tools like Copilot, we’ll first explore the basics of how LLMs work, then how to properly prompt them, and finally some common pitfalls to help you effectively use AI in your Vue coding workflow. Although the focus is on Vue, these principles can be applied to any coding environment.

Now that we understand the role AI can play as a supportive guide, let's discuss when and why to use AI in your workflow, and how to strike the right balance between AI assistance and manual coding.

When and Why to Use AI in Your Workflow

Knowing What You Want to Build

When I’m building something, I usually have a clear idea of what I want to achieve. Whether it’s adding a new feature or building a component, having a clear vision from the start is crucial. AI can be incredibly helpful in these situations, but it works best when you have a defined plan.

AI is most effective when you already know what you're building or adding to a project. If you're less certain or just brainstorming, AI can still be useful—especially for generating ideas or researching possible directions. Just like humans, LLMs require context. If they lack the right information, they will try to infer what you want, often leading to poor quality outputs and user frustration.

Leveraging AI for Quick Solutions

AI is great for providing quick answers or code snippets when you know exactly what you need. However, if you’re inexperienced, relying too much on AI can be risky. When you don’t fully understand the generated code, it can lead to a messy and unmanageable codebase.

The Balance Between AI Assistance and Manual Coding

In my own Python learning journey, AI was crucial in helping me grasp the basics. When I needed a quick answer or a snippet to solve a small problem, AI was excellent. However, relying too much on AI when dealing with more complex problems often caused frustration. Bugs, outdated syntax, and other issues that linters couldn’t catch led me to spend more time debugging than learning.

AI-generated code can sometimes be convoluted, filled with unnecessary try blocks, custom error handling, or redundant checks. This experience taught me that while AI can help accelerate learning, it’s essential to balance AI use with traditional coding practice to truly master a language.
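To illustrate (these are hypothetical examples I've written for contrast, not output from any particular model), here are two JavaScript versions of the same small task: the over-defensive style AI sometimes produces, next to the idiomatic equivalent.

```javascript
// Over-defensive style AI sometimes produces: redundant checks and a
// try/catch wrapped around code that cannot realistically throw.
function sumPricesVerbose(items) {
  try {
    if (items === null || items === undefined) return 0;
    if (!Array.isArray(items)) return 0;
    let total = 0;
    for (let i = 0; i < items.length; i++) {
      if (items[i] && typeof items[i].price === 'number') {
        total = total + items[i].price;
      }
    }
    return total;
  } catch (e) {
    return 0;
  }
}

// The same logic, idiomatic and easy to review.
const sumPrices = (items = []) =>
  items.reduce((total, { price = 0 }) => total + price, 0);
```

Both functions produce the same result, but the second is far easier to review and maintain, which is exactly what you should push the AI toward.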

Using AI to Deepen Understanding

AI can be a powerful learning tool, but you need to actively engage with it. For example, if AI writes composable functions for your Vue project that you’re unfamiliar with, take a moment to ask the AI to explain them: when to use them, where to use them, and why. This approach allows you to gain deeper insight quickly.
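For instance, if the AI hands you a `useCounter` composable and you ask it to explain the pattern, the core idea it should convey looks something like this framework-free sketch (a real Vue composable would use `ref` from 'vue'; this hypothetical version uses a plain closure so it stands alone):

```javascript
// A composable is just a function that encapsulates reusable stateful
// logic and returns the pieces a component needs. In real Vue code the
// state would be a ref(); here a plain closure keeps the sketch
// self-contained.
function useCounter(initial = 0) {
  let count = initial;
  return {
    get count() { return count; },    // read the current value
    increment: () => ++count,         // mutate only through helpers
    reset: () => { count = initial; },
  };
}

const counter = useCounter(5);
counter.increment();
```

Once you understand that a composable is "stateful logic packaged as a function," you can judge whether the AI's version belongs in your project.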

I’m not suggesting avoiding AI altogether while learning—just use it strategically. Once you grasp the fundamentals, it becomes easier to leverage AI effectively, and it transitions from being a crutch to being a valuable assistant.

Now that we've discussed when and why to use AI, let’s explore how LLMs actually work. Understanding their mechanics will help you make the most out of AI while avoiding common pitfalls.

Understanding LLMs: How AI "Thinks"

The Importance of Context in AI Outputs

To use AI effectively, especially in coding, it's crucial to understand how Large Language Models (LLMs) work. At their core, LLMs are sophisticated token prediction machines: they predict the next word or token based on the context you provide. Not many people focus on this point, but context makes all the difference in the quality of the outputs you get.

Steering the AI

Why You Need to Know How AI Works

To better understand why prompting and context are important, let's use the following example:

Imagine you’re steering a boat through a canal. The canal represents all the possible responses the AI can generate based on your prompt. If your prompts are clear and specific, you’re keeping the boat centered, narrowing the range of possible paths the AI might take next. But if your prompt is vague, the canal widens, and the boat can drift toward less relevant or incorrect paths.

This is how an LLM works—it predicts the next token based on probabilities. The more specific and context-rich your prompt, the narrower the range of possible tokens, which improves the quality of the output. If your instructions are unclear or ambiguous, the model has more "freedom" to choose from a wide range of possibilities, increasing the chances of an irrelevant or inaccurate response.

This also explains why LLMs can "go off the rails" when the generated output starts to stray from the correct path. Once the wrong path is taken, the model keeps predicting based on the new (wrong) context, often resulting in a spiral of bad code. The solution is to reset with a fresh context: clear the slate and steer the boat back into the center of the canal with a precise, well-structured prompt.

A Simple Example of Token Prediction

To make the concept of LLMs as token predictors even clearer, think of it like autocomplete on your phone. When you start typing a sentence, your phone predicts the next word based on what you've typed so far. If you type "I'm going to the," your phone might predict "store" or "park" because those are common next words. But if you type "I'm going to the under," your phone gets confused, offering strange completions because it's less sure of what you're trying to say.

LLMs operate on a much grander scale, predicting not just words but smaller units of meaning, called tokens, that make up text. When you give an AI a prompt, it starts narrowing down all possible responses by predicting each token one at a time, just like autocomplete. But if your prompt is vague or lacks context, the model has more room to make guesses, increasing the chance of irrelevant or incorrect predictions.
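To make this concrete, here is a toy next-token counter in JavaScript. It is purely illustrative (real LLMs learn probabilities over enormous corpora and vocabularies), but it shows the key dynamic: more specific context narrows the set of plausible continuations.

```javascript
// A tiny "corpus" standing in for the model's training data.
const corpus = [
  'i am going to the store',
  'i am going to the park',
  'i am going to the store today',
  'we walked to the store',
];

// Count which words follow the given context string across the corpus.
function nextTokenCounts(context) {
  const counts = {};
  for (const sentence of corpus) {
    const idx = sentence.indexOf(context);
    if (idx === -1) continue;
    const next = sentence.slice(idx + context.length).trim().split(' ')[0];
    if (next) counts[next] = (counts[next] || 0) + 1;
  }
  return counts;
}

// Vague context: several plausible continuations.
nextTokenCounts('to the');        // → { store: 3, park: 1 }
// More specific context narrows the choices to one.
nextTokenCounts('walked to the'); // → { store: 1 }
```

A precise prompt plays the same role as the longer context string: it shrinks the space of likely next tokens, which is why specificity improves output quality.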

Garbage In, Garbage Out

The Importance of Foundational Knowledge

Think of the outputs from an LLM as a reflection of your knowledge. The less you know about a subject, the harder it is to get good responses. If you’ve never written Vue code before and you naively ask for a random component, it may generate code for use with a script tag, or it may give you Vue 2, Options API, or some other syntax that you don’t want in your project. Without foundational knowledge, “garbage in equals garbage out.”

Recommended Resources for Understanding LLMs

Having a fundamental understanding of how an LLM works will go a long way in getting better outputs. While I’m not going to dive deep into that in this post, I highly recommend watching some videos by Andrej Karpathy on how LLMs work. His teaching style is fantastic, and you’ll certainly become better at utilizing AI after watching some of his content.

Now that we understand the basics of how LLMs function and their strengths and limitations, let's dive into how we can effectively integrate AI into our coding workflow. By following a structured approach, we can maximize the benefits of AI while avoiding common pitfalls.

Mastering Effective Prompts

The Significance of Clear Prompts

When it comes to prompting, you need to be clear about what you want. Even now, with the most advanced models, if you are ambiguous about what you are looking for, your outputs are going to be subpar. The better you understand how to prompt the model, the better your outputs will be.

Whether you are using AI as an assistant while coding or for any other task, the prompt is vital to getting usable outputs. Even frontier models need a well-crafted prompt.

Understanding System Prompts

Another key factor is the system prompt. This is information sent to the model on every request; it generally carries more weight than the user prompt and sets the stage for the output you receive. In ChatGPT, the system prompt corresponds to the Custom Instructions feature, though not every tool exposes it.
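In API terms, the distinction looks like this. Below is a sketch of the message structure used by chat-style APIs such as OpenAI's chat completions endpoint; the model name and instruction text are placeholders, not recommendations.

```javascript
// Sketch of a chat request body. The system message is sent on every
// request and frames how the model should answer; the user message is
// the individual question. Model name and text are placeholders.
function buildRequest(userPrompt) {
  return {
    model: 'gpt-4o',
    messages: [
      {
        role: 'system',
        content:
          'You are a Vue 3 assistant. Always use <script setup> with ' +
          'TypeScript and the Composition API.',
      },
      { role: 'user', content: userPrompt },
    ],
  };
}

const body = buildRequest('Generate me a Button for my Vue project.');
```

Because the system message rides along with every request, it is the natural home for standing preferences like code style, while the user message carries the task at hand.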

Now let's get into the practicality of all of this with some examples.

Example of an Ineffective Prompt

Here is an example of a bad user prompt, with no system prompt, using `gpt-4o-2024-08-06`:

Generate me a Button for my Vue project.

Here is the result from that prompt:

To create a button component for your Vue.js project, you can define a simple Vue component. Below is an example of how you might create a reusable button component:

### Step 1: Create the Button Component

First, create a new file for your button component, for example, `MyButton.vue`.

```js
<template>
  <button :class="buttonClass" @click="handleClick">
    <slot></slot>
  </button>
</template>

<script>
export default {
  name: 'MyButton',
  props: {
    type: {
      type: String,
      default: 'button',
    },
    variant: {
      type: String,
      default: 'primary',
    },
    disabled: {
      type: Boolean,
      default: false,
    },
  },
  computed: {
    buttonClass() {
      return {
        'btn': true,
        [`btn-${this.variant}`]: true,
        'disabled': this.disabled,
      };
    },
  },
  methods: {
    handleClick(event) {
      if (!this.disabled) {
        this.$emit('click', event);
      }
    },
  },
};
</script>

<style scoped>
.btn {
  padding: 10px 20px;
  border: none;
  border-radius: 4px;
  cursor: pointer;
  font-size: 16px;
  transition: background-color 0.3s;
}

.btn-primary {
  background-color: #007bff;
  color: white;
}

.btn-secondary {
  background-color: #6c757d;
  color: white;
}

.btn-success {
  background-color: #28a745;
  color: white;
}

.btn-danger {
  background-color: #dc3545;
  color: white;
}

.btn:disabled,
.btn.disabled {
  background-color: #cccccc;
  cursor: not-allowed;
}
</style>
```

### Step 2: Use the Button Component

You can now use this button component in your Vue application. Import it into the parent component where you want to use it.

```js
<template>
  <div>
    <MyButton variant="primary" @click="handleButtonClick">Primary Button</MyButton>
    <MyButton variant="secondary" @click="handleButtonClick">Secondary Button</MyButton>
    <MyButton variant="success" @click="handleButtonClick">Success Button</MyButton>
    <MyButton variant="danger" disabled @click="handleButtonClick">Disabled Button</MyButton>
  </div>
</template>

<script>
import MyButton from './MyButton.vue';

export default {
  components: {
    MyButton,
  },
  methods: {
    handleButtonClick() {
      alert('Button clicked!');
    },
  },
};
</script>
```

### Explanation

  • **Props**: The `MyButton` component accepts `type`, `variant`, and `disabled` as props. The `variant` prop is used to determine the button's style.
  • **Slot**: The `<slot></slot>` allows you to pass content (like text or icons) into the button.
  • **Event Handling**: The `handleClick` method emits a `click` event unless the button is disabled.
  • **Styling**: Basic styles are provided for different button variants. You can customize these styles as needed.

This setup allows you to have a flexible and reusable button component in your Vue.js application.
If we look closely at this code, we will notice that it is using the Options API. You may think it should be using the latest and greatest, but the model has been exposed to tons of Options API Vue code and will naturally generate that code style.

### Misconceptions About AI's Limitations

Does this mean that the model is no good or that it can't be helpful? No, it doesn’t. It means the user didn’t supply enough context to the model.

If you asked someone to get you some soda from the store and they brought you Coke, but you really wanted Pepsi, should you get mad at them for bringing Coke? No, you shouldn't, because you asked for soda, not Pepsi.

### Providing Clear Instructions

The solution to these kinds of problems is to include clear instructions in the system prompt, specifying exactly what you want. Recently, OpenAI released a [meta prompt](https://platform.openai.com/docs/guides/prompt-generation?context=text-out) in their docs. A meta prompt is a prompt (or set of prompts) used to generate your final prompt. It comes in handy because it takes much of the work out of structuring your prompt: you still need to give it your task and list of requirements, but it will generate something like the prompt below.

### The Importance of Specificity

Now let's look at another example with a better prompt that attempts to solve the issue of generating older code styles.

```markdown
Generate the Vue component requested using the script setup syntax and TypeScript with the Composition API. Include default values using `withDefaults` and define interfaces for the props. Place the script block at the top and use Tailwind CSS for styling. Favor the use of props over slots, and use `ref` over `reactive` in accordance with Vue's latest best practices.

# Steps

1. **Set up the script block** at the top using TypeScript and the script setup syntax.
2. **Define interfaces** for the props and use `withDefaults` to set default values.
3. **Implement the logic** using the Composition API, preferring `ref` over `reactive`.
4. **Design the template** by using props predominantly rather than slots.
5. **Apply Tailwind CSS** for styling within the `<template>` section.

# Output Format

A complete Vue component with the script block at the top, using TypeScript, the Composition API, and styling with Tailwind CSS. Ensure the component layout is clear and follows best practices for readability and maintainability.

# Examples

**Example Input:**
A button component with a label and an emitting event.
- Props: `label` (string), `type` (string, default 'button'), `disabled` (boolean, default false)
- Tailwind classes: 'btn', 'btn-primary'
- Emits: 'click' event

**Example Output:**

(example component code omitted)

# Notes

- Be sure to emit events appropriately using the setup function.
- Ensure that Tailwind classes are used correctly and consistently.
- Keep the component logic idiomatic to Vue's best practices, ensuring clear and concise code.
```

This system prompt gives the LLM a clear task and will steer it in the right direction. It also specifies, both in text and with examples, the style of code the user is looking for. The prompt is generic enough that it could be added either to the system prompt or at the top of the user prompt on each Vue component generation. Now let's take a look at how this prompt performs compared to the first one.

Here is the user prompt and the generated component using `gpt-4o-2024-08-06`:

Create me a reusable button component that has default, outline, ghost, and link variants along with a color prop, a slot for adding icons, a loading state with a spinner, and that changes to a router-link when you add the `to` property.

```js
<script setup lang="ts">
import { defineProps, withDefaults, computed, ref } from 'vue';
import { RouterLink } from 'vue-router';

interface ButtonProps {
  variant?: 'default' | 'outline' | 'ghost' | 'link';
  color?: 'primary' | 'secondary' | 'success' | 'danger';
  loading?: boolean;
  to?: string;
}

const props = withDefaults(defineProps<ButtonProps>(), {
  variant: 'default',
  color: 'primary',
  loading: false,
});

const buttonClasses = computed(() => {
  const baseClasses = 'inline-flex items-center justify-center font-medium rounded-md transition';
  const variantClasses = {
    default: `bg-${props.color}-500 text-white hover:bg-${props.color}-600`,
    outline: `border border-${props.color}-500 text-${props.color}-500 hover:bg-${props.color}-100`,
    ghost: `text-${props.color}-500 hover:bg-${props.color}-100`,
    link: `text-${props.color}-500 underline hover:text-${props.color}-600`,
  };
  const loadingClasses = props.loading ? 'opacity-50 cursor-not-allowed' : '';
  return `${baseClasses} ${variantClasses[props.variant]} ${loadingClasses}`;
});
</script>

<template>
  <component
    :is="to ? RouterLink : 'button'"
    :to="to"
    :class="buttonClasses"
    :disabled="loading"
  >
    <slot name="icon" />
    <span v-if="loading" class="loader mr-2"></span>
    <slot />
  </component>
</template>

<style scoped>
.loader {
  border: 2px solid transparent;
  border-top-color: currentColor;
  border-radius: 50%;
  width: 1em;
  height: 1em;
  animation: spin 0.75s linear infinite;
}

@keyframes spin {
  to {
    transform: rotate(360deg);
  }
}
</style>
```

Notes:

  • Variants: The component supports default, outline, ghost, and link variants.
  • Color: You can specify a color prop to adjust the button's color scheme.
  • Loading State: When loading is true, a spinner is shown, and the button is disabled.
  • Routing: If a to prop is provided, the button becomes a router-link.
  • Icons: Use the default slot to add icons or any other content inside the button.
  • Tailwind CSS: The component uses Tailwind CSS for styling, ensuring a consistent and customizable design.

This code now has the code style we wanted, and the button component has more features. This example shows the importance of specificity in your prompts.

Avoiding Pitfalls

Recognizing When AI Goes Off Track

Remember, LLMs are good at generating code but not so great at troubleshooting it. Often, you’ll find small mistakes—syntax errors or unnecessary wrappers—that can slip through. Tools like linters can catch these issues, and AI models are improving at fixing them as well.
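As a safety net for a Vue 3 project, a linter setup might look like the following sketch, which uses `eslint-plugin-vue`'s `vue3-recommended` preset (shown in the classic `.eslintrc.cjs` format; newer projects may use ESLint's flat config instead, so adapt as needed):

```javascript
// .eslintrc.cjs (classic ESLint config format)
module.exports = {
  root: true,
  extends: [
    'eslint:recommended',
    // Catches Vue-specific issues in AI-generated SFCs, such as
    // invalid template syntax or misused directives.
    'plugin:vue/vue3-recommended',
  ],
  parserOptions: {
    ecmaVersion: 'latest',
    sourceType: 'module',
  },
};
```

Running a linter over AI-generated components catches many of the small syntax slips before they ever reach a code review.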

When an LLM starts producing code with excessive if checks or convoluted logic, it might be a sign that it’s taken a wrong turn. This happens because once the model starts down the wrong path, every subsequent prediction is based on that initial mistake.

The Value of Resetting Context

A common tactic I’ve found useful is to refresh the context. If you sense the model has strayed, it’s often best to stop, reset, and begin with a fresh prompt rather than continuing down the wrong path.

Additional Tips and Improvements

Tips for Maximizing AI in Your Workflow

  1. Define Clear Objectives: Before using AI, outline what you aim to achieve. Clear goals lead to better AI assistance.
  2. Iterate on Prompts: Experiment with different prompt structures to find what works best for your needs.
  3. Stay Updated: AI tools evolve rapidly. Keep abreast of the latest developments to leverage new features and improvements.
  4. Combine AI with Human Insight: Use AI as a tool to enhance your skills, not replace them. Always review and understand AI-generated code.
  5. Leverage AI for Learning: Use AI explanations to deepen your understanding of complex topics and coding practices.

Conclusion

By understanding how to effectively prompt AI and knowing the strengths and limitations of LLMs, you can significantly enhance your coding workflow. This structured approach helps you maintain control over the development process while benefiting from AI's capabilities.

In Part 2, we'll explore specific AI tools and how to seamlessly integrate them into various stages of your project. We'll discuss which tools excel at different tasks and how to choose the right one for each phase of development. Stay tuned for practical insights on optimizing your AI-enhanced coding workflow!


David Robertson

David Robertson is a Full-Stack Engineer and AI Integration Specialist focused on integrating AI into business processes and products. He led the AI integration for Kickstart FormKit's form builder, helping the team create a best-in-class AI-powered solution. David's journey into AI began with discovering tools like Jasper for copywriting, which inspired him to explore AI's potential in code. He initially experimented with coding assistants like TabNine and GitHub Copilot, and later expanded his workflow with tools such as ChatGPT, Cursor, and Aider. His passion lies in teaching developers how to effectively use AI in their projects and guiding businesses to seamlessly integrate AI into their products and operations for maximum impact.

