When the AI lives on your devices: Local Models & the future of creative freedom

    Hey everyone, this is Henry!
Image: Bloomberg

    We're living in the era of AI-everything - from ChatGPT handling daily tasks and Claude writing essays to Midjourney painting dreamscapes. But while everyone is busy in the cloud, there's a quiet revolution happening right on our own machines. Local AI models - yeah, the ones you can download and run on your own computer or laptop - are opening up a whole new world for creators. No subscriptions, no internet, no censorship. Just raw, customizable, always-there brainpower that bends to your vibe. For indie creators, writers, digital artists, and developers, this isn't just about being techy - it's about reclaiming control. And maybe, just maybe, it's the next step in creating with AI, not just using it.

Always ready

    What's the biggest difference between a cloud model and a local one? Simple: a local model is always ready when you need it, as long as your computer has battery. It lives right on your PC or laptop and runs on your own CPU and GPU - even a mid-range laptop can handle a lightweight one. With a local model sitting on your machine, you can write stories, generate art, brainstorm ideas, or even code apps without needing an internet connection.

Image: Nils Steyaert

    And the best part? Imagine you're using the o4 model and suddenly it tells you you've hit your usage limit and need to wait until tomorrow to use it again. With a local AI, you can make it work for you for hours and it's still fine. No monthly subscription fee just to get unlimited access to basics like image generation, voiceover creation, or text expansion.

    It all sounds great, but I have to be honest: you won't be able to run ChatGPT's o4 model on your PC. First, it's far too heavy to run locally; second, it isn't open-source. However, there are tons of other models available online, like LLaMA 2, Mistral, or Gemma. They might not be as smart as GPT-4 right out of the box, but they're getting close. And with fine-tuning, some of them are beasts. You can even tweak them to match your writing tone, generate art in your style, or behave like your favorite OC - seriously.

Specs

    However, each local model has its own footprint. For example, LLaMA 3.2 3B is 3.42 GB and Gemma 3 12B is 7.33 GB, while Mistral 7B only requires 4.34 GB. By now you might be wondering what those "B"s are, and whether they matter. Yes, they matter a lot. The "B" stands for billions of parameters - basically the number of little brain connections inside the model. More parameters usually means more power and smarter output, but it also means more RAM and storage needed to run it. So, depending on your needs and your computer's specs, you should pick a model that fits. For example, my laptop is a gaming one with an Intel Core i7-11800H CPU and an RTX 3050 Ti GPU, but only 8 GB of RAM installed; a 3B LLaMA model runs fine on it, but a 7B slows it down because it's quite heavy for my device. Here's a breakdown:

Model Size | What It Means | Best For | RAM Needed
3B–4B | Lightweight, snappy 🏃 | Note-taking, casual chatting, summarizing | ~4–6 GB
7B | Solid all-rounder 💪 | Writing, coding, brainstorming | ~8–12 GB
13B+ | Big brain energy 🧠💥 | Deep convos, advanced logic, character roleplay | 16–32 GB+

    So yeah — even if you don’t have a gaming PC, you can still run smaller models easily. And they’ve gotten really smart lately. Plus, you can use quantized versions (basically compressed models) that run even smoother without burning your RAM alive 🔥.
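    If you want to sanity-check whether a model will fit before downloading it, a rough rule of thumb is parameters × bytes per parameter. Here's a tiny back-of-the-napkin sketch in Python - the byte sizes are approximate and it ignores overhead like the context cache, so treat the numbers as a floor, not a promise:

```python
# Rough memory estimate for local LLM weights.
# Rule of thumb: memory ~= parameter count x bytes per parameter.
# Real usage is higher (context cache, runtime overhead).

BYTES_PER_PARAM = {
    "fp16": 2.0,  # full half-precision weights
    "q8": 1.0,    # ~8-bit quantized
    "q4": 0.5,    # ~4-bit quantized (common on laptops)
}

def estimate_gb(params_billions: float, fmt: str) -> float:
    """Approximate weight size in GB for a given precision."""
    # billions of params x bytes per param = gigabytes of weights
    return params_billions * BYTES_PER_PARAM[fmt]

for size in (3, 7, 13):
    print(f"{size}B model: ~{estimate_gb(size, 'fp16'):.1f} GB at fp16, "
          f"~{estimate_gb(size, 'q4'):.1f} GB at 4-bit")
```

    That math also hints at why the Mistral 7B download above is only ~4.34 GB: it's a quantized build, not the full-precision weights.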

How to install?

    To run models locally, you'll actually need another program to manage them. There are a few options out there, and one of the most popular is Ollama. However, most people use it from the command window, which looks boring - as far as I can tell, they're mostly programmers who want to do something serious with their local models. But what if you're just a "normal" person who wants to use it like ChatGPT? I found another app called LM Studio: it has a friendly user interface where you can hold multiple conversations with your models, and you can install models directly from the app instead of going to websites to download them.
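    If you do want to peek at the command-line route, here's roughly what talking to Ollama looks like from Python. This is just a sketch under a few assumptions: you've installed Ollama, pulled a small model with `ollama pull llama3.2:3b` in a terminal, and its local server is running on the default port 11434 (model tags change over time, so double-check the names on Ollama's site):

```python
# Sketch: asking a local model a question through Ollama's REST API.
# Assumes Ollama is installed, `ollama pull llama3.2:3b` has been run,
# and the server is listening on its default port (11434).
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2:3b",  # a small model that fits in ~8 GB RAM
        "prompt": "Give me three title ideas for a cozy sci-fi short story.",
        "stream": False,         # wait for the whole reply instead of streaming
    },
    timeout=120,
)
print(response.json()["response"])
```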

This is what it looks like when you use a local AI from the command line (Image: DhanushKumar)
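    And remember how I said you can make a model behave like your favorite OC? Short of a real fine-tune, the cheapest trick is a system prompt. Here's a sketch using Ollama's chat endpoint - the persona text is obviously just a made-up example, and the same assumptions as above apply (Ollama running locally, model already pulled):

```python
# Sketch: giving a local model a persona via a system prompt,
# using Ollama's chat API (same default port, 11434).
import requests

response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.2:3b",
        "messages": [
            # The system message sets the persona for the whole chat.
            {"role": "system",
             "content": "You are Nova, a sarcastic android sidekick "
                        "who answers in short, punchy sentences."},
            {"role": "user", "content": "Nova, should I rewrite chapter 3?"},
        ],
        "stream": False,
    },
    timeout=120,
)
print(response.json()["message"]["content"])
```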

Here is the UI of LM Studio
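    By the way, LM Studio isn't just a chat window: it can also serve whatever model you've loaded through a local, OpenAI-compatible API (by default at http://localhost:1234/v1, last time I checked), so your own scripts and tools can talk to it too. A small sketch with the official openai Python package - the model name below is a placeholder, since LM Studio routes requests to whichever model you've loaded:

```python
# Sketch: chatting with a model loaded in LM Studio through its local,
# OpenAI-compatible server. Assumes the server is started in the app
# and uses the default address (http://localhost:1234/v1).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

reply = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio uses the loaded model
    messages=[
        {"role": "system", "content": "You are a helpful writing buddy."},
        {"role": "user", "content": "Help me brainstorm a villain's backstory."},
    ],
)
print(reply.choices[0].message.content)
```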

    We’re still early in the local AI game, but the momentum is real. What started as a tool for a few tech-savvy folks is now turning into a playground for artists🎨, coders💻, and indie creators👀. With the right setup, a bit of curiosity, and a willingness to experiment, anyone can unlock this creative beast right from their desk. No cloud dependency, no strings attached.

    So whether you're sketching a new character, writing lore for your next story, or just vibing with a custom chatbot trained on your journal entries, local AI is here to give you full creative freedom - your way, your pace, your rules (I'm starting to sound like an ad, aren't I?)

    Anyway, for me, this isn't just the future of AI creation. It's the beginning of something more personal, powerful, and real 🚀✨
