    Why you need to use LM Studio not Ollama for AI on mini PCs

By techupdateadmin | August 19, 2025 | 5 Mins Read

    I’ve been playing around a lot more recently with LLMs running locally on my PC, and for the most part, that’s been with Ollama.

    Ollama is a fantastic tool that makes it extremely simple to download and run LLMs on your own PC, made even easier with the launch of its new GUI application. It can hook into your workflow beautifully, but it has one glaring flaw.

    If you don’t have a dedicated GPU, its performance isn’t that good. I’ve been using it with an RTX 5080 (and soon to try an RTX 5090), and it flies along, but on something like the new Geekom A9 Max mini PC, which just landed for review, it’s a very different story.


    Ollama does not officially support integrated graphics on Windows. Out of the box, it’ll default to using the CPU, and honestly, there are times when I’m not interested in looking for workarounds.

    Instead, enter LM Studio, which makes it so simple to leverage integrated GPUs that even I can do it in seconds. That’s what I want, that’s what you should want, and it’s why you need to ditch Ollama if you want to put your integrated GPU to work.

    What is LM Studio?

    The performance here is excellent for a fairly hefty model using what is a mobile AMD APU. (Image credit: Windows Central)

    Without going too far into the weeds, LM Studio is another app you can use on your Windows PC to download LLMs and play about with them. It does things a little differently from Ollama, but the end result is the same.

    It also has a significantly more advanced GUI than Ollama’s official app, which is another point in its favor. To get the most from Ollama outside of the terminal, you need something third-party, such as OpenWebUI or the Page Assist browser extension.

    It’s ultimately a one-stop shop to find and install models, and then interact with them through a familiar-feeling chatbot interface. There are many advanced features you can play with, but for now, we’ll leave it simple.

    The big win is that LM Studio supports Vulkan, which means you can offload models to the integrated GPU for compute on both AMD and Intel. That’s a big deal, because I’ve yet to see a situation in my own testing where using the GPU wasn’t faster than the CPU.

    So, how do you use an integrated GPU in LM Studio?

    Offloading gpt-oss:20b to an integrated AMD GPU in LM Studio

    Making sure you’re utilizing your iGPU is as simple as changing a value on this slider. (Image credit: Windows Central)

    The other beauty of going the LM Studio route is that nothing fancy or technical needs to be done to use your iGPU with an LLM.

    To use a model, you simply load it up with the dropdown box at the top. When you select the one you want, a bunch of settings will appear. In this instance, we’re only really interested in the GPU offload one.

    It’s a sliding scale and works in layers. Layers are essentially the building blocks of the LLM, and your prompt passes through them one by one until the final layer produces the response. It’s up to you how many you offload, but anything below all of them means your CPU will be picking up some of the slack.
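
    For anyone who prefers to see the idea in code rather than a slider, here’s a minimal sketch of the same layer-offload concept using the llama-cpp-python bindings (LM Studio uses llama.cpp as its engine under the hood). This isn’t LM Studio’s own API, and the model path is just a placeholder:

```python
# Illustration of GPU layer offloading with llama-cpp-python.
# This is NOT LM Studio's API; the model path below is a placeholder for any GGUF file.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-model.gguf",  # placeholder: point this at a real GGUF file
    n_gpu_layers=-1,  # -1 offloads every layer to the GPU; a smaller number leaves the rest on the CPU
    n_ctx=4096,       # context window size
)

result = llm("In one sentence, what does GPU layer offloading do?", max_tokens=64)
print(result["choices"][0]["text"])
```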

    Once you’re happy, click load model, and it’ll load into memory and operate within your designated parameters.

    Using LM Studio you, too, can be this happy using a mini PC for local AI! (Image credit: Ben Wilson | Windows Central)

    As an example, on the aforementioned Geekom A9 Max, I have Radeon 890M integrated graphics, and I want to use basically all of it. When I’m using AI, I’m not gaming, so I want the whole GPU to focus on the LLM. I set 16GB of my 32GB total system memory as reserved for the GPU to load the model into, and get to work.
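
    As a back-of-envelope sanity check (my own arithmetic, not a figure from LM Studio or Geekom), you can estimate whether a quantized model fits in that reservation from its parameter count and bits per weight:

```python
# Rough estimate of a quantized model's weight footprint.
# Assumption: weights dominate; the KV cache and runtime overhead add more on top.
def weight_footprint_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for bits in (4.0, 5.0, 8.0):
    print(f"20B parameters @ {bits} bits/weight ≈ {weight_footprint_gb(20, bits):.1f} GB")
# A ~20B model at 4-5 bits per weight sits comfortably inside a 16GB reservation;
# at 8 bits per weight it would not fit.
```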

    With a model like gpt-oss:20b, I can load the entire thing into that dedicated GPU memory, use the GPU for compute, leave the rest of the system memory and the CPU well alone, and get around 25 tokens per second.

    Is it as fast as my desktop PC with an RTX 5080 inside? Not at all. Is it faster than using the CPU? Absolutely, and with the added benefit that it isn’t using almost all of my available CPU resources, which other software on the PC might want. The GPU is basically dormant most of the time; why wouldn’t you want to use it?

    I could probably get even more performance if I spent the time getting into the weeds, but that’s not the focus here. The focus is on LM Studio, and how, if you don’t have a dedicated GPU, it’s hands-down the tool to use for local LLMs. Just install it and everything works.
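
    And once a model is loaded, you aren’t limited to the chat window. LM Studio can also expose it through a local, OpenAI-compatible server, which is handy if you want other tools on the mini PC to use the iGPU-accelerated model. A minimal sketch, assuming the server is enabled in LM Studio’s settings, listening on its default port 1234, and that "gpt-oss-20b" matches the model identifier LM Studio shows (yours may differ):

```python
# A minimal sketch: talking to a model loaded in LM Studio through its
# OpenAI-compatible local server. Assumptions: the server is enabled and
# listening on http://localhost:1234 (the default), and "gpt-oss-20b" is the
# identifier LM Studio shows for the loaded model.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")  # key is ignored locally

response = client.chat.completions.create(
    model="gpt-oss-20b",
    messages=[{"role": "user", "content": "Give me one reason to run an LLM on an iGPU."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```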
