
    How My 7-Year-Old Laptop Successfully Runs A Local AI LLM

    By techupdateadmin | August 13, 2025 | 5 min read
    A Huawei MateBook D running Ollama on Fedora 42

    We’re led to believe that running AI locally on a PC needs some kind of beefed-up hardware. That’s partly true, but as with gaming, it’s a sliding scale. You can play many of the same games on a Steam Deck as you can on a PC with an RTX 5090 inside. The experience is not the same, but what matters is that you can play.

    That’s also true of dabbling with local AI tools, such as running LLMs using something like Ollama. While a beefcake GPU with lashings of VRAM is ideal, it’s not absolutely essential.
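    For anyone who wants to follow along on Linux, getting Ollama installed is a one-liner; the command below is the project's own documented install script, so it assumes nothing beyond having curl available:

        # Install Ollama on Linux via the official install script
        curl -fsSL https://ollama.com/install.sh | sh

        # Confirm the CLI is on your PATH
        ollama --version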

    Case in point: my seven-year-old Huawei MateBook D with a now fairly underpowered AMD Ryzen 5 2500U, 8GB of RAM, and no dedicated graphics. But it can still run Ollama, it can still load up some LLMs, and I'd say the experience is usable.



    There are caveats to running AI on older hardware

    There is a selection of 1b models on Ollama that you can run even on older hardware. (Image credit: Windows Central)

    Gaming is the perfect point of comparison for AI right now. To get the most from the latest, most demanding content, you need some serious hardware. But you can also enjoy many of the latest titles on older, lower-powered machines, even those that rely solely on integrated graphics.

    The caveat is that the older, less powerful hardware simply won't perform as well. You're looking at 30 FPS instead of (at least) 144 FPS, and you'll have to sacrifice graphics settings, ray tracing, and resolution, but you can do it.

    The same is true of AI. You’re not going to be churning out hundreds of tokens per second, nor will you be loading up the latest, biggest models.

    But there are plenty of smaller models you can absolutely try out on older hardware, as I have. If you have a compatible GPU, great: Ollama will use it. I don't, however, and I've still had some success.


    Specifically, I’ve loaded up some ‘1b’ models, that is, 1 billion parameters, into Ollama on my old laptop, which is currently running Fedora 42. The APU isn’t officially supported by Windows 11 anyway, but I usually run Linux on older hardware regardless.

    Ollama is platform-agnostic, though, with versions for Mac and Windows alongside Linux. So it doesn’t matter what you’re using; even an older Mac may get some mileage with this.
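    Pulling a model down is equally simple; the three tags below match the models I put through their paces later in this piece, straight from the Ollama library:

        # Download the small models tested below (roughly 1GB each, give or take)
        ollama pull gemma3:1b
        ollama pull llama3.2:1b
        ollama pull deepseek-r1:1.5b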

    So, just how ‘usable’ is it?

    Ollama performance stats running on a Ryzen 5 2500U on battery power.

    It’s not lightning fast or anything on battery power, but it works, and for shorter exchanges it’s perfectly usable. (Image credit: Windows Central)

    I haven't tried any models larger than 1b on this laptop, and I don't think it would be worth the time. But testing three such models, gemma3:1b, llama3.2:1b, and deepseek-r1:1.5b, yielded similar performance across the board. In each case, the LLMs used a 4k context length, and I wouldn't risk trying anything higher.
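    If you want to reproduce these numbers, Ollama prints per-response performance stats when launched with the --verbose flag, and the context window can be pinned from inside the session (num_ctx is Ollama's context length parameter; 4096 is the 4k figure above):

        # Interactive session with per-response performance stats
        ollama run gemma3:1b --verbose

        # Then, at the session prompt, set a 4k context window
        /set parameter num_ctx 4096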

    First up is my old favorite:

    “How much wood would a woodchuck chuck if a woodchuck could chuck wood?”

    Both Gemma 3 and Llama 3.2 churned out a short, fairly quick response, recording just under 10 tokens per second. DeepSeek R1, by comparison, has reasoning, so it runs through its thought process first before giving an answer, and came in a little behind at just under 8 tokens per second.

    But, while not what you'd call fast, it's usable. All three churned out responses significantly faster than I could type (and definitely faster than I can think).
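    For context, this is the sort of summary --verbose prints after each response. The figures below are illustrative of the battery-power runs rather than an exact transcript, but the eval rate line is where the tokens-per-second numbers above come from:

        total duration:       44.6s
        load duration:        1.2s
        prompt eval count:    26 token(s)
        prompt eval duration: 830ms
        prompt eval rate:     31.3 tokens/s
        eval count:           410 token(s)
        eval duration:        42.5s
        eval rate:            9.6 tokens/s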

    Ollama on Linux being asked to tell me a story

    Tell me a story… (Image credit: Windows Central)

    The second test was a bit meatier. I asked all three models to generate a simple PowerShell script to fetch the raw content of text files from a GitHub repository, and to ask clarifying questions to make sure I was happy with the response, in order to build the best possible script.

    Note that I haven't actually validated whether the output works. In this instance, I'm purely interested in how well (and how fast) the models can work the problem.
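    You can run this sort of test non-interactively, too. A prompt along these lines (paraphrased, not my exact wording) passed straight to ollama run will print the response plus the same stats, then exit:

        # One-shot prompt; --verbose appends the performance summary
        ollama run llama3.2:1b --verbose \
          "Write a PowerShell script that fetches the raw content of text files from a GitHub repository. Before finalizing it, ask me any clarifying questions you need to build the best possible script."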

    Gemma 3 gave an extremely detailed output explaining each part of the script, asked questions as directed to tailor it, and did it all at just under 9 tokens per second. DeepSeek R1, with its reasoning, again ran a little slower at 7.5 tokens per second, and didn't ask questions. Llama 3.2 gave output of similar quality to Gemma 3, at just under 9 tokens per second.

    Oh, and one thing I haven't mentioned yet: this was all on battery power with a balanced power plan. Connected to external power, all three models essentially doubled their tokens per second and took about half as long to complete the task.

    I think that's the more interesting point here. You could be out and about with a laptop on battery and still get stuff done. At home or in the office, hooked up to power, even older hardware can be fairly capable.
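    If you want to flip between those states yourself, Fedora ships power-profiles-daemon, and its CLI makes the switch easy; this assumes that daemon (rather than something like TLP) is managing power on your machine:

        # Show the active power profile (balanced, in my battery tests)
        powerprofilesctl get

        # Switch to the performance profile when plugged in
        powerprofilesctl set performance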

    This was all done purely on the CPU and RAM, too. The laptop reserves a couple of GB of RAM for the iGPU, but even so, that iGPU isn't supported by Ollama. A quick ollama ps shows the model running 100% on the CPU.
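    Here's roughly what that looks like; the PROCESSOR column is the giveaway, and the ID, size, and expiry values below are illustrative:

        $ ollama ps
        NAME         ID              SIZE      PROCESSOR    UNTIL
        gemma3:1b    8648f39daa8f    1.9 GB    100% CPU     4 minutes from now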

    Video: OpenAI's nightmare: DeepSeek R1 on a Raspberry Pi (YouTube)

    These are small LLMs, but the truth is that you can play around with them, integrate them into your workflow, and learn some skills, all without breaking the bank on crazy-powerful new hardware.

    You don’t have to look too far on YouTube to find creators running AI on a Raspberry Pi and home servers made up of older and (now) cheaper hardware. Even with an old, mid-tier laptop, you can probably at least get started.

    It’s not ChatGPT, but it’s something. Even an old PC can be an AI PC.
