    Reviews

    Ollama AI on WSL runs just as well as native on Windows 11

    By techupdateadmin | September 3, 2025 | 6 min read
    Ollama running the Gemma3:12b model on a Razer Blade 18.

    If you want to run local LLMs (Large Language Models) with Ollama on a Windows PC, you have two options. The first is to just use the Windows app and run it natively. The second is to run the Linux version through WSL.

    The first is definitely easier. For one, you don’t need to have WSL installed, and the setup process is simpler: you just download the Windows installer, run it, and you’re up and running.

    Installing Ollama on WSL requires jumping through a few more hoops. But on Ubuntu it works well, and performance is excellent, at least in my experience with an NVIDIA GPU. Still, unless you’re, say, a developer who already uses WSL in your workflows, there isn’t much reason to go this route over the regular Windows version.



    Getting Ollama set up on WSL

    With an NVIDIA GPU and the CUDA toolkit, you can leverage all of that power for Ollama running inside WSL. (Image credit: Windows Central | Ben Wilson)

    I’ll start by saying this isn’t an exhaustive setup guide, more a case of pointing in the right direction. To use Ollama on WSL — and specifically, I’m referring to Ubuntu, because that seems to be both the easiest and best documented — there are a couple of prerequisites. This post is also specific to NVIDIA GPUs.

    The first is an up-to-date NVIDIA driver for Windows. The second is a WSL-specific CUDA toolkit. Assuming you have both, the magic will just happen. Microsoft and NVIDIA’s documentation is the best place to start to guide you through the whole process.

    It doesn’t take too long, though how long depends on your internet connection, since there’s a fair amount to download.

    Once you have this handled, you can simply run the installation script to get Ollama up and running. Note, I haven’t explored running Ollama in a container on WSL; my experience is strictly linked to just installing it directly to Ubuntu.
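
    As a rough sketch of the Ubuntu-side steps, assuming the Windows NVIDIA driver and the WSL CUDA toolkit are already in place (llama3.2 here is just an example model, not one from the tests below):

    ```shell
    # Inside the Ubuntu/WSL shell:
    nvidia-smi                                     # confirm the GPU is visible from Linux
    curl -fsSL https://ollama.com/install.sh | sh  # Ollama's official Linux install script
    ollama run llama3.2                            # pull an example model and start chatting
    ```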


    During the installation process, it should automatically detect the NVIDIA GPU if you have everything set up correctly. You should see a message saying “NVIDIA GPU installed” as the script is running.

    From there on out, it’s the same as using Ollama on Windows, minus the GUI application. Download your first model and you’re away. There is a mild quirk, though, if you switch back to Windows.

    If you’re running WSL in a different tab, Ollama on Windows (in the terminal, at least) will only recognize the models you have installed on WSL. If you run the ollama list command, you won’t see any of the models you have installed on Windows, and if you try to run one you know you have, Ollama will go out and start downloading it again.

    In this case, you need to ensure WSL is properly shut down before using Ollama on Windows. You can do this by entering wsl --shutdown into a PowerShell terminal.
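
    As a quick sketch, from a PowerShell window on the Windows side:

    ```shell
    # Stop the WSL VM (and the Ollama server running inside it),
    # then query the Windows-side install.
    wsl --shutdown    # shuts down all running WSL distributions
    ollama list       # now lists the models installed on Windows
    ```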

    Almost identical performance in WSL to using Ollama on Windows

    Ollama running gpt-oss:20b on Windows inside Windows Terminal displaying its performance metrics.

    gpt-oss:20b is fast whether you run it on Windows or WSL. (Image credit: Windows Central)

    I’ll get to some numbers in a moment, but there is one point to address. It potentially doesn’t matter in the grand scheme of things, but you need to at least be mindful that just running WSL will use up some of your overall system resources.

    You can set the amount of RAM and CPU threads you want WSL to use easily in the WSL settings app. If you’re primarily going to be using the GPU, it’s less important. But, if you intend to use any models that don’t fit entirely into your VRAM, you’ll need to ensure you have sufficient resources allocated to WSL to pick up the slack.

    Remember that if the model doesn’t fit into the VRAM, Ollama will rope in the regular system memory, and with it, the CPU. Just be sure to allocate accordingly.
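
    The WSL settings app writes these limits to a .wslconfig file in your Windows user profile; a hypothetical example capping WSL at 16 GB of RAM and 8 logical processors (run wsl --shutdown afterwards for the change to take effect):

    ```ini
    # %UserProfile%\.wslconfig — values here are illustrative, size them to your machine
    [wsl2]
    memory=16GB
    processors=8
    ```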

    I’ll admit these tests are very simple, and I’m not verifying any accuracy of the output. It’s only to illustrate the comparable performance. I looked at four models that all run comfortably on an RTX 5090: deepseek-r1:14b, gpt-oss:20b, magistral:24b, and gemma3:27b.

    In each case, I asked the models two questions.

    • Write a story over 5 chapters with a theme and characters of your choice. (Story)
    • I want you to create a clone of Pong entirely within Python. Use no external assets, and any graphical elements must be created within the code. Ensure any dependencies are imported that are required. (Code)
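
    Ollama reports these throughput figures itself when you run a model with the --verbose flag; the eval rate is derived from the eval_count (tokens generated) and eval_duration (nanoseconds) fields in its API response. A minimal sketch of that arithmetic, with made-up sample numbers:

    ```python
    def eval_rate(eval_count: int, eval_duration_ns: int) -> float:
        """Tokens per second, as Ollama computes it: tokens generated
        divided by generation time (eval_duration is in nanoseconds)."""
        return eval_count / (eval_duration_ns / 1e9)

    # Made-up sample: 1,760 tokens generated in 10 seconds
    print(eval_rate(1760, 10_000_000_000))  # → 176.0
    ```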

    And the results:

    Model            | WSL (Story / Code)      | Windows 11 (Story / Code)
    gpt-oss:20b      | 176 / 177 tokens/sec    | 176 / 181 tokens/sec
    magistral:24b    | 78 / 77 tokens/sec      | 79 / 73 tokens/sec
    deepseek-r1:14b  | 98 / 98 tokens/sec      | 101 / 102 tokens/sec
    gemma3:27b       | 58 / 57 tokens/sec      | 58 / 58 tokens/sec

    There are some minor fluctuations, but performance is, as near as makes no difference, identical.

    The only difference in system resource usage is the additional RAM consumed while WSL is active. And since none of these models exceeded the dedicated VRAM, that had no impact on the models’ performance.

    For developers working in WSL, Ollama is just as powerful

    Ollama running the Gemma3:12b model on a Razer Blade 18.

    Windows or WSL, performance in Ollama is pretty darn good. (Image credit: Windows Central)

    An Average Joe such as myself (even one who loves WSL) doesn’t really need to bother with using Ollama this way. My main use of Ollama at the moment is education, both as a learning tool and as a way to teach myself how it works.

    For that, using it on Windows 11 is absolutely fine, either in the terminal or hooked into the Page Assist browser extension, which is something else I’ve been playing with recently.

    But WSL is a bridge between Windows and Linux for developers. Those whose workflows require WSL can use Ollama this way without any loss in performance.

    Even today, it still feels a little bit like magic that you can run Linux on Windows like this and have full use of your NVIDIA GPU. And it all just works.
