| Port details |
- ollama: Run Llama 2, Mistral, and other large language models
- Version: 0.23.1 (category: misc)
- Version of this port present on the latest quarterly branch: 0.19.0_1
- Maintainer: yuri@FreeBSD.org
- Port Added: 2024-08-06 10:06:06
- Last Update: 2026-05-06 15:30:11
- Commit Hash: 309dee4
- People watching this port, also watch: firefox, pipewire, alpaca, syncthing, drm-61-kmod
- License: MIT
- WWW:
- https://ollama.com
- https://github.com/ollama/ollama
- Description:
- Ollama is a tool that allows you to get up and running with large language
models locally. It provides a simple command-line interface to run and
manage models, as well as a REST API for programmatic access.
Ollama supports a wide range of models available on ollama.com/library,
including popular models like Llama 3, Gemma, and Mistral. It also
allows you to customize models and create your own.
With Ollama, you can:
- Run large language models on your own machine
- Chat with models in the terminal
- Generate text and embeddings
- Customize models with your own prompts and data
- Expose models through a REST API for use in your applications
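The REST API mentioned above can be exercised with a plain HTTP request. A minimal sketch, assuming `ollama serve` is listening on its default address (localhost:11434) and that "mistral" is merely an example model name that has already been pulled:

```shell
# Sketch of a one-shot request to the Ollama REST API (/api/generate).
# "mistral" is an example model name; it must already be pulled.
payload='{"model": "mistral", "prompt": "Why is the sky blue?", "stream": false}'
echo "$payload"
# With `ollama serve` running, the request would be sent with:
#   curl -s http://localhost:11434/api/generate -d "$payload"
```

With `"stream": false` the server returns one JSON object containing the full response instead of a stream of partial chunks.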
- Manual pages:
- FreshPorts has no man page information for this port.
- pkg-plist: as obtained via: make generate-plist
- USE_RC_SUBR (Service Scripts)
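Since the port installs an rc(8) service script (USE_RC_SUBR), the daemon can be enabled at boot in the usual FreeBSD way. A sketch, assuming the script is named 'ollama' (as the pkg-message suggests) and run as root:

```shell
# Enable the ollama service at boot and start it now (run as root).
sysrc ollama_enable=YES    # writes ollama_enable="YES" to /etc/rc.conf
service ollama start       # start the daemon immediately
service ollama status      # confirm it is running
# Optional service variable named in this port's commit history
# (example value; check the rc script for the exact knobs):
#   sysrc ollama_context_length=8192
```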
-
- Dependency lines:
-
- Conflicts:
- CONFLICTS_BUILD:
- To install the port:
- cd /usr/ports/misc/ollama/ && make install clean
- To add the package, run one of these commands:
- pkg install misc/ollama
- pkg install ollama
NOTE: If this package has multiple flavors (see below), then use one of them instead of the name specified above.
- PKGNAME: ollama
- Flavors: there is no flavor information for this port.
- distinfo:
- TIMESTAMP = 1778036619
SHA256 (go/misc_ollama/ollama-v0.23.1/v0.31.2.tar.gz) = bdb9b619f80962dd00c0bffb65e59c53f565c2b550f189a1467f8bc6089401ab
SIZE (go/misc_ollama/ollama-v0.23.1/v0.31.2.tar.gz) = 4251596
- Dependencies
- NOTE: FreshPorts displays only information on required and default dependencies. Optional dependencies are not covered.
- Build dependencies:
-
- bash : shells/bash
- miniaudio.h : audio/miniaudio
- json_fwd.hpp : devel/nlohmann-json
- stb_image.h : devel/stb
- patchelf : sysutils/patchelf
- fmt-config.cmake : devel/libfmt
- glslc : graphics/shaderc
- vulkan.h : graphics/vulkan-headers
- cmake : devel/cmake-core
- go126 : lang/go126
- pkgconf>=1.3.0_1 : devel/pkgconf
- Library dependencies:
-
- libopenblas.so : math/openblas
- libvulkan.so : graphics/vulkan-loader
- Fetch dependencies:
-
- go126 : lang/go126
- This port is required by (for Run):
- misc/alpaca
Configuration Options:
- ===> The following configuration options are available for ollama-0.23.1:
====> Options available for the group BACKENDS
CPU=on: Build CPU backend shared libraries for various SIMD instruction sets
VULKAN=on: Build Vulkan GPU backend shared library
MLX=on: Build MLX backend for image generation (CPU)
===> Use 'make config' to modify these settings
- Options name:
- misc_ollama
- USES:
- cmake:indirect go:1.26+,modules localbase pkgconfig zip
- pkg-message:
- For install:
- You installed ollama: the AI model runner.
To run ollama, please open two terminals.
1. In the first terminal, please run:
$ ollama serve
2. In the second terminal, please run:
$ ollama run gemma3
or
$ ollama run mistral
This will download and run the specified AI model.
You will be able to interact with it in plain English.
Please see https://ollama.com/library for the list
of all supported models.
The command "ollama list" lists all models downloaded
into your system.
If a model fails to load into your GPU, please use
the provided ollama-limit-gpu-layers script to create
model flavors with different num_gpu parameters.
ollama uses many gigabytes of disk space in your home directory,
because advanced AI models are often very large.
Please symlink ~/.ollama to a large disk if needed.
Working examples:
(1) Coding with the model gpt-oss:20b:
1. Start the ollama service with 'sudo service ollama start',
or set up and start the 'ollama' service.
2. install claude-code and run:
ANTHROPIC_BASE_URL=http://localhost:11434 \
ANTHROPIC_AUTH_TOKEN=ollama \
ANTHROPIC_MODEL=gpt-oss:20b \
ANTHROPIC_DEFAULT_SONNET_MODEL=gpt-oss:20b \
ANTHROPIC_DEFAULT_OPUS_MODEL=gpt-oss:20b \
ANTHROPIC_DEFAULT_HAIKU_MODEL=gpt-oss:20b \
claude
3. Ask it to write a program.
(2) Image generation with the model x/z-image-turbo:
1. Start the ollama service with 'sudo service ollama start',
or set up and start the 'ollama' service.
2. run:
ollama run x/z-image-turbo {textual description of the desired image}
There are also a lot of text-to-text models that you can chat with.
You might also want to install the 'gollama' package,
which makes running ollama models very easy.
- Master Sites:
| Commit History - (may be incomplete: for full details, see links to repositories near top of page) |
| Version | Date | Credits | Log message |
| 0.23.1 | 06 May 2026 15:30:11 | Yuri Victorovich (yuri) | misc/ollama: update 0.23.0 → 0.23.1 |
| 0.23.0 | 04 May 2026 07:56:58 | Yuri Victorovich (yuri) | misc/ollama: update 0.22.1 → 0.23.0 |
| 0.22.1 | 01 May 2026 06:07:48 | Yuri Victorovich (yuri) | misc/ollama: update 0.21.2 → 0.22.1 |
| 0.21.2 | 24 Apr 2026 08:33:35 | Yuri Victorovich (yuri) | misc/ollama: update 0.21.0 → 0.21.2 |
| 0.21.0_1 | 23 Apr 2026 18:42:15 | Yuri Victorovich (yuri) | misc/ollama: Add missing patch that was accidentally dropped during the last port update. |
| 0.21.0 | 18 Apr 2026 08:09:52 | Yuri Victorovich (yuri) | misc/ollama: update 0.20.7 → 0.21.0 |
| 0.20.7 | 14 Apr 2026 07:12:40 | Yuri Victorovich (yuri) | misc/ollama: update 0.20.6 → 0.20.7 |
| 0.20.6 | 13 Apr 2026 20:44:30 | Yuri Victorovich (yuri) | misc/ollama: Update pkg-message |
| 0.20.6 | 13 Apr 2026 07:20:14 | Yuri Victorovich (yuri) | misc/ollama: update 0.20.5 → 0.20.6 |
| 0.20.5 | 10 Apr 2026 09:40:54 | Yuri Victorovich (yuri) | misc/ollama: update 0.20.4 → 0.20.5 |
| 0.20.4 | 09 Apr 2026 18:57:00 | Yuri Victorovich (yuri) | misc/ollama: update 0.20.2 → 0.20.4 |
| 0.20.2 | 05 Apr 2026 18:12:05 | Yuri Victorovich (yuri) | misc/ollama: update 0.20.0 → 0.20.2 |
| 0.20.0 | 03 Apr 2026 05:05:00 | Yuri Victorovich (yuri) | misc/ollama: update 0.19.0 → 0.20.0 |
| 0.19.0_2 | 02 Apr 2026 06:45:22 | Yuri Victorovich (yuri) | misc/ollama: Fix package on non-x86 architectures. x86-specific shared libs are excluded. Reported by: fallout |
| 0.19.0_2 | 31 Mar 2026 04:05:08 | Yuri Victorovich (yuri) | misc/ollama: Fix HOME env var in service |
| 0.19.0_1 | 30 Mar 2026 20:45:05 | Yuri Victorovich (yuri) | misc/ollama: Fix Vulkan support; Fix home directory value in ollama service |
| 0.19.0 | 30 Mar 2026 20:02:04 | Yuri Victorovich (yuri) | misc/ollama: update 0.18.3 → 0.19.0 |
| 0.18.3_4 | 29 Mar 2026 22:57:55 | Yuri Victorovich (yuri) | misc/ollama: Add config variables ollama_context_length and llama_use_vulkan to ollama service |
| 0.18.3_3 | 29 Mar 2026 19:03:39 | Yuri Victorovich (yuri) | misc/ollama: Add ollama service; Add BUILD_CONFLICTS |
| 0.18.3_2 | 28 Mar 2026 19:29:12 | Yuri Victorovich (yuri) | misc/ollama: Add to pkg-message |
| 0.18.3_2 | 27 Mar 2026 22:58:07 | Yuri Victorovich (yuri) | misc/ollama: add patches for: image generation hanging/timeout on slow CPUs, etc. Enable image gen via MLX - now your local llama can finally draw Beastie in sunglasses: $ ollama run x/z-image-turbo "FreeBSD Beastie in sunglasses drinking coffee at the beach" |
| 0.18.3_1 | 27 Mar 2026 08:27:07 | Yuri Victorovich (yuri) | misc/ollama: fix MLX option (image generation) |
| 0.18.3 | 27 Mar 2026 00:59:24 | Yuri Victorovich (yuri) | misc/ollama: update 0.18.2 → 0.18.3 |
| 0.18.2_1 | 22 Mar 2026 19:28:35 | Yuri Victorovich (yuri) | misc/ollama: Fix options handling; Fix MLX option; Make it default |
| 0.18.2 | 22 Mar 2026 03:24:01 | Yuri Victorovich (yuri) | misc/ollama: update 0.18.0 → 0.18.2 |
| 0.18.0_1 | 17 Mar 2026 16:11:37 | Yuri Victorovich (yuri) | misc/ollama: Broken on i386. Reported by: fallout |
| 0.18.0_1 | 14 Mar 2026 19:34:24 | Yuri Victorovich (yuri) | misc/ollama: Add patch |
| 0.18.0 | 14 Mar 2026 07:37:32 | Yuri Victorovich (yuri) | misc/ollama: update 0.17.7 → 0.18.0 |
| 0.17.7 | 11 Mar 2026 09:29:45 | Yuri Victorovich (yuri) | misc/ollama: update 0.15.1 → 0.17.7. PR: 293686 |
| 0.15.1_3 | 06 Mar 2026 03:33:23 | Adam Weinberger (adamw) | various: Bump ports for Go 1.25.8 |
| 0.15.1_2 | 11 Feb 2026 19:21:45 | Adam Weinberger (adamw) | various: Bump ports for Go default 1.24->1.25 |
| 0.15.1_1 | 05 Feb 2026 16:48:00 | Adam Weinberger (adamw) | various: Bump Go ports for 1.24.13 |
| 0.15.1 | 26 Jan 2026 02:25:25 | Yuri Victorovich (yuri) | misc/ollama: update 0.13.5 → 0.15.1 |
| 0.13.5_1 | 16 Jan 2026 17:49:03 | Adam Weinberger (adamw) | various: Bump Go ports for 1.24.12 |
| 0.13.5 | 05 Jan 2026 00:19:57 | Yuri Victorovich (yuri) | misc/ollama: update 0.13.1-rc0 → 0.13.5 |
| 0.13.1.r0_2 | 15 Dec 2025 23:06:00 | Dag-Erling Smørgrav (des) | many: Unpin Go ports. Ports that were pinned to a deprecated version of Go (1.23 or older) have been unpinned; ports that were pinned to a still-supported version of Go (1.24 or newer) have been converted to requesting that as their minimum Go version; ports that had been forcibly deprecated for pinning an expired Go version have been undeprecated. |
| 0.13.1.r0_2 | 03 Dec 2025 18:24:45 | Adam Weinberger (adamw) | various: Bump Go ports for 1.24.11 |
| 0.13.1.r0_1 | 29 Nov 2025 20:50:01 | Yuri Victorovich (yuri) | misc/ollama: Add computational backends. Options CPU and VULKAN enable various CPU backends and the VULKAN backend. CPU backends are for different generations of SIMD instructions. Backends are loaded automatically when they are installed. |
| 0.13.1.r0 | 29 Nov 2025 20:50:00 | Yuri Victorovich (yuri) | misc/ollama: Remove architecture restriction. Ollama should likely work fine on all architectures. |
| 0.13.1.r0 | 27 Nov 2025 23:47:44 | Yuri Victorovich (yuri) | misc/ollama: update 0.3.6 → 0.13.1.r0 |
| 0.3.6_5 | 02 Apr 2025 02:07:58 | Adam Weinberger (adamw) | go: Bump ports for go124 update |
| 0.3.6_4 | 05 Mar 2025 16:02:56 | Adam Weinberger (adamw) | Bump all go ports for yesterday's releases |
| 0.3.6_3 | 28 Feb 2025 10:09:27 | Yuri Victorovich (yuri) | misc/ollama: Update WWW |
| 0.3.6_3 | 28 Feb 2025 09:24:22 | Yuri Victorovich (yuri); Author: Yusuf Yaman | misc/ollama: Fix typos in pkg-message. PR: 285014 |
| 0.3.6_3 | 21 Jan 2025 22:21:11 | Ashish SHUKLA (ashish) | all: Bump after lang/go122 update. PR: 284181; MFH: 2025Q1 |
| 0.3.6_2 | 08 Nov 2024 20:58:46 | Ashish SHUKLA (ashish) | all: Bump after lang/go122 update. PR: 281842 |
| 0.3.6_1 | 27 Aug 2024 19:44:05 | Yuri Victorovich (yuri) | misc/ollama: Remove unnecessary paragraph from pkg-message |
| 0.3.6_1 | 27 Aug 2024 17:44:27 | Yuri Victorovich (yuri) | misc/ollama: Add environment variables to 'ollama start' to work around memory allocation issues |
| 0.3.6_1 | 19 Aug 2024 01:12:09 | Yuri Victorovich (yuri) | misc/ollama: Improve pkg-message |
| 0.3.6 | 18 Aug 2024 20:44:06 | Yuri Victorovich (yuri) | misc/ollama: update 0.3.4 → 0.3.6 |
| 0.3.4_4 | 10 Aug 2024 07:07:35 | Yuri Victorovich (yuri) | misc/ollama: add CONFLICTS_BUILD |
| 0.3.4_4 | 09 Aug 2024 06:24:09 | Ashish SHUKLA (ashish) | all: Bump after lang/go122 update |
| 0.3.4_3 | 09 Aug 2024 05:03:35 | Yuri Victorovich (yuri) | misc/ollama: Fix Vulkan compatibility |
| 0.3.4_2 | 08 Aug 2024 20:01:10 | Yuri Victorovich (yuri) | misc/ollama: Fix inference; Add ONLY_FOR_ARGHxx lines; Add pkg-message |
| 0.3.4_1 | 07 Aug 2024 18:33:34 | Yuri Victorovich (yuri) | misc/ollama: Add llama-cpp as dependency |
| 0.3.4 | 06 Aug 2024 22:32:55 | Yuri Victorovich (yuri) | misc/ollama: Remove one unnecessary architecture-specific place in scripts |
| 0.3.4 | 06 Aug 2024 10:04:44 | Yuri Victorovich (yuri) | misc/ollama: New port: Run Llama 2, Mistral, and other large language models |