Port details on branch 2025Q1
- ollama: Run Llama 2, Mistral, and other large language models
- Version: 0.3.6_3 (the same version is present on the latest quarterly branch)
- Category: misc
- Maintainer: yuri@FreeBSD.org
- Port Added: 2025-01-21 23:30:14
- Last Update: 2025-01-21 22:52:53
- Commit Hash: 3a5f172
- License: MIT
- WWW: https://ollama.com/
- Description:
- Ollama lets you get up and running with large language models.
Ollama supports a list of models available on ollama.com/library.
- Manual pages:
- FreshPorts has no man page information for this port.
- pkg-plist: as obtained via: make generate-plist
- Dependency lines:
- ollama>0:misc/ollama
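For example, another port that needs ollama at run time would reference this dependency line in its Makefile (a minimal sketch of the standard ports convention):

RUN_DEPENDS=	ollama>0:misc/ollama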
- Conflicts:
- CONFLICTS_BUILD:
- To install the port:
- cd /usr/ports/misc/ollama/ && make install clean
- To add the package, run one of these commands:
- pkg install misc/ollama
- pkg install ollama
- NOTE: If this package has multiple flavors (see below), then use one of them instead of the name specified above.
- PKGNAME: ollama
- Flavors: there is no flavor information for this port.
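To check for flavors yourself, the standard ports idiom below prints the FLAVORS variable, which is empty for this port (assuming a ports tree at /usr/ports):

$ make -C /usr/ports/misc/ollama -V FLAVORS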
- ONLY_FOR_ARCHS: amd64
- distinfo:
TIMESTAMP = 1724010094
SHA256 (go/misc_ollama/ollama-v0.3.6/v0.3.6.mod) = 16c078d8f0b29f84598fb04e3979acf86da41eb41bf4ff8363548e490f38b54e
SIZE (go/misc_ollama/ollama-v0.3.6/v0.3.6.mod) = 2992
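To verify a fetched distfile against these values, use the standard ports checksum target:

$ make -C /usr/ports/misc/ollama checksum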
- Dependencies
- NOTE: FreshPorts displays only information on required and default dependencies. Optional dependencies are not covered.
- Build dependencies:
- bash : shells/bash
- cmake : devel/cmake-core
- glslc : graphics/shaderc
- vulkan-headers>0 : graphics/vulkan-headers
- go122 : lang/go122
- pkgconf>=1.3.0_1 : devel/pkgconf
- Library dependencies:
- libvulkan.so : graphics/vulkan-loader
- Fetch dependencies:
- go122 : lang/go122
- ca_root_nss>0 : security/ca_root_nss
- There are no ports dependent upon this port
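To compute the full recursive dependency list from the ports framework itself, use the standard targets (build-depends-list and run-depends-list also exist for narrower views):

$ make -C /usr/ports/misc/ollama all-depends-list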
Configuration Options:
- No options to configure
- Options name:
- misc_ollama
- USES:
- go:1.22,modules pkgconfig zip
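These USES values come from the port's Makefile; a minimal sketch of what such a stanza looks like, reconstructed from the details on this page (the GO_MODULE value is assumed from the upstream repository, not copied from the port):

PORTNAME=	ollama
PORTREVISION=	3
DISTVERSIONPREFIX=	v
DISTVERSION=	0.3.6
CATEGORIES=	misc
USES=		go:1.22,modules pkgconfig zip
GO_MODULE=	github.com/ollama/ollama	# assumed upstream module path

Here go:1.22,modules builds with Go 1.22 in module mode (matching the go.mod entry in distinfo and the lang/go122 dependency), pkgconfig adds the build-time dependency on devel/pkgconf, and zip arranges for zip extraction support.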
- pkg-message:
- For install:
- You installed ollama: the AI model runner.
To run ollama, please open two terminals.
1. In the first terminal, please run:
$ OLLAMA_NUM_PARALLEL=1 OLLAMA_DEBUG=1 LLAMA_DEBUG=1 ollama start
2. In the second terminal, please run:
$ ollama run mistral
This will download and run the AI model "mistral".
You will be able to interact with it in plain English.
Please see https://ollama.com/library for the list
of all supported models.
The command "ollama list" lists all models downloaded
into your system.
When the model fails to load into your GPU, please use
the provided ollama-limit-gpu-layers script to create
model flavors with different num_gpu parameters.
ollama uses many gigabytes of disk space in your home directory,
because advanced AI models are often very large.
Please symlink ~/.ollama to a large disk if needed.
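For example, to relocate the model store onto a larger filesystem and symlink it back (the /bigdisk path is hypothetical):

$ mv ~/.ollama /bigdisk/ollama
$ ln -s /bigdisk/ollama ~/.ollama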
- Master Sites: