ollama_dart 1.4.1

Dart client for the Ollama API to run LLMs locally (OpenAI gpt-oss, DeepSeek-R1, Gemma 3, Llama 4, and more).

1.4.1 #

Fixed verification warnings in generated model classes.

1.4.0 #

This release adds inline streaming error detection for improved reliability when handling streamed responses.

  • FEAT: Detect inline streaming errors (#91). (9f0eaf37)
  • DOCS: Improve READMEs with badges, sponsor section, and vertex_ai deprecation (#90). (5741f2f3)
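With inline error detection, an error object that the Ollama server embeds mid-stream should now surface on the Dart stream instead of being silently dropped. A minimal sketch of consuming a stream defensively (client construction and exception names follow the 1.0.0 API notes below; treat the exact signatures as assumptions):

```dart
import 'dart:io';

import 'package:ollama_dart/ollama_dart.dart';

Future<void> main() async {
  final client = OllamaClient(); // assumes default http://localhost:11434

  try {
    final stream = client.chat.createStream(
      request: ChatRequest(
        model: 'llama3.2',
        messages: [ChatMessage.user('Why is the sky blue?')],
      ),
    );
    await for (final chunk in stream) {
      // Each chunk is a partial response parsed from the NDJSON stream.
      stdout.write(chunk.message?.content ?? '');
    }
  } on ApiException catch (e) {
    // Inline streaming errors now surface as exceptions
    // rather than terminating the stream silently.
    stderr.writeln('Server reported an error mid-stream: $e');
  }
}
```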

1.3.0 #

Added missing fields to chat, completion, and model response models.

  • FEAT: Add missing fields to response models (#80). (73e44241)

1.2.1 #

Internal improvements to build tooling and package publishing configuration.

  • REFACTOR: Migrate API skills to the shared api-toolkit CLI (#74). (923cc83e)
  • CHORE: Add .pubignore to exclude .agents/ and specs/ from publishing (#78). (0ff199bf)

1.2.0 #

Added baseUrl and defaultHeaders parameters to withApiKey constructors, fixed hashCode for list fields, and unified equality helpers.

  • FEAT: Add baseUrl and defaultHeaders to withApiKey constructors (#57). (f0dd0caa)
  • FIX: Use Object.hashAll() for list fields in hashCode (#65). (4b19abd9)
  • REFACTOR: Unify equality_helpers.dart across packages (#67). (ec2897f8)
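With the 1.2.0 additions, a client pointed at a remote, authenticated Ollama deployment can be built in one call. A sketch, assuming the key is the first positional argument and the header values are hypothetical:

```dart
import 'package:ollama_dart/ollama_dart.dart';

void main() {
  // baseUrl and defaultHeaders were added to withApiKey in 1.2.0;
  // the URL and header below are illustrative placeholders.
  final client = OllamaClient.withApiKey(
    'my-api-key',
    baseUrl: 'https://ollama.example.com',
    defaultHeaders: {'X-Request-Source': 'my-app'},
  );
}
```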

1.1.0 #

Added withApiKey convenience constructors for simplified client initialization.

  • FEAT: Add withApiKey convenience constructors (#56). (b06e3df3)
  • CHORE: Bump googleapis from 15.0.0 to 16.0.0 and Dart SDK to 3.9.0 (#52). (eae130b7)
  • CI: Add GitHub Actions test workflow (#50). (6c5f079a)

1.0.0 #

Note: This release has breaking changes.

TL;DR: Complete reimplementation with a new architecture, minimal dependencies, resource-based API, and improved developer experience. Hand-crafted models (no code generation), interceptor-driven architecture, comprehensive error handling, and full Ollama API coverage.

What's new #

  • Resource-based API organization:
    • client.chat — Chat completions (multi-turn conversations)
    • client.completions — Text generation (single-turn)
    • client.embeddings — Generate text embeddings
    • client.models — Model management (list, show, pull, push, copy, delete, create, ps)
    • client.version — Server version info
  • Architecture:
    • Interceptor chain (Auth → Logging → Error → Transport with Retry wrapper).
    • Authentication: Bearer token or custom via AuthProvider interface.
    • Retry with exponential backoff + jitter (only for idempotent methods on 429, 5xx, timeouts).
    • Abortable requests via abortTrigger parameter.
    • NDJSON streaming parser for real-time responses.
    • Central OllamaConfig (timeouts, retry policy, log level, baseUrl, auth).
  • Hand-crafted models:
    • No code generation dependencies (no freezed, json_serializable).
    • Minimal runtime dependencies (http, logging only).
    • Immutable models with copyWith using sentinel pattern.
    • Full type safety with sealed exception hierarchy.
  • Type-safe enums and sealed classes:
    • DoneReason enum for completion stop reasons (stop, length, load, unload).
    • ThinkValue sealed class for thinking mode (ThinkEnabled, ThinkWithLevel).
    • ResponseFormat sealed class for format options (JsonFormat, SchemaFormat).
    • MessageRole enum for message roles (system, user, assistant, tool).
  • Improved DX:
    • Simplified model names (e.g., ChatRequest instead of GenerateChatCompletionRequest).
    • Named constructors for common patterns (e.g., ChatMessage.user(), ChatMessage.system()).
    • Explicit streaming methods (createStream() vs create()).
    • Rich logging with field redaction for sensitive data.
  • Full API coverage:
    • Chat completions with tool calling support.
    • Text completions with thinking mode.
    • Embeddings generation.
    • Model management (list, show, create, copy, delete, pull, push, ps).
    • Server version info.
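Putting the pieces above together, a typical 1.0.0 session configures the client once and then works through the typed resources. A sketch under stated assumptions: the OllamaClient and OllamaConfig parameter names, and the ChatRequest/ChatMessage field names, follow the notes above but the exact signatures are not confirmed by this changelog:

```dart
import 'package:ollama_dart/ollama_dart.dart';

Future<void> main() async {
  // Central configuration: baseUrl, timeouts, retry policy, auth.
  // Parameter names here are assumptions based on the notes above.
  final client = OllamaClient(
    config: OllamaConfig(
      baseUrl: 'http://localhost:11434', // no /api suffix needed in 1.0.0+
      timeout: Duration(seconds: 60),
    ),
  );

  // Multi-turn chat via the chat resource, using the named
  // ChatMessage constructors and a sealed ResponseFormat variant.
  final response = await client.chat.create(
    request: ChatRequest(
      model: 'llama3.2',
      messages: [
        ChatMessage.system('You are a terse assistant.'),
        ChatMessage.user('Name one prime number, as JSON.'),
      ],
      format: JsonFormat(),
    ),
  );
  print(response.message?.content);

  // Model management via the models resource.
  final models = await client.models.list();
  print(models);
}
```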

Breaking Changes #

  • Resource-based API: Methods reorganized under strongly-typed resources:
    • client.generateChatCompletion() → client.chat.create()
    • client.generateChatCompletionStream() → client.chat.createStream()
    • client.generateCompletion() → client.completions.generate()
    • client.generateCompletionStream() → client.completions.generateStream()
    • client.generateEmbedding() → client.embeddings.create()
    • client.listModels() → client.models.list()
    • client.showModelInfo() → client.models.show()
    • client.listRunningModels() → client.models.ps()
    • client.getVersion() → client.version.get()
  • Model class renames:
    • GenerateChatCompletionRequest → ChatRequest
    • GenerateChatCompletionResponse → ChatResponse
    • GenerateCompletionRequest → GenerateRequest
    • GenerateCompletionResponse → GenerateResponse
    • GenerateEmbeddingRequest → EmbedRequest
    • GenerateEmbeddingResponse → EmbedResponse
    • Message → ChatMessage
    • Tool → ToolDefinition
    • RequestOptions → ModelOptions
  • Configuration: New OllamaConfig with AuthProvider pattern.
  • Exceptions: Replaced OllamaClientException with typed hierarchy:
    • ApiException, ValidationException, RateLimitException, TimeoutException, AbortedException.
  • Dependencies: Removed freezed, json_serializable; now minimal (http, logging).
  • Base URL: The /api suffix is no longer required; use http://localhost:11434 directly.

See MIGRATION.md for step-by-step examples and mapping tables.
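The mapping tables above translate into changes like the following before/after sketch. Pre-1.0 names are taken from this changelog's rename list; the post-1.0 call shape and the exception names are assumptions based on the notes above:

```dart
// Before (0.x):
// final res = await client.generateChatCompletion(
//   request: GenerateChatCompletionRequest(
//     model: 'llama3.2',
//     messages: [Message(role: MessageRole.user, content: 'Hi')],
//   ),
// );

// After (1.0.0): resource-based call, renamed models,
// named ChatMessage constructor, typed exceptions.
try {
  final res = await client.chat.create(
    request: ChatRequest(
      model: 'llama3.2',
      messages: [ChatMessage.user('Hi')],
    ),
  );
  print(res.message?.content);
} on RateLimitException {
  // Previously an untyped OllamaClientException; now a dedicated type.
  print('Rate limited, back off and retry.');
}
```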

Commits #

  • BREAKING FEAT: Complete v1.0.0 reimplementation (#1). (0ff41032)
  • FEAT: Add type-safe enums and sealed classes (#22). (73372581)
  • FIX: Pre-release documentation and code fixes (#45). (b33ae6d5)
  • REFACTOR: Align client package architecture across SDK packages (#37). (cf741ee1)
  • REFACTOR: Align API surface across all SDK packages (#36). (ed969cc7)
  • REFACTOR: Extract streaming helpers to StreamingResource mixin (#2). (0b6f0ed9)
  • DOCS: Refactors repository URLs to new location. (76835268)

0.3.0+1 #

0.3.0 #

Note: This release has breaking changes.

0.2.5 #

0.2.4 #

0.2.3 #

  • FEAT: Add think/thinking params to ollama_dart (#721). (701d7968)
  • FEAT: Add capabilities, projector_info, tensors and modified_at to Ollama's ModelInfo (#690). (c5e247db)
  • FEAT: Update dependencies (requires Dart 3.6.0) (#709). (9e3467f7)
  • REFACTOR: Remove fetch_client dependency in favor of http v1.3.0 (#659). (0e0a685c)
  • FIX: Fix linter issues (#656). (88a79c65)

0.2.2+1 #

  • REFACTOR: Add new lint rules and fix issues (#621). (60b10e00)
  • REFACTOR: Upgrade api clients generator version (#610). (0c8750e8)

0.2.2 #

  • FEAT: Update Ollama default model to llama-3.2 (#554). (f42ed0f0)

0.2.1 #

0.2.0 #

Note: This release has breaking changes.

  • FEAT: Add tool calling support (#504). (1ffdb41b)
  • BREAKING FEAT: Update Ollama default model to llama-3.1 (#506). (b1134bf1)
  • FEAT: Add support for Ollama version and model info (#488). (a110ecb7)
  • FEAT: Add suffix support in Ollama completions API (#503). (30d05a69)
  • BREAKING REFACTOR: Change Ollama push model status type from enum to String (#489). (90c9ccd9)
  • DOCS: Update Ollama request options default values in API docs (#479). (e1f93366)

0.1.2 #

  • FEAT: Add support for listing running Ollama models (#451). (cfaa31fb)
  • REFACTOR: Migrate conditional imports to js_interop (#453). (a6a78cfe)

0.1.1 #

  • FEAT: Support buffered stream responses (#445). (ce2ef30c)
  • FIX: Fix deserialization of sealed classes (#435). (7b9cf223)

0.1.0+1 #

  • FIX: digest path param in Ollama blob endpoints (#430). (2e9e935a)

0.1.0 #

Note: This release has breaking changes.

  • BREAKING FEAT: Align Ollama client to the Ollama v0.1.36 API (#411). (326212ce)
  • FEAT: Update Ollama default model from llama2 to llama3 (#417). (9d30b1a1)
  • FEAT: Add support for done reason (#413). (cc5b1b02)

0.0.3+1 #

  • FIX: Have the == implementation use Object instead of dynamic (#334). (89f7b0b9)

0.0.3 #

  • FEAT: Add Ollama keep_alive param to control how long models stay loaded (#319). (3b86e227)
  • FEAT: Update meta and test dependencies (#331). (912370ee)
  • DOCS: Update pubspecs. (d23ed89a)

0.0.2+1 #

0.0.2 #

  • FEAT: Add support for chat API and multi-modal LLMs (#274). (76e1a294)

0.0.1+2 #

  • FIX: Fetch web requests with big payloads dropping connection (#273). (425889dc)

0.0.1+1 #

0.0.1 #

  • FEAT: Implement ollama_dart, a Dart client for Ollama API (#238). (d213aa9c)

0.0.1-dev.1 #

  • Bootstrap project.
Publisher: davidmiguel.com (verified)
License: MIT
Dependencies: http, logging, meta