ollama_dart 1.4.1
ollama_dart: ^1.4.1
Dart client for the Ollama API to run LLMs locally (OpenAI gpt-oss, DeepSeek-R1, Gemma 3, Llama 4, and more).
1.4.1 #
1.4.0 #
1.3.0 #
1.2.1 #
1.2.0 #
Added baseUrl and defaultHeaders parameters to withApiKey constructors, fixed hashCode for list fields, and unified equality helpers.
1.1.0 #
1.0.0 #
Note: This release has breaking changes.
TL;DR: Complete reimplementation with a new architecture, minimal dependencies, resource-based API, and improved developer experience. Hand-crafted models (no code generation), interceptor-driven architecture, comprehensive error handling, and full Ollama API coverage.
What's new #
- Resource-based API organization:
  - client.chat: Chat completions (multi-turn conversations)
  - client.completions: Text generation (single-turn)
  - client.embeddings: Generate text embeddings
  - client.models: Model management (list, show, pull, push, copy, delete, create, ps)
  - client.version: Server version info
- Architecture:
  - Interceptor chain (Auth → Logging → Error → Transport with Retry wrapper).
  - Authentication: Bearer token or custom via AuthProvider interface.
  - Retry with exponential backoff + jitter (only for idempotent methods on 429, 5xx, timeouts).
  - Abortable requests via abortTrigger parameter.
  - NDJSON streaming parser for real-time responses.
  - Central OllamaConfig (timeouts, retry policy, log level, baseUrl, auth).
- Hand-crafted models:
  - No code generation dependencies (no freezed, json_serializable).
  - Minimal runtime dependencies (http, logging only).
  - Immutable models with copyWith using the sentinel pattern.
  - Full type safety with a sealed exception hierarchy.
- Type-safe enums and sealed classes:
  - DoneReason enum for completion stop reasons (stop, length, load, unload).
  - ThinkValue sealed class for thinking mode (ThinkEnabled, ThinkWithLevel).
  - ResponseFormat sealed class for format options (JsonFormat, SchemaFormat).
  - MessageRole enum for message roles (system, user, assistant, tool).
- Improved DX:
  - Simplified model names (e.g., ChatRequest instead of GenerateChatCompletionRequest).
  - Named constructors for common patterns (e.g., ChatMessage.user(), ChatMessage.system()).
  - Explicit streaming methods (createStream() vs create()).
  - Rich logging with field redaction for sensitive data.
- Full API coverage:
- Chat completions with tool calling support.
- Text completions with thinking mode.
- Embeddings generation.
- Model management (list, show, create, copy, delete, pull, push, ps).
- Server version info.
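Putting the pieces above together, a minimal usage sketch might look like the following. It is assembled from the names in this changelog (OllamaClient, OllamaConfig, ChatRequest, ChatMessage.user(), client.chat.create()/createStream()); the exact constructor parameters, the response field access (response.message?.content), and the model name 'gemma3' are assumptions, not verified signatures.

```dart
import 'package:ollama_dart/ollama_dart.dart';

Future<void> main() async {
  // Central config: base URL without the /api suffix, per the notes below.
  final client = OllamaClient(
    config: OllamaConfig(baseUrl: 'http://localhost:11434'),
  );

  // Single-shot chat completion via the chat resource.
  final response = await client.chat.create(
    ChatRequest(
      model: 'gemma3', // assumed model name for illustration
      messages: [ChatMessage.user('Why is the sky blue?')],
    ),
  );
  print(response.message?.content);

  // Streaming variant: the NDJSON parser surfaces chunks as a Dart Stream.
  await for (final chunk in client.chat.createStream(
    ChatRequest(
      model: 'gemma3',
      messages: [ChatMessage.user('Tell me a joke.')],
    ),
  )) {
    // Each chunk carries a partial assistant message.
  }
}
```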
Breaking Changes #
- Resource-based API: Methods reorganized under strongly-typed resources:
  - client.generateChatCompletion() → client.chat.create()
  - client.generateChatCompletionStream() → client.chat.createStream()
  - client.generateCompletion() → client.completions.generate()
  - client.generateCompletionStream() → client.completions.generateStream()
  - client.generateEmbedding() → client.embeddings.create()
  - client.listModels() → client.models.list()
  - client.showModelInfo() → client.models.show()
  - client.listRunningModels() → client.models.ps()
  - client.getVersion() → client.version.get()
- Model class renames:
  - GenerateChatCompletionRequest → ChatRequest
  - GenerateChatCompletionResponse → ChatResponse
  - GenerateCompletionRequest → GenerateRequest
  - GenerateCompletionResponse → GenerateResponse
  - GenerateEmbeddingRequest → EmbedRequest
  - GenerateEmbeddingResponse → EmbedResponse
  - Message → ChatMessage
  - Tool → ToolDefinition
  - RequestOptions → ModelOptions
- Configuration: New OllamaConfig with the AuthProvider pattern.
- Exceptions: Replaced OllamaClientException with a typed hierarchy: ApiException, ValidationException, RateLimitException, TimeoutException, AbortedException.
- Dependencies: Removed freezed, json_serializable; now minimal (http, logging).
- Base URL: No longer needs the /api suffix; just use http://localhost:11434.
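A migration sketch for the old generateChatCompletion() call, including the new typed exception handling. The resource and exception names come from this changelog; the constructor shapes, the response field access, and the model name 'llama3.2' are illustrative assumptions.

```dart
import 'package:ollama_dart/ollama_dart.dart';

Future<void> main() async {
  final client = OllamaClient(
    // v1.0.0: base URL no longer carries the /api suffix.
    config: OllamaConfig(baseUrl: 'http://localhost:11434'),
  );

  try {
    // Old: client.generateChatCompletion(request: GenerateChatCompletionRequest(...))
    // New: resource-based call with the renamed request model.
    final res = await client.chat.create(
      ChatRequest(
        model: 'llama3.2', // assumed model name for illustration
        messages: [ChatMessage.user('Hello!')],
      ),
    );
    print(res.message?.content);
  } on RateLimitException {
    // 429s on idempotent calls are retried with backoff first;
    // this branch fires once retries are exhausted.
  } on TimeoutException {
    // Request exceeded the timeout configured in OllamaConfig.
  } on ApiException catch (e) {
    // Other non-2xx responses from the server.
    print('API error: $e');
  }
}
```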
See MIGRATION.md for step-by-step examples and mapping tables.
Commits #
- BREAKING FEAT: Complete v1.0.0 reimplementation (#1). (0ff41032)
- FEAT: Add type-safe enums and sealed classes (#22). (73372581)
- FIX: Pre-release documentation and code fixes (#45). (b33ae6d5)
- REFACTOR: Align client package architecture across SDK packages (#37). (cf741ee1)
- REFACTOR: Align API surface across all SDK packages (#36). (ed969cc7)
- REFACTOR: Extract streaming helpers to StreamingResource mixin (#2). (0b6f0ed9)
- DOCS: Refactors repository URLs to new location. (76835268)
0.3.0 #
Note: This release has breaking changes.
- FEAT: Enhance CreateModelRequest with new fields (#802). (c5c73549)
- FEAT: Add tool_name and index support (#800). (f0f77286)
- FEAT: Add remote_model and remote_host support (#799). (36b9d5f2)
- FEAT: Add truncate and shift support (#798). (098a0815)
- FEAT: Support high, medium, low for think (#797). (1cbe3fcf)
- FEAT: Support JSON schema in ResponseFormat (#796). (2f399465)
- FEAT: Upgrade to http v1.5.0 (#785). (f7c87790)
- BREAKING REFACTOR: Improve factory names (#806). (fbfa7acb)
- BREAKING BUILD: Require Dart >=3.8.0 (#792). (b887f5c6)
0.2.3 #
- FEAT: Add think/thinking params to ollama_dart (#721). (701d7968)
- FEAT: Add capabilities, projector_info, tensors and modified_at to Ollama's ModelInfo (#690). (c5e247db)
- FEAT: Update dependencies (requires Dart 3.6.0) (#709). (9e3467f7)
- REFACTOR: Remove fetch_client dependency in favor of http v1.3.0 (#659). (0e0a685c)
- FIX: Fix linter issues (#656). (88a79c65)
0.2.2+1 #
0.2.0 #
Note: This release has breaking changes.
- FEAT: Add tool calling support (#504). (1ffdb41b)
- BREAKING FEAT: Update Ollama default model to llama-3.1 (#506). (b1134bf1)
- FEAT: Add support for Ollama version and model info (#488). (a110ecb7)
- FEAT: Add suffix support in Ollama completions API (#503). (30d05a69)
- BREAKING REFACTOR: Change Ollama push model status type from enum to String (#489). (90c9ccd9)
- DOCS: Update Ollama request options default values in API docs (#479). (e1f93366)
0.1.2 #
0.1.1 #
0.1.0 #
0.0.3 #
0.0.1-dev.1 #
- Bootstrap project.