dart_rl 0.2.0-alpha.3
A simple Dart package implementing reinforcement learning algorithms (Q-Learning, SARSA, Expected-SARSA).
# Changelog
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## 0.2.0-alpha.3 - 2025-11-10

### Added

- Documentation for the `DartRlAction` constructor and `value` property
- Documentation for the `DartRlState` constructor and `value` property
- Documentation for the `DartRlAgent` constructor, with parameter descriptions
### Changed

- Example structure: restructured the examples to follow Dart package layout guidelines
  - Moved `frozen_lake_example.dart` to `example/frozen_lake/main.dart` with its own `pubspec.yaml`
  - Moved `grid_world_example.dart` to `example/grid_world/main.dart` with its own `pubspec.yaml`
  - Each example now has its own subdirectory with a `pubspec.yaml` that depends on the parent package
- Updated README.md to reflect the new example structure and running instructions
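Under this layout, each example's `pubspec.yaml` points back at the parent package with a path dependency. A minimal sketch of what `example/frozen_lake/pubspec.yaml` might contain (the example name, description, and SDK constraint are illustrative assumptions, not copied from the package):

```yaml
# Hypothetical pubspec.yaml for example/frozen_lake/ — the name and
# SDK constraint below are assumptions for illustration.
name: frozen_lake_example
description: Frozen Lake example for dart_rl.
publish_to: none   # examples are not published separately

environment:
  sdk: ^3.0.0

dependencies:
  dart_rl:
    path: ../..    # resolves to the parent package two directories up
```

From the example's directory, `dart pub get` followed by `dart run main.dart` would then run the example against the local package sources.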
## 0.2.0-alpha.1 - 2025-11-10
### Changed

- Major refactor: simplified the package to focus on a pure Dart implementation
- Renamed `DartRLState` to `DartRlState` for a consistent naming convention
- Renamed `DartRLAction` to `DartRlAction` for a consistent naming convention
- Renamed `DartRLStateAction` to `DartRlStateAction` for a consistent naming convention
- Split the state and action classes into separate files (`state.dart`, `action.dart`, `state_action.dart`)
- Replaced Equatable-based equality with manual `operator ==` and `hashCode` implementations
- Updated the package description to emphasize simplicity
- Simplified the README with streamlined examples and documentation
- Updated the version to 0.2.0-alpha.1
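For reference, replacing Equatable with manual equality typically looks like the sketch below. The field name `value` matches the documented property, but the class body is an assumption for illustration, not the package's actual source:

```dart
// Sketch of manual value equality as an alternative to Equatable.
// The class shape is illustrative; dart_rl's real DartRlState may differ.
class DartRlState {
  final int value;
  const DartRlState(this.value);

  @override
  bool operator ==(Object other) =>
      identical(this, other) ||
      (other is DartRlState && other.value == value);

  @override
  int get hashCode => value.hashCode;
}

void main() {
  // Two instances with the same value now compare equal, so they can
  // serve as map keys, e.g. in a Q-table keyed by state.
  print(DartRlState(3) == DartRlState(3)); // true
}
```

Checking `identical` first short-circuits the common same-instance case, and keeping `hashCode` derived from the same field as `==` preserves the map-key contract.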
### Removed

- Flutter SDK dependency: the package is now pure Dart
- Dependencies: removed the `collection`, `equatable`, and `flutter` packages
- Dev dependencies: removed the `pedantic` package
- Flutter integration: removed the `AgentNotifier` and `trainStream` functionality
- Flutter directory: removed the entire `lib/src/flutter/` directory
- Advanced features: removed the `DecaySchedule`, `TrainingStats`, and `QTableSerializer` classes
- Flutter-specific files: removed `agent_notifier.dart` and `agent_stream.dart`
- Additional files: removed `decay_schedules.dart`, `serialization.dart`, and `training_stats.dart`
- Flutter documentation: removed the Flutter integration sections from the README
- Flutter examples: removed the `example/flutter_rl_demo/` directory
### Notes
This version represents a significant simplification of the package, focusing on core reinforcement learning algorithms for pure Dart applications. Flutter support and advanced features have been removed to reduce complexity and dependencies.
## 0.1.0-alpha.2 - 2025-11-10

### Added

- Flutter SDK dependency for seamless Flutter integration
- `collection` package (^1.17.0) for enhanced data structures
- `equatable` package (^2.0.5) for value equality in the state and action classes
- `AgentNotifier` class: a `ChangeNotifier` wrapper for Flutter state management integration
- `trainStream` extension method for reactive UI updates with stream-based training
- Decay schedules: `LinearDecaySchedule` and `ExponentialDecaySchedule` for epsilon decay
- `TrainingStats` class for tracking episode-level training metrics
- `AggregatedStats` class for computing statistics across multiple episodes
- `QTableSerializer` for saving and loading trained Q-tables to/from disk
- Comprehensive Flutter integration documentation in the README, with examples
- Complete Flutter demo app in `example/flutter_rl_demo/` with real-time visualization
- Examples for both stream-based training and the `ChangeNotifier` pattern
- Documentation on decay schedules, training statistics, and model persistence
### Features

- Real-time training visualization for Flutter applications
- Stream-based training with reactive UI updates via `trainStream`
- Flutter state management integration through `AgentNotifier` (`ChangeNotifier` pattern)
- Compatible with Provider, Riverpod, and other Flutter state management solutions
- Configurable epsilon decay schedules (linear and exponential)
- Training statistics tracking with episode-level and aggregated metrics
- Q-table serialization for saving and loading trained agents
- Interactive Flutter example demonstrating real-time RL training visualization
- Support for training progress monitoring with episode, reward, steps, and epsilon tracking
- Non-blocking asynchronous training for smooth UI performance
### Changed

- Refactored `State` to `DartRLState` for improved naming consistency
- Refactored `Action` to `DartRLAction` for improved naming consistency
- Refactored `StateAction` to `DartRLStateAction` for improved naming consistency
## 0.1.0-alpha.1 - 2025-11-07

### Added

- Initial alpha release of the dart_rl package
- Q-Learning algorithm implementation (`QLearningAgent`)
- SARSA algorithm implementation (`SarsaAgent`)
- Expected-SARSA algorithm implementation (`ExpectedSarsaAgent`)
- `Environment` interface for creating custom RL environments
- `Agent` base class with an epsilon-greedy exploration strategy
- `State`, `Action`, and `StateAction` classes for representing RL components
- `StepResult` class for environment step results
- Grid World example environment
- Frozen Lake example environment
- Comprehensive unit tests
- Documentation and README with usage examples
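As a rough illustration of the environment idea, a custom environment exposes a reset and a step transition. The interface below is a hedged sketch with assumed method names and a Dart record return type, not dart_rl's actual `Environment`/`StepResult` API:

```dart
// Hedged sketch of a custom RL environment; method names and the record
// return type are assumptions, not dart_rl's actual Environment API.
abstract class SimpleEnv {
  int reset(); // returns the initial state id
  ({int nextState, double reward, bool done}) step(int action);
}

/// A toy corridor: states 0..3, action 0 moves left, action 1 moves
/// right; reaching state 3 ends the episode with reward 1.
class CorridorEnv implements SimpleEnv {
  int _state = 0;

  @override
  int reset() => _state = 0;

  @override
  ({int nextState, double reward, bool done}) step(int action) {
    _state += action == 1 ? 1 : -1;
    if (_state < 0) _state = 0; // walls clip movement to the corridor
    if (_state > 3) _state = 3;
    final done = _state == 3;
    return (nextState: _state, reward: done ? 1.0 : 0.0, done: done);
  }
}

void main() {
  final env = CorridorEnv();
  env.reset();
  // Three steps to the right reach the terminal state.
  env.step(1);
  env.step(1);
  print(env.step(1)); // (done: true, nextState: 3, reward: 1.0)
}
```

An agent's training loop would then alternate between choosing an action, calling `step`, and updating its Q-table until `done` is true.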
### Features
- Support for discrete state and action spaces
- Configurable learning rate (α), discount factor (γ), and epsilon (ε)
- Epsilon-greedy exploration with decay functionality
- Q-table access for inspection and debugging
- Training methods for single episodes and multiple episodes
- Compatible with both Dart and Flutter applications
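The configurable α and γ enter through the standard tabular Q-learning update, Q(s, a) ← Q(s, a) + α·(r + γ·maxₐ′ Q(s′, a′) − Q(s, a)). A minimal sketch of that rule follows; the function name and parameters are illustrative, not the package's API:

```dart
/// Standard tabular Q-learning update; the signature is an illustrative
/// sketch, not dart_rl's actual API.
double qLearningUpdate(
  double q,        // current estimate Q(s, a)
  double reward,   // observed reward r
  double maxNextQ, // max over a' of Q(s', a')
  {double alpha = 0.1,  // learning rate (α)
   double gamma = 0.9}) // discount factor (γ)
{
  return q + alpha * (reward + gamma * maxNextQ - q);
}

void main() {
  // One update from Q = 0 after a reward of 1 on a terminal transition:
  // 0 + 0.1 * (1 + 0.9 * 0 - 0) = 0.1
  print(qLearningUpdate(0.0, 1.0, 0.0)); // 0.1
}
```

SARSA differs only in replacing the max term with Q(s′, a′) for the action actually taken, and Expected-SARSA with the ε-greedy expectation over next actions.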
## 0.1.0 - 2025-11-01

### Added

- Initial release of the dart_rl package
- Q-Learning algorithm implementation (`QLearningAgent`)
- SARSA algorithm implementation (`SarsaAgent`)
- Expected-SARSA algorithm implementation (`ExpectedSarsaAgent`)
- `Environment` interface for creating custom RL environments
- `Agent` base class with an epsilon-greedy exploration strategy
- `State`, `Action`, and `StateAction` classes for representing RL components
- `StepResult` class for environment step results
- Grid World example environment
- Frozen Lake example environment
- Comprehensive unit tests
- Documentation and README with usage examples
### Features
- Support for discrete state and action spaces
- Configurable learning rate (α), discount factor (γ), and epsilon (ε)
- Epsilon-greedy exploration with decay functionality
- Q-table access for inspection and debugging
- Training methods for single episodes and multiple episodes
- Compatible with both Dart and Flutter applications