There is a set of Raku modules that leverage LLMs for different tasks (mostly code generation) using different techniques:

- https://raku.land/zef:antononcube/LLM::Resources : Uses agentic LLM-graphs with asynchronous execution

- https://raku.land/zef:antononcube/ML::FindTextualAnswer : Finds answers to questions over provided texts (e.g. natural language code generation commands)

- https://raku.land/zef:antononcube/ML::NLPTemplateEngine : Fills in predefined code templates based on natural language code descriptions/commands

- https://raku.land/zef:antononcube/DSL::Examples : Example translations of natural language commands to executable code
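A minimal sketch of the kind of call ML::FindTextualAnswer supports, assuming the exported `find-textual-answer` sub and an API key configured for the backing LLM service; the sample text and questions are made up for illustration:

```raku
use ML::FindTextualAnswer;

# Sample text to interrogate (made up for illustration).
my $text = q:to/END/;
    Use the line-plot function to draw the curve.
    The x values should range from 0 to 2 * pi.
    END

# The questions are answered over the provided text by an LLM backend,
# so credentials for the configured service must be available.
my @questions = 'Which function draws the curve?',
                'What is the range of the x values?';

say find-textual-answer($text, @questions);
```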

apogee 8 hours ago | parent | on: 47766725
I've got a few LLM modules too, mostly for handling context management:

- https://raku.land/zef:apogee/LLM::Character implements CCv3, a standard for managing characters (system prompts) and lorebooks (injected snippets)

- https://raku.land/zef:apogee/LLM::Chat handles context shifting for long contexts, sampler settings, and templating for text completion; inference runs with or without streaming using supply/tap

- https://raku.land/zef:apogee/LLM::Data::Inference adds retries, JSON parsing & multi-model route handling to LLM::Chat

- https://raku.land/zef:apogee/LLM::Data::Pipeline allows you to declaratively build multi-step pipelines (simple agentic LLM use)

- https://raku.land/zef:apogee/HuggingFace::API is a partial wrapper around the HF API for grabbing tokenizer.json & tokenizer_config.json

- https://raku.land/zef:apogee/Template::Jinja2 is a near-complete implementation of Jinja2 for parsing LLM text completion templates (usable for anything you'd use Jinja2 for)

- https://raku.land/zef:apogee/Tokenizers is a thin wrapper around HF tokenizers, mostly for token counting
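Since Hugging Face ships chat templates as Jinja2 snippets inside tokenizer_config.json, the last three modules compose naturally. A hypothetical sketch: the `render-template` sub below is an assumption for illustration, not Template::Jinja2's documented API, and the template is a typical shape rather than one from a specific model:

```raku
use Template::Jinja2;

# A typical chat-template shape (illustrative, not from any real model).
my $template = q:to/END/;
    {% for message in messages %}<|{{ message['role'] }}|>
    {{ message['content'] }}
    {% endfor %}<|assistant|>
    END

# Hypothetical call: the module's actual exported sub may differ.
say render-template($template,
    messages => [ { role => 'user', content => 'Hello!' }, ]);
```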