tools
Unified API for LLM function calling and tool use across OpenAI, Anthropic, Google, and Ollama with standardized schema definitions and execution patterns.
Introduction
The Tools skill provides a unified, provider-agnostic interface for implementing function calling and agentic tool use in LLM applications. Designed to solve the fragmentation of tool-calling standards across different AI providers, it allows developers to define functions once using JSON Schema and execute them seamlessly against models from OpenAI, Anthropic, Google Gemini, and Ollama. This abstraction layer handles the nuances of provider-specific parameters such as 'tool_choice' and translates multi-turn tool execution patterns, significantly reducing the boilerplate required to build robust AI agents. It is intended for software engineers and AI architects who need to create consistent, interoperable toolsets that remain functional regardless of the underlying LLM provider.
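To make the "define once" idea concrete, here is a minimal tool definition in the JSON Schema convention the skill standardizes on. The `get_weather` tool and its fields are invented for illustration; only the name/description/parameters layout comes from the input format described below.

```python
# A provider-agnostic tool definition: one JSON Schema document that can be
# handed to OpenAI, Anthropic, Google Gemini, or Ollama. The tool itself is
# a made-up example, not part of the library.
get_weather_tool = {
    "name": "get_weather",
    "description": "Return the current weather for a given city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {
                "type": "string",
                "description": "City name, e.g. 'Berlin'.",
            },
            "unit": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"],
                "description": "Temperature unit to report in.",
            },
        },
        "required": ["city"],
    },
}
```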
- Standardized JSON Schema format for defining tool functions, descriptions, and parameter properties.
- Unified LLMRequest and LLMResponse structures that encapsulate tool call requests and execution feedback.
- Automatic translation of tool selection strategies, mapping high-level instructions to provider-specific behaviors like 'auto', 'none', or 'required' (see the mapping sketch after this list).
- Built-in support for the multi-turn tool execution pattern, including the 'tool' role for result messages and correlated tool call IDs (sketched after this list).
- Streaming support for tool-calling workflows, allowing real-time interaction with tool-enabled models (see the streaming sketch after this list).
- Simplified management of tool result messages, ensuring valid JSON strings are correctly associated with specific tool call IDs for the LLM.
- The skill expects input as a list of tool definitions, each containing a name, a description, and JSON Schema-compliant parameters.
- Output involves the model returning a structured tool call object, which the host system executes before submitting the results back to the LLM service.
- When using tool selection, provide the model with the correct 'tool_choice' configuration to toggle between forced, specific, and automatic modes.
- Practical constraints include provider-specific token limits and per-model function-calling capabilities, which the skill abstracts where possible but does not fundamentally alter.
- Ideal for building autonomous agents, data retrieval pipelines, and interactive systems that require external knowledge or actions from an LLM.
- Always ensure the SDKs for the desired providers (e.g., openai, anthropic, google-genai, ollama) are installed locally alongside llmring to enable the underlying capabilities.
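The multi-turn pattern from the list above can be sketched as a plain-Python loop. This is a sketch, not the llmring API: the `ToolCall` and `LLMResponse` stubs, the `send` callable, and every field name are assumptions standing in for the library's actual LLMRequest/LLMResponse structures.

```python
import json
from dataclasses import dataclass, field
from typing import Callable

# Stand-in structures for this sketch only; the real llmring
# LLMRequest/LLMResponse types may differ in shape and naming.
@dataclass
class ToolCall:
    id: str          # correlation ID the result message must echo back
    name: str        # which registered tool the model wants to run
    arguments: str   # JSON-encoded argument object

@dataclass
class LLMResponse:
    content: str = ""
    tool_calls: list[ToolCall] = field(default_factory=list)

def run_tool_loop(
    send: Callable[[list[dict], list[dict]], LLMResponse],
    messages: list[dict],
    tools: list[dict],
    handlers: dict[str, Callable],
) -> LLMResponse:
    """Drive the multi-turn pattern: call the model, execute any
    requested tools locally, append 'tool' result messages, repeat."""
    while True:
        response = send(messages, tools)
        if not response.tool_calls:
            return response  # no tool requests left: final answer
        # Keep the assistant turn that requested the calls in the history.
        messages.append({"role": "assistant", "tool_calls": response.tool_calls})
        for call in response.tool_calls:
            result = handlers[call.name](**json.loads(call.arguments))
            # The result must be a valid JSON string tied to the call's ID.
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": json.dumps(result),
            })
```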
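The tool-selection translation can be pictured as a small mapping layer. The strategy names and helper functions below are hypothetical; the provider-side values reflect the publicly documented OpenAI and Anthropic APIs, and llmring's own mapping may differ.

```python
# Hypothetical translation of high-level tool selection strategies
# ("auto", "none", "required", "specific") into provider-native values.
def to_openai_tool_choice(strategy: str, tool_name: str | None = None):
    if strategy == "specific":
        # OpenAI forces a particular function with a structured object.
        return {"type": "function", "function": {"name": tool_name}}
    return strategy  # "auto", "none", and "required" pass through as strings

def to_anthropic_tool_choice(strategy: str, tool_name: str | None = None):
    if strategy == "specific":
        return {"type": "tool", "name": tool_name}
    # Anthropic spells "required" as "any"; "auto" and "none" keep their names.
    return {"type": "any" if strategy == "required" else strategy}
```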
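For streaming workflows, tool call arguments typically arrive as JSON fragments that must be accumulated per call before parsing. The chunk shape below mimics OpenAI-style deltas and is an assumption about what a provider stream yields, not llmring's actual streaming interface.

```python
import json

def collect_tool_calls(stream):
    """Accumulate streamed tool-call deltas into complete calls, then parse
    each call's argument string only after the stream has finished."""
    calls: dict[int, dict] = {}
    for chunk in stream:
        for delta in chunk.get("tool_calls", []):
            slot = calls.setdefault(
                delta["index"], {"id": "", "name": "", "arguments": ""}
            )
            slot["id"] = delta.get("id") or slot["id"]   # ID arrives once
            slot["name"] += delta.get("name", "")        # name may be split
            slot["arguments"] += delta.get("arguments", "")  # JSON fragments
    return [
        {"id": c["id"], "name": c["name"], "arguments": json.loads(c["arguments"])}
        for c in calls.values()
    ]
```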
Repository Stats
- Stars: 3
- Forks: 0
- Open Issues: 0
- Language: Python
- Default Branch: main
- Sync Status: Idle
- Last Synced: May 4, 2026, 12:48 AM