Provider Helpers
Generated from yosoi v0.0.1a11. Only symbols in __all__ are listed.
alibaba
alibaba(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for Alibaba Cloud DashScope.
Args:
model_name (str) — DashScope model identifier (e.g. 'qwen-plus', 'qwen-max')
api_key (str | None) — DashScope API key. If omitted, reads from DASHSCOPE_API_KEY or ALIBABA_API_KEY.
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for Alibaba DashScope.
anthropic
anthropic(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for Anthropic (Claude).
Args:
model_name (str) — Model identifier (e.g. 'claude-opus-4-5', 'claude-sonnet-4-6')
api_key (str | None) — Anthropic API key. If omitted, reads from ANTHROPIC_API_KEY.
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for Anthropic.
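The quick-config helpers all follow the same call shape; a minimal sketch using anthropic (assuming yosoi is imported as ys, as in the provider examples in this reference, and that LLMConfig accepts fields such as temperature through **kwargs):

```python
import yosoi as ys

# Reads ANTHROPIC_API_KEY from the environment when api_key is omitted.
config = ys.anthropic('claude-sonnet-4-6')

# Or pass the key and extra LLMConfig fields explicitly.
config = ys.anthropic(
    'claude-sonnet-4-6',
    api_key='sk-ant-...',  # placeholder key for illustration
    temperature=0.2,       # assumed LLMConfig field
)
```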
azure
azure(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for Azure OpenAI.
Supply azure_endpoint and optionally api_version via extra_params.
Args:
model_name (str) — Azure deployment name (e.g. 'gpt-4o')
api_key (str | None) — Azure OpenAI API key. If omitted, reads from AZURE_OPENAI_API_KEY.
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for Azure OpenAI.
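A sketch of the extra_params pattern described above; the endpoint URL and api_version value are placeholders, and extra_params is assumed to be a plain dict field on LLMConfig:

```python
import yosoi as ys

config = ys.azure(
    'gpt-4o',  # Azure deployment name, not a raw model ID
    extra_params={
        'azure_endpoint': 'https://my-resource.openai.azure.com',  # placeholder
        'api_version': '2024-06-01',  # example value
    },
)
```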
bedrock
bedrock(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for AWS Bedrock.
api_key maps to aws_access_key_id. Supply aws_secret_access_key and region_name via extra_params, or let boto3 resolve credentials from the environment.
Args:
model_name (str) — Bedrock model ARN or ID (e.g. 'anthropic.claude-3-5-sonnet-20241022-v2:0')
api_key (str | None) — AWS access key ID. If omitted, reads from AWS_ACCESS_KEY_ID.
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for AWS Bedrock.
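A sketch of the two credential paths described above; the extra_params keys are the ones named in this entry, and the credential values are placeholders:

```python
import yosoi as ys

# Explicit credentials: api_key maps to aws_access_key_id,
# the rest goes through extra_params.
config = ys.bedrock(
    'anthropic.claude-3-5-sonnet-20241022-v2:0',
    api_key='AKIA...',  # placeholder access key ID
    extra_params={
        'aws_secret_access_key': '...',  # placeholder
        'region_name': 'us-east-1',
    },
)

# Or omit credentials entirely and let boto3 resolve them
# from the environment / shared config files.
config = ys.bedrock('anthropic.claude-3-5-sonnet-20241022-v2:0')
```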
cerebras
cerebras(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for Cerebras.
Args:
model_name (str) — Cerebras model identifier (e.g. 'llama-3.3-70b')
api_key (str | None) — Cerebras API key. If omitted, reads from CEREBRAS_API_KEY or CEREBRAS_KEY.
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for Cerebras.
deepseek
deepseek(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for DeepSeek.
Args:
model_name (str) — DeepSeek model identifier (e.g. 'deepseek-chat', 'deepseek-reasoner')
api_key (str | None) — DeepSeek API key. If omitted, reads from DEEPSEEK_API_KEY.
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for DeepSeek.
fireworks
fireworks(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for Fireworks AI.
Args:
model_name (str) — Fireworks model identifier (e.g. 'accounts/fireworks/models/llama-v3p3-70b-instruct')
api_key (str | None) — Fireworks API key. If omitted, reads from FIREWORKS_API_KEY.
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for Fireworks.
gemini
gemini(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for Gemini (Google).
Args:
model_name (str) — Gemini model identifier (e.g. 'gemini-2.0-flash')
api_key (str | None) — Google API key. If omitted, reads from GEMINI_API_KEY, GEMINI_KEY, or GOOGLE_API_KEY.
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for Gemini.
github
github(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for GitHub Models.
Args:
model_name (str) — GitHub Models identifier (e.g. 'gpt-4o', 'Llama-3.3-70B-Instruct')
api_key (str | None) — GitHub token. If omitted, reads from GITHUB_TOKEN.
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for GitHub Models.
grok
grok(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for Grok via xAI's OpenAI-compatible endpoint.
Args:
model_name (str) — Grok model identifier (e.g. 'grok-3', 'grok-3-mini')
api_key (str | None) — xAI API key. If omitted, reads from XAI_API_KEY or GROK_API_KEY.
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for Grok.
groq
groq(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for Groq.
Args:
model_name (str) — Groq model identifier (e.g. 'llama-3.3-70b-versatile')
api_key (str | None) — Groq API key. If omitted, reads from GROQ_API_KEY or GROQ_KEY.
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for Groq.
heroku
heroku(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for Heroku Managed Inference.
Args:
model_name (str) — Heroku model identifier (e.g. 'claude-3-5-sonnet')
api_key (str | None) — Heroku inference key. If omitted, reads from HEROKU_INFERENCE_KEY.
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for Heroku.
huggingface
huggingface(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for HuggingFace Inference API.
Args:
model_name (str) — HuggingFace model ID (e.g. 'Qwen/Qwen3-235B-A22B')
api_key (str | None) — HF token. If omitted, reads from HF_TOKEN or HUGGINGFACE_API_KEY.
**kwargs (Any) — Additional LLMConfig fields (e.g. extra_params={'provider_name': 'nebius'}).
Returns: LLMConfig — Configured LLMConfig for HuggingFace.
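A sketch of routing HF inference through a specific backing provider, using the extra_params example given in this entry:

```python
import yosoi as ys

# Reads HF_TOKEN (or HUGGINGFACE_API_KEY) from the environment;
# provider_name selects the inference provider behind the HF API.
config = ys.huggingface(
    'Qwen/Qwen3-235B-A22B',
    extra_params={'provider_name': 'nebius'},
)
```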
litellm
litellm(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for LiteLLM proxy.
Supply api_base via extra_params to point at your LiteLLM proxy endpoint.
Args:
model_name (str) — Model identifier passed through to LiteLLM
api_key (str | None) — API key for the proxied provider. If omitted, reads from LITELLM_API_KEY.
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for LiteLLM.
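A sketch of pointing at a self-hosted proxy via the api_base key named above; the URL and model name are placeholders:

```python
import yosoi as ys

# Model identifier is passed through to the proxy unchanged.
config = ys.litellm(
    'gpt-4o',
    extra_params={'api_base': 'http://localhost:4000'},  # your proxy endpoint
)
```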
mistral
mistral(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for Mistral.
Args:
model_name (str) — Mistral model identifier (e.g. 'mistral-large-latest')
api_key (str | None) — Mistral API key. If omitted, reads from MISTRAL_API_KEY.
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for Mistral.
moonshotai
moonshotai(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for MoonshotAI (Kimi).
Args:
model_name (str) — Moonshot model identifier (e.g. 'kimi-k2-0711-preview')
api_key (str | None) — Moonshot API key. If omitted, reads from MOONSHOT_API_KEY.
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for MoonshotAI.
nebius
nebius(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for Nebius AI Studio.
Args:
model_name (str) — Nebius model identifier (e.g. 'Qwen/Qwen3-235B-A22B-fast')
api_key (str | None) — Nebius API key. If omitted, reads from NEBIUS_API_KEY.
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for Nebius.
ollama
ollama(model_name: str, **kwargs: Any) -> LLMConfig
Quick config for Ollama (local).
No API key required. Supply base_url via extra_params to override the default http://localhost:11434.
Args:
model_name (str) — Ollama model tag (e.g. 'llama3', 'mistral', 'qwen2.5')
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for Ollama.
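A sketch of local and remote usage; the remote hostname is a placeholder, and base_url is the override key named in this entry:

```python
import yosoi as ys

# Local daemon on the default http://localhost:11434; no API key.
config = ys.ollama('llama3')

# Point at an Ollama server on another machine instead.
config = ys.ollama(
    'qwen2.5',
    extra_params={'base_url': 'http://gpu-box:11434'},  # placeholder host
)
```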
openai
openai(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for OpenAI.
Args:
model_name (str) — OpenAI model identifier (e.g. 'gpt-4o', 'gpt-4o-mini')
api_key (str | None) — OpenAI API key. If omitted, reads from OPENAI_API_KEY or OPENAI_KEY.
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for OpenAI.
openrouter
openrouter(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for OpenRouter.
Args:
model_name (str) — OpenRouter model identifier (e.g. 'meta-llama/llama-3.3-70b-instruct:free')
api_key (str | None) — OpenRouter API key. If omitted, reads from OPENROUTER_API_KEY or OPENROUTER_KEY.
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for OpenRouter.
ovhcloud
ovhcloud(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for OVHcloud AI Endpoints.
Args:
model_name (str) — OVHcloud model identifier
api_key (str | None) — OVH access token. If omitted, reads from OVH_AI_ENDPOINTS_ACCESS_TOKEN.
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for OVHcloud.
provider
provider(model_string: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Create an LLM config from a single model string.
This is the recommended, unified way to configure a model. The provider is parsed from the model string automatically.
Preferred format uses : as the separator:

import yosoi as ys

config = ys.provider('groq:llama-3.3-70b-versatile')
config = ys.provider('openrouter:meta-llama/llama-3.3-70b-instruct:free')
config = ys.provider('gemini:gemini-2.0-flash')
config = ys.provider('anthropic:claude-opus-4-5')
config = ys.provider('deepseek:deepseek-chat')
config = ys.provider('ollama:llama3')

The provider/model format is also supported for known providers:

config = ys.provider('groq/llama-3.3-70b-versatile')

Args:
model_string (str) — Model identifier in provider:model-name format.
api_key (str | None) — Explicit API key. If omitted, resolved from the environment.
**kwargs (Any) — Additional LLMConfig fields (temperature, max_tokens, etc.)
Returns: LLMConfig — Configured LLMConfig instance.
Raises:
ValueError — If the provider cannot be determined.
sambanova
sambanova(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for SambaNova.
Args:
model_name (str) — SambaNova model identifier (e.g. 'Meta-Llama-3.3-70B-Instruct')
api_key (str | None) — SambaNova API key. If omitted, reads from SAMBANOVA_API_KEY.
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for SambaNova.
together
together(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for Together AI.
Args:
model_name (str) — Together model identifier (e.g. 'meta-llama/Llama-3-70b-chat-hf')
api_key (str | None) — Together API key. If omitted, reads from TOGETHER_API_KEY.
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for Together AI.
vercel
vercel(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for Vercel AI.
Args:
model_name (str) — Vercel AI model identifier
api_key (str | None) — Vercel API key. If omitted, reads from AI_SDK_KEY or VERCEL_API_KEY.
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for Vercel AI.
vertexai
vertexai(model_name: str, **kwargs: Any) -> LLMConfig
Quick config for Google Vertex AI.
No API key required — uses GCP application default credentials or a service account file supplied via extra_params.
Args:
model_name (str) — Vertex AI model ID (e.g. 'gemini-2.0-flash-001')
**kwargs (Any) — Additional LLMConfig fields (e.g. extra_params={'project_id': '…', 'region': 'us-east1'}).
Returns: LLMConfig — Configured LLMConfig for Google Vertex AI.
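A sketch using the extra_params keys shown in this entry; the project ID is a placeholder and credentials come from GCP application default credentials:

```python
import yosoi as ys

# No api_key argument: auth is resolved from application default
# credentials (or a service account file via extra_params).
config = ys.vertexai(
    'gemini-2.0-flash-001',
    extra_params={'project_id': 'my-gcp-project', 'region': 'us-east1'},
)
```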
xai
xai(model_name: str, api_key: str | None = None, **kwargs: Any) -> LLMConfig
Quick config for xAI (Grok models via the native xAI client).
Args:
model_name (str) — xAI model identifier (e.g. 'grok-3', 'grok-3-mini')
api_key (str | None) — xAI API key. If omitted, reads from XAI_API_KEY.
**kwargs (Any) — Additional LLMConfig fields.
Returns: LLMConfig — Configured LLMConfig for xAI.