Crate llama_core


Llama Core, abbreviated as llama-core, defines a set of APIs that developers can use to build applications based on large language models, such as chatbots, RAG applications, and more.

Re-exports§

pub use error::LlamaCoreError;
pub use graph::EngineType;
pub use graph::Graph;
pub use graph::GraphBuilder;
pub use metadata::ggml::GgmlMetadata;
pub use metadata::ggml::GgmlTtsMetadata;
pub use metadata::piper::PiperMetadata;
pub use metadata::BaseMetadata;

Modules§

audio
chat
Define APIs for chat completion.
completions
Define APIs for completions.
embeddings
Define APIs for computing embeddings.
error
Error types for the Llama Core library.
files
Define APIs for file operations.
graph
Define Graph and GraphBuilder APIs for creating a new computation graph.
images
Define APIs for image generation and editing.
metadata
Define the types for model metadata.
models
Define APIs for querying models.
rag
Define APIs for RAG operations.
search
Define APIs for web search operations.
tts
utils
Define utility functions.

Structs§

PluginInfo
Version info of the wasi-nn_ggml plugin, including the build number and the commit id.

Enums§

StableDiffusionTask
The task type of the stable diffusion context.

Constants§

ARCHIVES_DIR
The directory for storing the archives in wasm virtual file system.

Functions§

get_plugin_info
Get the plugin info.
init_ggml_chat_context
Initialize the ggml context for chat completions.
init_ggml_embeddings_context
Initialize the ggml context for computing embeddings.
init_ggml_rag_context
Initialize the ggml context for RAG scenarios.
init_ggml_tts_context
Initialize the ggml context for TTS scenarios.
init_piper_context
Initialize the piper context.
init_sd_context_with_full_model
Initialize the stable-diffusion context with the given full diffusion model.
init_sd_context_with_standalone_model
Initialize the stable-diffusion context with the given standalone diffusion model.
init_whisper_context
Initialize the whisper context.
running_mode
Return the current running mode.