Llama Core, abbreviated as `llama-core`, defines a set of APIs. Developers can utilize these APIs to build applications based on large models, such as chatbots, RAG, and more.
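Applications typically pull the crate in as a Cargo dependency. A minimal `Cargo.toml` sketch; the version shown is a placeholder, and the optional features listed (`rag`, `search`, `whisper`) are the crate features referenced by the module and function entries below:

```toml
[dependencies]
# Version is a placeholder; check crates.io for the latest release.
llama-core = { version = "*", features = ["rag", "search", "whisper"] }
```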
Re-exports
pub use error::LlamaCoreError;
pub use graph::EngineType;
pub use graph::Graph;
pub use graph::GraphBuilder;
pub use metadata::ggml::GgmlMetadata;
pub use metadata::ggml::GgmlTtsMetadata;
pub use metadata::piper::PiperMetadata;
pub use metadata::BaseMetadata;
Modules
- audio
- chat
- Define APIs for chat completion.
- completions
- Define APIs for completions.
- embeddings
- Define APIs for computing embeddings.
- error
- Error types for the Llama Core library.
- files
- Define APIs for file operations.
- graph
- Define Graph and GraphBuilder APIs for creating a new computation graph.
- images
- Define APIs for image generation and editing.
- metadata
- Define the types for model metadata.
- models
- Define APIs for querying models.
- rag
- Define APIs for RAG operations. Available on crate feature `rag` only.
- search
- Define APIs for web search operations. Available on crate feature `search` only.
- tts
- utils
- Define utility functions.
Structs
- PluginInfo - Version info of the `wasi-nn_ggml` plugin, including the build number and the commit id.
Enums
- StableDiffusionTask - The task type of the stable diffusion context.
Constants
- ARCHIVES_DIR - The directory for storing the archives in the wasm virtual file system.
Functions
- get_plugin_info - Get the plugin info.
- init_ggml_chat_context - Initialize the ggml context for chat completions.
- init_ggml_embeddings_context - Initialize the ggml context for computing embeddings.
- init_ggml_rag_context - Initialize the ggml context for RAG scenarios. Available on crate feature `rag` only.
- init_ggml_tts_context - Initialize the ggml context for TTS scenarios.
- init_piper_context - Initialize the piper context.
- init_sd_context_with_full_model - Initialize the stable-diffusion context with the given full diffusion model.
- init_sd_context_with_standalone_model - Initialize the stable-diffusion context with the given standalone diffusion model.
- init_whisper_context - Initialize the whisper context. Available on crate feature `whisper` only.
- running_mode - Return the current running mode.
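The call order implied by the list above is: construct metadata for the model, initialize a context with it, then query state or serve requests. A rough, unverified sketch of that flow; the exact signatures of `GgmlMetadata`, `init_ggml_chat_context`, and `running_mode` are assumptions and should be checked against the crate's API documentation:

```rust
// Unverified sketch: signatures and field defaults are assumptions,
// not the crate's confirmed API.
use llama_core::{
    init_ggml_chat_context, running_mode, metadata::ggml::GgmlMetadata, LlamaCoreError,
};

fn bootstrap() -> Result<(), LlamaCoreError> {
    // Describe the model(s) to load; whether `Default` is derived is assumed here.
    let metadata = GgmlMetadata::default();

    // Initialize the ggml context for chat completions.
    init_ggml_chat_context(&[metadata])?;

    // Inspect which running mode the core was initialized in.
    let mode = running_mode()?;
    println!("running mode: {mode:?}");

    Ok(())
}
```

The other `init_*` functions follow the same pattern for their respective backends (piper for TTS, whisper for transcription, stable-diffusion for image generation), each taking its corresponding metadata type.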