Query
This module provides a command line interface for querying a language model (LLM), optionally combined with a Faiss index of embeddings containing knowledge. It uses the Llama library to interact with the language model and the Faiss library to create and search the index.
Note
If an `index_path` is provided, the function loads the Faiss index and searches it for the embeddings closest to the question embedding. It then prompts the model with a message that includes the context of the closest embedding and the original question.
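The nearest-neighbour lookup at the heart of this step can be sketched in plain NumPy (a stand-in for what a Faiss `IndexFlatL2` search does at scale; the array names here are illustrative, not the module's actual variables):

```python
import numpy as np

def closest_embedding(question_emb: np.ndarray, knowledge_embs: np.ndarray) -> int:
    """Return the row index of the knowledge embedding nearest (L2) to the question."""
    dists = np.linalg.norm(knowledge_embs - question_emb, axis=1)
    return int(np.argmin(dists))

# Toy knowledge base of three 2-d embeddings.
kb = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
q = np.array([0.9, 0.1])
idx = closest_embedding(q, kb)  # → 1, the embedding [1.0, 0.0]
```

The context associated with the returned index is then passed to `prompt_with_context()` along with the original question.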
prompt_with_context()
Returns a prompt to request relevant text to answer a given question.
Returns:

Name | Type | Description |
---|---|---|
`str` | `str` | A string prompt that includes the given context and question, and asks for relevant text. |
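A minimal sketch of such a prompt template (illustrative wording only; the actual string lives in src/query.py):

```python
def prompt_with_context(context: str, question: str) -> str:
    """Build a prompt that asks for the text relevant to answering the question."""
    return (
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Using only the context above, give the text relevant to answering the question."
    )

prompt = prompt_with_context(
    "Faiss is a library for efficient similarity search.",
    "What is Faiss?",
)
```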
Source code in src/query.py
`query(question=typer.Argument(Ellipsis, help='Question to answer.'), model_path=typer.Argument(Ellipsis, help='Folder containing the model.'), index_path=typer.Argument(None, help='Folder containing the vector store with the embeddings. If none provided, only LLM is used.'))`
Ask a question to a Language Model (LLM) using an index of embeddings containing the knowledge. If no `index_path` is specified, only the LLM is used to answer the question. Otherwise, the embeddings in `index_path` are used to find relevant text before prompting for an answer.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`question` | `str` | The question to answer. | `typer.Argument(Ellipsis, help='Question to answer.')` |
`model_path` | `str` | The folder containing the LLM model. | `typer.Argument(Ellipsis, help='Folder containing the model.')` |
`index_path` | `Optional[Path]` | The folder containing the vector store with the embeddings. If none provided, only the LLM is used. | `typer.Argument(None, help='Folder containing the vector store with the embeddings. If none provided, only LLM is used.')` |
Returns:

Name | Type | Description |
---|---|---|
`None` | `None` | The response will be printed to the console. |
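The two code paths described above (LLM-only versus retrieval-augmented) can be sketched as follows; `answer`, `llm`, and `index_search` are illustrative stand-ins, not the module's real objects:

```python
def answer(question: str, llm, index_search=None) -> str:
    """With an index, retrieve context and build a prompt; otherwise ask the LLM directly."""
    if index_search is None:
        return llm(question)              # no index_path: LLM only
    context = index_search(question)      # context of the closest embedding
    prompt = f"Context: {context}\nQuestion: {question}"
    return llm(prompt)

echo = lambda p: p  # stub LLM that returns its prompt, so the example runs anywhere
direct = answer("What is Faiss?", echo)
augmented = answer("What is Faiss?", echo, lambda q: "Faiss docs excerpt")
```

In the real module the response is printed to the console rather than returned.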