Function Repository Resource:
OllamaSynthesize
Interact with local AI/LLM models via an Ollama server
ResourceFunction["OllamaSynthesize"][prompt] generates an AI model response for the given prompt.
ResourceFunction["OllamaSynthesize"][prompt,images] generates an AI model response for the given prompt and list of images.
ResourceFunction["OllamaSynthesize"][list] generates an AI model response for the prompts and images in the list.
Details and Options
Examples
Basic Examples (5) 
Try a basic question:
In[1]:= |
Out[1]= |
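The contents of the original input and output cells were not preserved, so here is a minimal sketch of the basic call; the prompt is illustrative, not taken from the original example:

```wl
(* send a plain text prompt to the local Ollama server *)
ResourceFunction["OllamaSynthesize"]["Why is the sky blue?"]
```

The result is the model's text response, produced by the default model.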
Ask a question about an image:
In[2]:= |
Out[2]= |
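Again the original cells were lost; a sketch of the image-question form, using a built-in Wolfram Language test image in place of the original example's image. Per the usage above, the images are passed as a list in the second argument:

```wl
(* a built-in test image stands in for the original example's image *)
img = ExampleData[{"TestImage", "House"}];
ResourceFunction["OllamaSynthesize"]["What is shown in this image?", {img}]
```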
Ask another question about a different image:
In[3]:= |
Out[3]= |
A similar question with a different vision-enabled model:
In[4]:= |
Out[4]= |
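A sketch of what this might look like, assuming the "OllamaModel" option documented below can be given after the other arguments, and assuming "bakllava" (one of the vision-capable models in the Ollama library) has been downloaded locally; both the model choice and the option placement are assumptions, not from the original example:

```wl
(* ask a vision question using an explicitly chosen vision model *)
img = ExampleData[{"TestImage", "Sailboat"}];
ResourceFunction["OllamaSynthesize"]["Describe this picture.", {img},
  "OllamaModel" -> "bakllava"]  (* assumes this model was pulled locally *)
```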
Mix the question and image(s) in a single list:
In[5]:= |
Out[5]= |
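A sketch of the single-list form named in the usage above, where the prompt and the image(s) appear together in one list; the specific prompt and image are illustrative:

```wl
(* prompt and image mixed in a single list argument *)
img = ExampleData[{"TestImage", "House"}];
ResourceFunction["OllamaSynthesize"][{"What color is the roof in this image?", img}]
```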
Options (2) 
The default model is "Llava". Use the "OllamaModel" option to specify another one:
In[6]:= |
Out[6]= |
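A sketch of overriding the default model with the "OllamaModel" option; "mistral" is an illustrative choice and, as noted below, must already have been downloaded locally (e.g. with `ollama pull mistral`):

```wl
(* use a different locally downloaded model instead of the default *)
ResourceFunction["OllamaSynthesize"]["Write a haiku about lakes.",
  "OllamaModel" -> "mistral"]
```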
If you specify a model that does not exist, or one that has not been downloaded locally, an error is raised:
In[7]:= |
Out[7]= |
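A sketch of triggering that error with a made-up model name; the exact form of the error (message, Failure object, or otherwise) is not preserved in the original, so no output is shown:

```wl
(* "no-such-model" is deliberately not a real Ollama model *)
ResourceFunction["OllamaSynthesize"]["Hello", "OllamaModel" -> "no-such-model"]
```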
Possible Issues (1) 
You are responsible for installing Ollama on your machine and for downloading the models you wish to use. To list the available models, you can run the following (undocumented) command:
In[8]:= |
Out[8]= |
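The original undocumented command was not preserved. Independently of it, the Ollama server itself exposes the locally downloaded models through its REST API at the `/api/tags` endpoint, which can be queried directly (this assumes the server is running on its default port, 11434):

```wl
(* ask the local Ollama server which models are downloaded *)
models = Import["http://localhost:11434/api/tags", "RawJSON"];
Lookup[models["models"], "name"]
```

Equivalently, running `ollama list` at a shell prompt shows the same information.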