Function Repository Resource:
Ollama Synthesize
Interact with local AI/LLM models via an Ollama server
ResourceFunction["OllamaSynthesize"][prompt] generates a AI model response for the given prompt. | |
ResourceFunction["OllamaSynthesize"][prompt,images] generates a AI model response for the given prompt and list of images. | |
ResourceFunction["OllamaSynthesize"][list] generates a AI model response for the prompts and images in the list. |
Details and Options
Examples
Basic Examples (5) 
Try a basic question:
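The original input cell is not preserved; a minimal sketch, with an illustrative prompt:

In[1]:= ResourceFunction["OllamaSynthesize"]["Why is the sky blue?"]
(* returns the model's answer as a String *)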
Ask a question about an image:
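A sketch using a stand-in image from ExampleData; the prompt is illustrative:

In[2]:= ResourceFunction["OllamaSynthesize"]["What do you see in this image?", {ExampleData[{"TestImage", "House"}]}]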
Ask another question about a different image:
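Again a sketch, with a different stand-in image:

In[3]:= ResourceFunction["OllamaSynthesize"]["How many peppers are in this picture?", {ExampleData[{"TestImage", "Peppers"}]}]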
A similar question with a different vision-enabled model:
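A sketch; the model name "bakllava" is an assumption (any locally installed vision-capable Ollama model would do), and the option is assumed to be accepted in the usual trailing position:

In[4]:= ResourceFunction["OllamaSynthesize"]["How many peppers are in this picture?", {ExampleData[{"TestImage", "Peppers"}]}, "OllamaModel" -> "bakllava"]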
Mix the question and image(s) in a single list:
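A sketch of the single-list form, mixing a prompt string with images:

In[5]:= ResourceFunction["OllamaSynthesize"][{"What do these two images have in common?", ExampleData[{"TestImage", "House"}], ExampleData[{"TestImage", "Peppers"}]}]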
Scope (1) 
Solve basic math problems with step-by-step instructions:
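A sketch with an illustrative problem:

In[6]:= ResourceFunction["OllamaSynthesize"]["Solve 2x + 6 = 16 for x, explaining each step."]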
Options (3) 
The default model is "Llava". Use the "OllamaModel" option to specify another one:
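A sketch; "mistral" stands in for any model you have downloaded locally:

In[7]:= ResourceFunction["OllamaSynthesize"]["Why is the sky blue?", "OllamaModel" -> "mistral"]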
Larger models work too, but they are slower, and your machine needs sufficient GPU memory to run them:
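A sketch; the 70-billion-parameter tag is an assumption about which larger model was used:

In[8]:= ResourceFunction["OllamaSynthesize"]["Why is the sky blue?", "OllamaModel" -> "llama2:70b"]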
If you specify a model that does not exist or that you have not downloaded locally, an error is raised:
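A sketch; the exact form of the error is not preserved in the source:

In[9]:= ResourceFunction["OllamaSynthesize"]["Hello", "OllamaModel" -> "not-a-real-model"]
(* fails, since the Ollama server cannot find the requested model *)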
Possible Issues (3) 
Repeated calls to OllamaSynthesize give slightly randomized results, and sometimes these results can be wildly incorrect:
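The original word problem is not preserved; a hypothetical one that fits the Solve follow-up below:

In[10]:= ResourceFunction["OllamaSynthesize"]["One pump fills a pool in 5 hours and another fills it in 3 hours. How long do the two pumps take together?"]
(* the answer varies between calls and may be arithmetically wrong *)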
You can use Solve to do this correctly. The result is given in hours:
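For the hypothetical pump problem above, with t the joint time in hours:

In[11]:= Solve[t/5 + t/3 == 1, t]
Out[11]= {{t -> 15/8}}

That is, 15/8 hours, or 1.875 hours.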
You are responsible for installing Ollama on your machine and for downloading the models you wish to use. The original example used an undocumented command to list the available models:
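That command is not preserved here; as a substitute, the Ollama CLI command ollama list can be invoked through RunProcess:

In[12]:= RunProcess[{"ollama", "list"}, "StandardOutput"]
(* returns a String listing the names, sizes and modification times of locally installed models *)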