In[]:=
<<"maTHEMEatica`";SetColors["Solarized"];CreateStyleSheet[];ApplyStyleSheet[];SaveStyleSheet[];
»
Creating StyleSheet based on the colorscheme: Solarized
Stream 4 - 08-03-2024
Started out this stream trying to implement a branch-and-bound algorithm to solve the knapsack problem (making heavy use of Mathematica’s LinearOptimization).
Had a TON of success with that. Woohoo.
Ended the stream with a nice animation of a pendulum, because why not?
Work:
First, let’s generate some fake data:
ClearAll[generateKnapsackProblem];
generateKnapsackProblem[n_: 100, krange_: {3, 10000}, wrange_: {3, 10000}, vrange_: {1, 10000}] :=
  Module[{k, weights, values},
    k = RandomInteger[krange];
    weights = RandomInteger[wrange, n];
    values = RandomInteger[vrange, n];
    {n, k, values, weights}]
Let’s generate a good example problem:
In[]:=
{n,k,v,w}=generateKnapsackProblem[4,{100,100},{1,100},{1,100}]
Out[]=
{4,100,{38,86,80,23},{7,75,23,75}}
In[]:=
{n,k,v,w}={4,100,{38,86,80,23},{7,75,23,75}};
We want to maximize the value, but it’s more convenient to minimize the negative of the value:
In[]:=
objectivefun=-Sum[v[[i]]x[i],{i,1,n}]
Out[]=
-38x[1]-86x[2]-80x[3]-23x[4]
The x[i] are all to be taken from the set {0, 1}. We also have the knapsack “maximum weight” constraint:
In[]:=
Sum[w[[i]]x[i],{i,1,n}]<=k
Out[]=
7x[1]+75x[2]+23x[3]+75x[4]≤100
Let’s see what happens when we plug this into an optimization:
In[]:=
objectivefun = -Sum[v[[i]] x[i], {i, 1, n}];
constraints = Join[
   Flatten@Table[{x[i] >= 0, x[i] <= 1}, {i, 1, n}],
   {Sum[w[[i]] x[i], {i, 1, n}] <= k}];
vars = Table[x[i], {i, 1, n}];
iteration1 = LinearOptimization[objectivefun, constraints, vars]
Out[]=
{x[1] -> 1, x[2] -> 14/15, x[3] -> 1, x[4] -> 0}
This would be perfect except for x[2], which is not an integer. However, we’ve proved that the (integer) objective function is greater than or equal to this relaxed value:
In[]:=
(objectivefun/.iteration1)//N
Out[]=
-198.267
So we’ve proved that the objective has value greater than or equal to -198.267. We can find upper bounds by guessing and checking:
In[]:=
objectivefun/.{{x[1]->1,x[2]->1,x[3]->1,x[4]->0},{x[1]->1,x[2]->0,x[3]->1,x[4]->0}}
Out[]=
{-204,-118}
This -204 value doesn’t contradict our bound, because that assignment violates the “maximum weight” constraint:
In[]:=
constraints[[-1]]/.{x[1]->1,x[2]->1,x[3]->1,x[4]->0}
Out[]=
False
We choose to branch on x[2], so now we have to solve two versions of the problem:
In[]:=
objectivefun = -Sum[v[[i]] x[i], {i, 1, n}];
constraints = Join[
   Flatten@Table[{x[i] >= 0, x[i] <= 1}, {i, 1, n}],
   {Sum[w[[i]] x[i], {i, 1, n}] <= k}];
vars = Table[x[i], {i, 1, n}];
constraints = Append[constraints, x[2] == 0];
myLinearMinimize[objectivefun_, constraints_, vars_] :=
  {objectivefun /. #, #} &[LinearOptimization[objectivefun, constraints, vars]];
iteration2 = myLinearMinimize[objectivefun, constraints, vars]
Out[]=
{-2092/15, {x[1] -> 1, x[2] -> 0, x[3] -> 1, x[4] -> 14/15}}
In[]:=
-2092/15 // N
Out[]=
-139.467
In[]:=
objectivefun = -Sum[v[[i]] x[i], {i, 1, n}];
constraints = Join[
   Flatten@Table[{x[i] >= 0, x[i] <= 1}, {i, 1, n}],
   {Sum[w[[i]] x[i], {i, 1, n}] <= k}];
vars = Table[x[i], {i, 1, n}];
constraints = Join[constraints, {x[2] == 0, x[4] == 0}];
myLinearMinimize[objectivefun_, constraints_, vars_] :=
  {objectivefun /. #, #} &[LinearOptimization[objectivefun, constraints, vars]];
iteration2 = myLinearMinimize[objectivefun, constraints, vars]
Out[]=
{-118, {x[1] -> 1, x[2] -> 0, x[3] -> 1, x[4] -> 0}}
In[]:=
objectivefun = -Sum[v[[i]] x[i], {i, 1, n}];
constraints = Join[
   Flatten@Table[{x[i] >= 0, x[i] <= 1}, {i, 1, n}],
   {Sum[w[[i]] x[i], {i, 1, n}] <= k}];
vars = Table[x[i], {i, 1, n}];
constraints = Join[constraints, {x[2] == 0, x[4] == 1}];
myLinearMinimize[objectivefun_, constraints_, vars_] :=
  {objectivefun /. #, #} &[LinearOptimization[objectivefun, constraints, vars]];
iteration2 = myLinearMinimize[objectivefun, constraints, vars]
Out[]=
{-2843/23, {x[1] -> 1, x[2] -> 0, x[3] -> 18/23, x[4] -> 1}}
In[]:=
objectivefun = -Sum[v[[i]] x[i], {i, 1, n}];
constraints = Join[
   Flatten@Table[{x[i] >= 0, x[i] <= 1}, {i, 1, n}],
   {Sum[w[[i]] x[i], {i, 1, n}] <= k}];
vars = Table[x[i], {i, 1, n}];
constraints = Join[constraints, {x[2] == 0, x[4] == 1, x[3] == 0}];
myLinearMinimize[objectivefun_, constraints_, vars_] :=
  {objectivefun /. #, #} &[LinearOptimization[objectivefun, constraints, vars]];
iteration2 = myLinearMinimize[objectivefun, constraints, vars]
Out[]=
{-61, {x[1] -> 1, x[2] -> 0, x[3] -> 0, x[4] -> 1}}
In[]:=
objectivefun = -Sum[v[[i]] x[i], {i, 1, n}];
constraints = Join[
   Flatten@Table[{x[i] >= 0, x[i] <= 1}, {i, 1, n}],
   {Sum[w[[i]] x[i], {i, 1, n}] <= k}];
vars = Table[x[i], {i, 1, n}];
constraints = Join[constraints, {x[2] == 0, x[4] == 1, x[3] == 1}];
myLinearMinimize[objectivefun_, constraints_, vars_] :=
  {objectivefun /. #, #} &[LinearOptimization[objectivefun, constraints, vars]];
iteration2 = myLinearMinimize[objectivefun, constraints, vars]
Out[]=
{-797/7, {x[1] -> 2/7, x[2] -> 0, x[3] -> 1, x[4] -> 1}}
In[]:=
-797/7 // N
Out[]=
-113.857
In[]:=
objectivefun = -Sum[v[[i]] x[i], {i, 1, n}];
constraints = Join[
   Flatten@Table[{x[i] >= 0, x[i] <= 1}, {i, 1, n}],
   {Sum[w[[i]] x[i], {i, 1, n}] <= k}];
vars = Table[x[i], {i, 1, n}];
constraints = Join[constraints, {x[2] == 1}];
myLinearMinimize[objectivefun_, constraints_, vars_] :=
  {objectivefun /. #, #} &[LinearOptimization[objectivefun, constraints, vars]];
iteration2 = myLinearMinimize[objectivefun, constraints, vars]
Out[]=
{-4292/23, {x[1] -> 1, x[2] -> 1, x[3] -> 18/23, x[4] -> 0}}
In[]:=
-4292/23 // N
Out[]=
-186.609
In[]:=
objectivefun = -Sum[v[[i]] x[i], {i, 1, n}];
constraints = Join[
   Flatten@Table[{x[i] >= 0, x[i] <= 1}, {i, 1, n}],
   {Sum[w[[i]] x[i], {i, 1, n}] <= k}];
vars = Table[x[i], {i, 1, n}];
constraints = Join[constraints, {x[2] == 1, x[3] == 1, x[1] == 1}];
myLinearMinimize[objectivefun_, constraints_, vars_] :=
  {objectivefun /. #, #} &[LinearOptimization[objectivefun, constraints, vars]];
iteration2 = myLinearMinimize[objectivefun, constraints, vars]
Okay, great, let’s do it automatically.
I want to be aware of evaluation order, so let’s program this procedural-style.
Additional constraints are going to be stored on a stack.
The best solution found is an upper bound (which is important because good upper bounds allow us to prune the search tree more!). The best solution is going to be a global variable.
Best lower bound...?
The lower bound is basically a pruning criterion, so it doesn’t need to be global. But it tells us whether we can prune the whole branch and what variable to branch on.
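Putting those pieces together, here is one possible shape for the loop: a stack of branching constraints, a global-style incumbent, and pruning whenever the LP relaxation can’t beat the best solution so far. This is a sketch under my own assumptions about the signature; the stream’s actual branchAndBoundMinimize isn’t reproduced in these notes and may differ.

```wolfram
ClearAll[branchAndBoundMinimize];
branchAndBoundMinimize[values_, weights_, k_] :=
 Module[{n = Length[values], vars, obj, base, stack, best, bestVal, extra, sol, val, frac},
  vars = Table[x[i], {i, n}];
  obj = -values . vars;
  base = Join[Flatten@Table[{x[i] >= 0, x[i] <= 1}, {i, n}], {weights . vars <= k}];
  stack = {{}};                            (* stack of extra branching constraints *)
  bestVal = 0; best = Thread[vars -> 0];   (* the empty knapsack is always feasible *)
  While[stack =!= {},
   extra = First[stack]; stack = Rest[stack];
   sol = Quiet@LinearOptimization[obj, Join[base, extra], vars];
   If[FreeQ[sol, Indeterminate],           (* skip infeasible branches *)
    val = obj /. sol;
    If[val < bestVal,                      (* prune unless the relaxation beats the incumbent *)
     frac = SelectFirst[vars, ! IntegerQ[# /. sol] &, None];
     If[frac === None,
      bestVal = val; best = sol,           (* all-integer solution: new incumbent *)
      stack = Join[{Append[extra, frac == 0], Append[extra, frac == 1]}, stack]]]]];
  {bestVal, best}]
```

On the example data above ({38, 86, 80, 23} values, {7, 75, 23, 75} weights, capacity 100) the true optimum is items 2 and 3, weight 98 and value 166, i.e. objective -166.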
Let’s get a plot of how our algorithm scales with n:
This graph has two sources of randomness.
One: spikes in CPU usage on my personal computer.
Two: the input dataset. Sometimes our branch and bound algorithm makes really good choices for which branches to take. In that case the algorithm terminates quickly. Other times we’re not so lucky!
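A timing sweep like the one plotted could be produced along these lines. This is a sketch: it assumes a branchAndBoundMinimize[values, weights, k] signature, which isn’t shown in these notes, and reuses generateKnapsackProblem from above.

```wolfram
(* Hypothetical timing sweep over problem size n; assumes
   branchAndBoundMinimize[values, weights, k] -- adjust to the real signature. *)
timings = Table[
   Module[{m, k, v, w},
    {m, k, v, w} = generateKnapsackProblem[n];
    {n, First@AbsoluteTiming[branchAndBoundMinimize[v, w, k]]}],
   {n, 10, 60, 10}];
ListLinePlot[timings, AxesLabel -> {"n", "seconds"}]
```

Averaging several random instances per n would smooth out both sources of randomness mentioned above.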
Speeding up the algorithm using numerics:
Right now, one reason branchAndBoundMinimize is slow (even though it’s much faster than many alternatives) is that it uses exact fractional arithmetic. We can update it to use floating-point arithmetic, and we won’t run into any problems unless we’re extremely unlucky!
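The idea on the small example from earlier: with exact integer input, LinearOptimization works in exact rational arithmetic, while applying N to the inputs switches it to a machine-precision solver. A sketch:

```wolfram
{n, k, v, w} = {4, 100, {38, 86, 80, 23}, {7, 75, 23, 75}};
vars = Table[x[i], {i, 1, n}];
obj  = -Sum[v[[i]] x[i], {i, 1, n}];
cons = Join[
   Flatten@Table[{x[i] >= 0, x[i] <= 1}, {i, 1, n}],
   {Sum[w[[i]] x[i], {i, 1, n}] <= k}];
LinearOptimization[obj, cons, vars]        (* exact rationals, e.g. x[2] -> 14/15 *)
LinearOptimization[N@obj, N@cons, vars]    (* machine precision, e.g. x[2] -> 0.933333 *)
```

The caveat about being “extremely unlucky” is real: with floats, the IntegerQ-style check for fractional variables needs a tolerance (e.g. comparing against Round), since 1. and 0.9999999999 should count as integral.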
So... Cool! This is way faster than at the beginning of the stream... I kind of don’t know what else to work on.
Sidetracks
Reddit question
I can’t answer this question, but asking on https://community.wolfram.com/ might turn up good results!
“OK, for a reddit question, I want to find the shortest water-only path from Brisbane to Cardiff (just as an example). I considered using a 43200 x 21600 graph containing distances between 30 seconds-each pixel points and then only using water nodes.
I’m using Mathematica and I’d like to use Geography tools.”
Code that Claude spat out. It takes too long to run even with a lower resolution. But maybe it’s doing something intelligent!
Example Mathematica commands
FizzBuzz
Generate a list of numbers from 1 to N.
Print out “fizz” if n is a multiple of three.
Print out “buzz” if n is a multiple of five.
Print out “fizzbuzz” if it’s a multiple of both.
Else print out n.
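One way to write this in Mathematica (a sketch; the stream’s actual code isn’t preserved in these notes) that produces the listing below:

```wolfram
(* FizzBuzz for 1..15: test the multiple-of-both case first so it wins. *)
Do[Print@Which[
    Mod[i, 15] == 0, "fizzbuzz!",
    Mod[i, 3] == 0, "fizz",
    Mod[i, 5] == 0, "buzz",
    True, i],
  {i, 1, 15}]
```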
1
2
fizz
4
buzz
fizz
7
8
fizz
buzz
11
fizz
13
14
fizzbuzz!
We need to end on a visualization!!!
Woohoo! Pendulum gif!
http://mathandcode.com/share/08-03-2024/pendulum.gif