Inhomogeneous CAs
[Like a feed-forward neural net]
In[]:=
(* one step of an inhomogeneous CA: cell i is updated by applying rules[[i]] to its 3-cell neighborhood, with cyclic boundary conditions *)
ica[rules_,state_]:=MapIndexed[CellularAutomaton[rules[[First[#2]]]][#][[2]]&,RotateLeft[Partition[state,3,1,1],-1]]
In[]:=
(* run the inhomogeneous CA: ra is a list of rule arrays, one per step; returns the final state *)
cainf[ra_,init_]:=Fold[ica[#2,#1]&,init,ra]
In[]:=
(* same, but return the whole sequence of states *)
cainflist[ra_,init_]:=FoldList[ica[#2,#1]&,init,ra]
In[]:=
(* plot the evolution, coloring each cell by its value and by the rule (170 or 240) assigned to it at that step *)
icaplotshift[ra_,init_,opts___]:=ArrayPlot[MapThread[List,{Most[cainflist[ra,init]],ra},2],opts,ColorRules->{{1,170}->Darker[Pink,.5],{0,170}->Lighter[Pink,.7],{1,240}->Darker[Yellow,.5],{0,240}->Lighter[Yellow,.7]}]
In[]:=
(* as above, but the rule assignment is given as an index array ra (entries 1 or 2) into the list rules *)
icaplotgen[{rules_,ra_},init_,opts___]:=ArrayPlot[MapThread[List,{Most[cainflist[Map[rules[[#]]&,ra,{2}],init]],ra},2],opts,ColorRules->{{1,1}->Darker[Pink,.5],{0,1}->Lighter[Pink,.7],{1,2}->Darker[Yellow,.5],{0,2}->Lighter[Yellow,.7]}]
In[]:=
(* as icaplotgen, but also appends the final state as an extra row (shown in the rule-1 colors) *)
icaplotgenx[{rules_,ra_},init_,opts___]:=With[{arr=cainflist[Map[rules[[#]]&,ra,{2}],init]},ArrayPlot[Append[MapThread[List,{Most[arr],ra},2],{#,1}&/@Last[arr]],opts,ColorRules->{{1,1}->Darker[Pink,.5],{0,1}->Lighter[Pink,.7],{1,2}->Darker[Yellow,.5],{0,2}->Lighter[Yellow,.7]}]]
In[]:=
(* final state only, for an index array ra *)
icagenfinal[{rules_,ra_},init_]:=cainf[Map[rules[[#]]&,ra,{2}],init]
In[]:=
(* flip one randomly chosen entry of the index array (1 <-> 2) *)
randupdate[array_]:=MapAt[{2,1}[[#]]&,array,RandomInteger[{1,#}]&/@Dimensions[array]]
In[]:=
(* flip n randomly chosen entries (positions can coincide) *)
randupdate[array_,n_]:=MapAt[{2,1}[[#]]&,array,Table[RandomInteger[{1,#}]&/@Dimensions[array],n]]
In[]:=
(* adaptive evolution: at each of 10 steps, try 100 single-flip mutations of the 40x20 rule-index array and keep the one whose final state has a count of 1s closest to 10 *)
NestList[First@TakeSmallestBy[Table[randupdate[#],100],Abs[Total[icagenfinal[{{170,240},#},CenterArray[Table[1,10],20]]]-10]&,1]&,RandomChoice[{1,2},{40,20}],10];
In[]:=
icaplotgen[{{170,240},#},CenterArray[Table[1,10],20]]&/@%
Out[]=
[ArrayPlot graphics for each step of the adaptive evolution]
In[]:=
(* same search, but flipping 40 entries per mutation *)
NestList[First@TakeSmallestBy[Table[randupdate[#,40],100],Abs[Total[icagenfinal[{{170,240},#},CenterArray[Table[1,10],20]]]-10]&,1]&,RandomChoice[{1,2},{40,20}],10];
In[]:=
icaplotgen[{{170,240},#},CenterArray[Table[1,10],20]]&/@%
Out[]=
[ArrayPlot graphics for each step of the adaptive evolution]
In[]:=
NestList[First@TakeSmallestBy[Table[randupdate[#,40],100],Abs[Total[icagenfinal[{{170,240},#},CenterArray[Table[1,10],20]]]-10]&,1]&,RandomChoice[{1,2},{40,20}],10];
In[]:=
icaplotgen[{{170,240},#},CenterArray[Table[1,10],20]]&/@%
Out[]=
[ArrayPlot graphics for each step of the adaptive evolution]
Leave in the unmodified result
In[]:=
(* now the unmutated array is kept among the candidates, so the loss can never increase from one step to the next *)
NestList[First@TakeSmallestBy[Append[Table[randupdate[#,40],100],#],Abs[Total[icagenfinal[{{170,240},#},CenterArray[Table[1,10],20]]]-10]&,1]&,RandomChoice[{1,2},{40,20}],10];
In[]:=
icaplotgen[{{170,240},#},CenterArray[Table[1,10],20]]&/@%
Out[]=
[ArrayPlot graphics for each step of the adaptive evolution]
In[]:=
(* the loss is now the total absolute difference between the final state and the target pattern CenterArray[Table[1,10],20], not just the count of 1s *)
NestList[First@TakeSmallestBy[Append[Table[randupdate[#,40],100],#],Total[Abs[icagenfinal[{{170,240},#},CenterArray[Table[1,10],20]]-CenterArray[Table[1,10],20]]]&,1]&,RandomChoice[{1,2},{40,20}],10];
In[]:=
icaplotgen[{{170,240},#},CenterArray[Table[1,10],20]]&/@%
Out[]=
[ArrayPlot graphics for each step of the adaptive evolution]
[[[ the last line is missing ]]]
Older stuff
Batch size: compute the loss by looking at multiple configurations
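A sketch of what that might look like with the definitions above: the same |count-of-1s - 10| loss, summed over several random initial conditions that each contain ten 1s. The name batchloss and the choice of five configurations are made up here.
In[]:=
(* hypothetical batched loss: the single-configuration objective used above, summed over a list of initial conditions *)
batchloss[{rules_,ra_},inits_]:=Total[Abs[Total[icagenfinal[{rules,ra},#]]-10]&/@inits]
In[]:=
batchloss[{{170,240},RandomChoice[{1,2},{40,20}]},Table[RandomSample[Join[Table[1,10],Table[0,10]]],5]]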
MNIST:
2D CA: e.g. Evolve the initial bit pattern to concentrate values in one region of the space
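A hypothetical sketch of such a loss in 2D (the trainable per-cell machinery above is 1D; here the Game of Life rule just stands in for some fixed rule assignment): count the 1s that end up outside the left half of the array after a few steps.
In[]:=
(* hypothetical 2D "concentration" loss: 1s remaining outside the left half of a 20-column array after t steps, cyclic boundaries *)
gameoflife={224,{2,{{2,2,2},{2,1,2},{2,2,2}}},{1,1}};
concentrationloss[init_,t_]:=Total[Last[CellularAutomaton[gameoflife,init,t]][[All,11;;]],2]
In[]:=
concentrationloss[RandomInteger[1,{20,20}],5]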
In practice: for non-spatial input/output, use a random Boolean network rather than a CA ... e.g. with two different functions that can be run at each node...
But we can assume that the underlying network can be fixed.
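A hypothetical sketch of that setup: the wiring (which 3 nodes each node reads) is fixed once, and the trainable degrees of freedom are which of two Boolean functions each node applies. All names here (connections, nodefuncs, rbnstep) are made up.
In[]:=
(* hypothetical random Boolean network: fixed random wiring, with a per-node choice between two functions on 0/1 values (parity, and or-of-and) *)
n=20;
connections=Table[RandomSample[Range[n],3],n];
nodefuncs={Mod[#1+#2+#3,2]&,Max[#1,Min[#2,#3]]&};
rbnstep[state_,choice_]:=MapThread[nodefuncs[[#2]]@@state[[#1]]&,{connections,choice}]
In[]:=
choice=RandomChoice[{1,2},n];
NestList[rbnstep[#,choice]&,RandomInteger[1,n],10]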
[Principle of Generic Trainability]
Any nontrivial system with enough degrees of freedom can be trained to approximate any function
Will only work for certain input-output pairs ... which are probably a small subset of all possibilities
Given a certain number of degrees of freedom, what functions can we learn?
What matters?
How hard you bash it (e.g. how many entries get flipped per mutation step); the overall geometry
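A hypothetical way to probe the "how hard you bash it" parameter with the machinery above: rerun the adaptive evolution (the variant that keeps the unmodified array) for different numbers of flipped entries per mutation and compare the final losses. trainedarray and trainloss are made-up names.
In[]:=
(* hypothetical scan over mutation strength: the same search as above, parameterized by the number of entries flipped per mutation *)
trainedarray[nflip_]:=Nest[First@TakeSmallestBy[Append[Table[randupdate[#,nflip],100],#],Abs[Total[icagenfinal[{{170,240},#},CenterArray[Table[1,10],20]]]-10]&,1]&,RandomChoice[{1,2},{40,20}],10]
trainloss[nflip_]:=Abs[Total[icagenfinal[{{170,240},trainedarray[nflip]},CenterArray[Table[1,10],20]]]-10]
In[]:=
{#,trainloss[#]}&/@{1,5,20,40}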
What to compute
Function with binary number as input, and binary number as output
[What generalization can it do?]
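A hypothetical way to frame that with the definitions above: pad the input number to the width of the rule array, run the inhomogeneous CA, and read the final state back as a binary number; a loss then compares this against a target function over a sample of inputs. cafunction and funcloss are made-up names.
In[]:=
(* hypothetical wrapper: the trained CA as a function from a binary number to a binary number *)
cafunction[{rules_,ra_},x_]:=FromDigits[icagenfinal[{rules,ra},IntegerDigits[x,2,Last[Dimensions[ra]]]],2]
In[]:=
(* loss against a target function f over a list of sample inputs xs *)
funcloss[{rules_,ra_},f_,xs_]:=Total[Abs[cafunction[{rules,ra},#]-f[#]]&/@xs]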
Is it a computational irreducibility story that you reach any state of training?
Trainability requires lots of dofs ... but then they can often be pruned
Need something high-dimensional so you don’t get stuck.
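A hypothetical check of the pruning point above: count how many individual entries of a trained rule-index array could be frozen to 1 without increasing the loss. The names loss and prunablecount are made up; icagenfinal is the definition from earlier.
In[]:=
(* hypothetical pruning probe: entries currently set to 2 that could be switched to 1 without hurting the loss *)
loss[ra_]:=Abs[Total[icagenfinal[{{170,240},ra},CenterArray[Table[1,10],20]]]-10]
prunablecount[ra_]:=Count[Position[ra,2],p_/;loss[ReplacePart[ra,p->1]]<=loss[ra]]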