WOLFRAM NOTEBOOK

{{1}->{1,0,1},{1,1}->{1,0,0,1,0,0,1},{1,0,1}->{1,1,1}}

Learning x -> f[x] for various xi

How is the computation done?

Feed xi as the input to a CA; read the state of the CA after a specified number of steps n.
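A minimal Python sketch of this setup (my own illustration, not code from the notebook): an elementary (2-color, radius-1) CA with cyclic boundaries stands in for the CellularAutomaton call; f[x] is the row read off after n steps.

```python
# Sketch: feed x in as the initial row of an elementary CA,
# step n times, read off the final row as f[x].

def ca_step(row, rule):
    """One step of an elementary (2-color, radius-1) CA with cyclic boundary."""
    n = len(row)
    return [(rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
            for i in range(n)]

def ca_f(x, rule, steps):
    """f[x]: the CA state after `steps` applications of `rule`."""
    row = list(x)
    for _ in range(steps):
        row = ca_step(row, rule)
    return row

print(ca_f([0, 0, 1, 0, 0], 90, 2))  # rule 90 = XOR of the two neighbors
```

The neighborhood-to-bit encoding (index 4l + 2c + r into the rule number's binary digits) matches the standard elementary-CA rule numbering.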
In[]:=
Table[ArrayPlot[CellularAutomaton[132,{Table[1,n],0},10],Mesh->True],{n,10}]
Out[]=
[ArrayPlots: 10 steps of rule 132 from an initial block of n 1s on a zero background, for n = 1..10]
f[{1,1,1}]->{0,1,0}
[ Possibly use fixed width ]

Cross-entropy loss

Loss: across all the xi, how many produced the right answer

Hamming loss
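A quick sketch of the two candidate losses over bit-vector outputs (my own illustration, not code from the notebook). Hamming loss just counts wrong bits; cross-entropy needs a probability per bit, so here the "network" output is assumed to be per-bit probabilities.

```python
import math

def hamming_loss(pred_bits, target_bits):
    """Number of positions where the predicted bits disagree with the target."""
    return sum(p != t for p, t in zip(pred_bits, target_bits))

def cross_entropy(pred_probs, target_bits):
    """Total negative log-likelihood of the target bits under per-bit probabilities."""
    return -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for p, t in zip(pred_probs, target_bits))

print(hamming_loss([1, 0, 1, 1], [1, 1, 1, 0]))  # 2 wrong bits
```

For a CA, whose outputs are hard 0/1 bits rather than probabilities, the Hamming loss is the natural choice; cross-entropy only applies once something produces graded outputs.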

In[]:=
Table[With[{u=RandomInteger[1,15]},u->Table[Boole[Total[u]>7],15]],10]
Out[]=
{{1,1,0,0,0,0,0,0,0,0,1,1,1,1,0}->{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},{1,1,0,0,0,1,0,1,0,0,0,1,0,0,1}->{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},{0,0,1,0,0,0,0,1,0,0,0,1,0,1,0}->{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},{0,0,0,0,0,0,0,1,1,1,0,0,0,1,1}->{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},{1,1,0,0,1,1,1,0,0,0,1,1,1,0,1}->{1,1,1,1,1,1,1,1,1,1,1,1,1,1,1},{0,0,0,1,1,1,0,0,0,1,1,1,0,1,0}->{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},{1,1,0,0,0,1,0,0,1,1,0,0,1,1,0}->{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},{0,1,1,1,1,1,1,0,1,0,1,1,0,0,0}->{1,1,1,1,1,1,1,1,1,1,1,1,1,1,1},{1,0,1,0,0,1,1,0,1,0,0,0,0,1,0}->{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},{1,0,0,1,1,0,1,1,1,1,0,1,0,0,0}->{1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}}

Two approaches:

Pure recurrent net : a single CA rule that’s applied everywhere (CA case)

Pure feed forward net : pick a different CA rule (probably from a small set) to apply at each “spacetime point” (ICA case)
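The two parameterizations above can be sketched side by side (Python illustration with assumed names, not code from the notebook): the "recurrent" case reuses one elementary rule at every spacetime point, while the ICA case looks up a possibly different rule at each point (t, i).

```python
def step_with_rules(row, rule_at):
    """One CA step where cell i may use its own elementary rule number."""
    n = len(row)
    return [(rule_at(i) >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
            for i in range(n)]

def run_recurrent(x, rule, steps):
    """Pure recurrent net: the same rule applied everywhere."""
    for _ in range(steps):
        x = step_with_rules(x, lambda i: rule)
    return x

def run_ica(x, rule_table, steps):
    """Pure feed-forward net: rule_table[t][i] gives the rule at each spacetime point."""
    for t in range(steps):
        x = step_with_rules(x, lambda i, t=t: rule_table[t][i])
    return x
```

With a rule table that repeats one rule everywhere, the ICA reduces to the recurrent case; the feed-forward case simply has many more trainable parameters.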

[ Additional piece: CA on a more complicated graph ; e.g. a complete graph ]

[ Comparison : try to have an actual neural net learn the bit-to-bit transformations ]

What about reversible CA rules?

Say we have two reversible rules

What about a Boolean-ified ordinary neural net?

Effectively the rules will then be CA rules
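One way to see this (my own illustration, not from the notebook): threshold a 3-input perceptron on each neighborhood, and enumerating all 8 neighborhoods recovers the elementary CA rule number that the Boolean-ified unit implements.

```python
def perceptron_rule_number(w, b):
    """Elementary rule number of cell -> step(w . (l, c, r) + b)."""
    rule = 0
    for k in range(8):
        l, c, r = (k >> 2) & 1, (k >> 1) & 1, k & 1
        out = 1 if w[0] * l + w[1] * c + w[2] * r + b > 0 else 0
        rule |= out << k
    return rule

# Equal weights with threshold 1.5 give the 3-cell majority function,
# which is itself an elementary CA rule:
print(perceptron_rule_number([1, 1, 1], -1.5))
```

So any choice of binarized weights and bias lands on one of the 256 elementary rules; training a Boolean-ified net is a search over that same rule space.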

Majority Function

In[]:=
training=Table[With[{u=RandomInteger[1,15]},u->Table[Boole[Total[u]>7],15]],10]
Out[]=
{{1,0,1,0,0,0,0,0,0,1,0,1,0,1,1}->{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},{0,0,1,0,1,0,0,0,0,0,1,0,1,1,1}->{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},{0,1,0,0,1,1,0,0,1,1,1,1,0,0,1}->{1,1,1,1,1,1,1,1,1,1,1,1,1,1,1},{0,1,1,1,1,1,1,0,1,1,0,1,1,0,0}->{1,1,1,1,1,1,1,1,1,1,1,1,1,1,1},{1,1,1,0,1,1,0,0,0,0,1,0,1,0,1}->{1,1,1,1,1,1,1,1,1,1,1,1,1,1,1},{0,1,1,1,0,0,0,0,0,1,0,0,1,1,0}->{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},{0,1,1,1,0,1,1,1,1,1,1,0,1,1,0}->{1,1,1,1,1,1,1,1,1,1,1,1,1,1,1},{1,1,1,0,0,0,0,0,1,1,0,0,0,0,1}->{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},{1,0,0,1,0,0,0,0,1,0,0,0,1,1,1}->{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},{1,0,0,1,0,0,0,1,1,1,1,0,0,1,1}->{1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}}
In[]:=
Module[{max = 10000, ru, result, evo, data, tot},
 SeedRandom[426778];
 evo = NestList[
   (ru = RandomRuleMutation[First[#]]; (* RandomRuleMutation is defined elsewhere in the notebook *)
    If[(tot = Total[HammingDistance[First[CellularAutomaton[ru, First[#], {{20}}]], Last[#]] & /@ training]) <= Last[#], {ru, tot}, #]) &,
   {{0, 2, 3}, Infinity}, max];
 evo = Rest[First /@ SplitBy[evo, Last]]]
Out[]=
{{{2361183241434822606848,2,3},75},{{32566627426850173833325351862234333184,2,3},74},{{32608165801718301338626144004221255680,2,3},73},{{32608165801698958525512309946015891456,2,3},70},{{32608165801698958525512309946015895552,2,3},68},{{37925158914477036623791223769655242752,2,3},65},{{209395569103080739833636307903631683648,2,3},64},{{209395569098225693742027928890750627904,2,3},61},{{188127921165671876664075802776566853696,2,3},58},{{188127941448081480315755233923072880704,2,3},56},{{188294054699648037549261252244958965824,2,3},55},{{188294054382735387492203901870783426624,2,3},52},{{181564837654054908081737923195251814464,2,3},50},{{11340577443849194665857857452423606336,2,3},46},{{181647914403752855255430468917243768896,2,3},40},{{181647914408743301041105501159644557376,2,3},36},{{181647914418646821355388543358837551168,2,3},35},{{181647911883345620898929740365431140416,2,3},33},{{186964823866485284391121429496294895680,2,3},29},{{16823640406016052659434125780419178560,2,3},28}}
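A Python sketch of the same mutation hill-climb (assumptions: the mutation flips one entry of the rule table, as a stand-in for the notebook's RandomRuleMutation; fitness is total Hamming distance to the targets after a fixed number of steps; `training` is a list of (input, target) bit-vector pairs).

```python
import random

def ca_run(rule_table, row, steps):
    """rule_table maps each (l, c, r) neighborhood to an output bit."""
    n = len(row)
    for _ in range(steps):
        row = [rule_table[(row[(i - 1) % n], row[i], row[(i + 1) % n])]
               for i in range(n)]
    return row

def mutate(rule_table):
    """Flip one randomly chosen entry of the rule table."""
    flipped = dict(rule_table)
    k = random.choice(list(flipped))
    flipped[k] = 1 - flipped[k]
    return flipped

def search(training, steps=20, max_iters=10000, seed=0):
    """Hill-climb: keep a mutated rule whenever its loss does not increase."""
    random.seed(seed)
    best = {(a, b, c): 0 for a in (0, 1) for b in (0, 1) for c in (0, 1)}
    def loss(rt):
        return sum(sum(p != t for p, t in zip(ca_run(rt, x, steps), y))
                   for x, y in training)
    best_loss = loss(best)
    for _ in range(max_iters):
        cand = mutate(best)
        cand_loss = loss(cand)
        if cand_loss <= best_loss:  # accept ties, as in the notebook's If[... <= ...]
            best, best_loss = cand, cand_loss
    return best, best_loss
```

Accepting ties lets the search drift across neutral mutations, which is what produces the long plateaus visible in the loss sequence above.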
In[]:=
ArrayPlot[Append[CellularAutomaton[{16823640406016052659434125780419178560,2,3},First[#],20],Last[#]/.{1->Red,0->Lighter[Pink,.7]}]]&/@training
Out[]=
[ArrayPlots: 20-step evolutions of the final learned rule on each training input, with the target row appended (1 -> red, 0 -> light pink)]
In[]:=
ArrayPlot[Append[CellularAutomaton[{32608165801718301338626144004221255680,2,3},First[#],20],Last[#]/.{1->Red,0->Lighter[Pink,.7]}]]&/@training
Out[]=
[ArrayPlots: 20-step evolutions of an earlier rule from the search on each training input, with the target row appended (1 -> red, 0 -> light pink)]
Classic case of overfitting.....
Try 2-step mutation....
Probably this is too high a learning rate.....
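A sketch of the "2-step mutation" idea as I read the note above (my own illustration, not from the notebook): flip two rule-table entries per mutation instead of one, i.e. take a larger move in rule space, loosely analogous to a larger learning rate.

```python
import random

def mutate_k(rule_table, k=2):
    """Flip k distinct entries of a neighborhood -> bit rule table."""
    flipped = dict(rule_table)
    for key in random.sample(list(flipped), k):
        flipped[key] = 1 - flipped[key]
    return flipped
```

With k = 1 this reduces to the single-flip mutation; larger k explores faster but overshoots near a good rule, which may be what the learning-rate remark is pointing at.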