MAM2046W - Second year nonlinear dynamics
Section 2.6: Weakly nonlinear oscillators
Not the Van der Pol oscillator again, I hear you cry!! Yep, I’m afraid so... but this time in a different limit: not the limit of very strong nonlinearity, but the opposite one, where the system is very close to linear:

$$\ddot{x} + \epsilon\left(x^2 - 1\right)\dot{x} + x = 0,$$

where now ϵ is going to be treated as a small value.
In fact we are going to be studying more general systems than this. We are going to be looking at systems of the form

$$\ddot{x} + x + \epsilon\, h(x, \dot{x}) = 0,$$

where $h(x,\dot{x})$ is some smooth function, and $0 \le \epsilon \ll 1$.
We’re going to ask, in the same way that we did in the opposite limit where this term was very large, what can we learn about such a system with this approximation?
In fact the answer seems kind of obvious... I hope. For the Van der Pol oscillator, if we treat the term proportional to ϵ as being small, and we start at some small value of $x$, then we are going to be approximately moving according to

$$\ddot{x} = -x,$$

which gives circular motion, but we will have a term of the form:

$$\epsilon\left(1 - x^2\right)\dot{x} \approx \epsilon\,\dot{x} \quad \text{(for small } |x|\text{)},$$

which says that we will have an additional contribution to the acceleration in the direction of motion, which drives us slightly out from circular motion. As we get closer and closer to $x = 1$, this small additional term becomes even smaller, and so we get a better and better approximation to

$$\ddot{x} = -x$$

as we approach $x = 1$. For μ=0.1, we have:
μ = 0.1;
eqns = {x'[t] == y[t], y'[t] == -x[t] - μ (x[t]^2 - 1) y[t]};
startpoints = {{0.1, 0.1}};
({x[t], y[t]} /. NDSolve[{eqns[[1]], eqns[[2]], x[0] == #[[1]], y[0] == #[[2]]}, {x, y}, {t, 0, 100}][[1]]) & /@ startpoints;
Show[VectorPlot[{eqns[[1, 2]], eqns[[2, 2]]}, {x[t], -4, 4}, {y[t], -4, 4}],
 ParametricPlot[%, {t, 0, 100}, PlotStyle -> {{Red, Thick}, {Red, Thick}, {Red, Thick}, {Red, Thick}, {Blue, Thick}, {Blue, Thick}, {Blue, Thick}, {Blue, Thick}}],
 Graphics[Arrow[{# /. t -> 5, # /. t -> 5.01}]] & /@ %, ImageSize -> 700]

Out[]= (phase portrait: the vector field with a nearly circular trajectory slowly spiralling outwards)
We’ve gone from the complicated motion of the Van der Pol oscillator to a pretty simple version, which is more or less circular motion with a slight movement out for $x < 1$ and a slight movement in for $x > 1$.
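If you’d like to check this picture numerically outside of Mathematica, here is a small sketch in Python (not part of the original notes; the tolerances and time windows are just illustrative). It integrates the weakly nonlinear Van der Pol oscillator and confirms: nearly circular motion over one early period, and a slow drift onto an oscillation of amplitude roughly 2 at late times.

```python
# Sketch: weakly nonlinear Van der Pol oscillator  x'' + mu*(x^2 - 1)*x' + x = 0
# for small mu, starting from a small initial condition.
import numpy as np
from scipy.integrate import solve_ivp

mu = 0.1  # small nonlinearity, as in the plot above

def vdp(t, s):
    x, y = s
    return [y, -x - mu * (x**2 - 1) * y]

sol = solve_ivp(vdp, (0, 100), [0.1, 0.1], dense_output=True,
                rtol=1e-9, atol=1e-12)

# The radius r = sqrt(x^2 + y^2) changes only slightly over one early
# "period" (~2 pi): the motion is nearly circular.
t_early = np.linspace(0, 2 * np.pi, 200)
r = np.hypot(*sol.sol(t_early))
print(r.max() - r.min())  # small spread

# Over long times, the orbit settles near radius 2.
t_late = np.linspace(90, 100, 500)
r_late = np.hypot(*sol.sol(t_late))
print(r_late.mean())
```

The slow outward drift per period is of order μ, which is exactly the separation of timescales this section is about.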
The same thing happens with the so-called Duffing equation, given by:

$$\ddot{x} + \delta\dot{x} + \alpha x + \beta x^3 = \gamma\cos(\omega t).$$
This is of course a three dimensional system (because of the explicit time dependence of the forcing), but taking $\alpha = 1,\ \delta = \gamma = 0$, we have:

$$\ddot{x} + x + \beta x^3 = 0,$$
which for $\beta = 0.1$ on the left and $\beta = 2$ on the right looks like:
In[]:= β = 0.1;
eqns = {x'[t] == y[t], y'[t] == -x[t] - β x[t]^3};
startpoints = {{1.5, 1.5}};
({x[t], y[t]} /. NDSolve[{eqns[[1]], eqns[[2]], x[0] == #[[1]], y[0] == #[[2]]}, {x, y}, {t, 0, 1000}][[1]]) & /@ startpoints;
pl1 = Show[VectorPlot[{eqns[[1, 2]], eqns[[2, 2]]}, {x[t], -4, 4}, {y[t], -4, 4}],
   ParametricPlot[%, {t, 0, 100}, PlotStyle -> {{Red, Thick}, {Red, Thick}, {Red, Thick}, {Red, Thick}, {Blue, Thick}, {Blue, Thick}, {Blue, Thick}, {Blue, Thick}}],
   Graphics[Arrow[{# /. t -> 5, # /. t -> 5.01}]] & /@ %, ImageSize -> 700];
β = 2;
eqns = {x'[t] == y[t], y'[t] == -x[t] - β x[t]^3};
({x[t], y[t]} /. NDSolve[{eqns[[1]], eqns[[2]], x[0] == #[[1]], y[0] == #[[2]]}, {x, y}, {t, 0, 1000}][[1]]) & /@ startpoints;
pl2 = Show[VectorPlot[{eqns[[1, 2]], eqns[[2, 2]]}, {x[t], -4, 4}, {y[t], -4, 4}],
   ParametricPlot[%, {t, 0, 100}, PlotStyle -> {{Red, Thick}, {Red, Thick}, {Red, Thick}, {Red, Thick}, {Blue, Thick}, {Blue, Thick}, {Blue, Thick}, {Blue, Thick}}],
   Graphics[Arrow[{# /. t -> 5, # /. t -> 5.01}]] & /@ %, ImageSize -> 700];
GraphicsGrid[{{pl1, pl2}}]

Out[]= (two phase portraits: closed orbits around the origin, for β = 0.1 on the left and β = 2 on the right)
The Duffing equation is itself very interesting, and we may discuss it more later.
Exercise: By taking the Duffing equation, multiplying by $\dot{x}$, and integrating, find a conserved quantity. Given that there is a fixed point at $(x,\dot{x}) = (0,0)$, what does this tell you? Plot the energy surface.
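If you want to check your answer to the exercise numerically, here is a Python sketch. Warning: it is a spoiler, since the quantity below is the one the exercise leads to (assuming $\alpha = 1,\ \delta = \gamma = 0$ as above).

```python
# Spoiler check for the exercise: for x'' + x + beta*x^3 = 0 the quantity
#   E = x'^2/2 + x^2/2 + beta*x^4/4
# should be conserved along trajectories (up to integration error).
import numpy as np
from scipy.integrate import solve_ivp

beta = 2.0

def duffing(t, s):
    x, y = s
    return [y, -x - beta * x**3]

sol = solve_ivp(duffing, (0, 50), [1.5, 1.5], dense_output=True,
                rtol=1e-10, atol=1e-12)
t = np.linspace(0, 50, 1000)
x, y = sol.sol(t)
E = y**2 / 2 + x**2 / 2 + beta * x**4 / 4
print(E.max() - E.min())  # tiny: E is conserved
```

Conservation of E is also why the phase portraits above show closed orbits.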
Anyway, we’re going off on tangents here, let’s get back to business...
As we do, make sure that you have a pen and a pad of paper with you. Write down all of what follows and make sure that you can understand every step of the mathematics!
So, given an equation of the form:

$$\ddot{x} + x + \epsilon\, h(x, \dot{x}) = 0,$$
You might think that you could perform a perturbative analysis. This is often the case when you have a small parameter in the system. You set up the solution as a series expansion in the small parameter, plug in the series to the equation and solve order by order. Seems not implausible, so let’s give it a go.
Let’s look at the particular example of a weakly damped harmonic oscillator:

$$\ddot{x} + 2\epsilon\dot{x} + x = 0.$$
A series solution would look like:

$$x(t) = x_0(t) + \epsilon\, x_1(t) + \epsilon^2 x_2(t) + \dots$$

Sometimes you will see the left hand side written as $x(t,\epsilon)$, but we will miss out the ϵ here. Let’s see if we can find these different $x_k(t)$ for a given initial condition:

$$x(0) = 0, \qquad \dot{x}(0) = 1.$$
This also means that we can set $x_0(0) = 0$, $x_0'(0) = 1$, and all other $x_k(0) = x_k'(0) = 0$. Why? Because the initial condition must hold for any small ϵ, so it can’t be ϵ dependent: the zeroth order term must carry the full initial condition, and every higher order term must vanish at $t = 0$. Note that we’re using ′ rather than dots here for the time derivative. It’s just a notation convention, but you can use either.
Actually, the equation can be solved exactly to get:

$$x(t) = \frac{1}{\sqrt{1-\epsilon^2}}\, e^{-\epsilon t}\, \sin\!\left(\sqrt{1-\epsilon^2}\; t\right).$$

But here we want to presume that we don’t know the analytical solution and find an approximation. Plugging our perturbative solution into the equation and collecting powers of ϵ, we get:

$$O(1):\quad \ddot{x}_0 + x_0 = 0, \qquad O(\epsilon):\quad \ddot{x}_1 + x_1 = -2\dot{x}_0.$$

The first equation (with its initial conditions) has solution

$$x_0(t) = \sin t,$$

which turns the second equation into $\ddot{x}_1 + x_1 = -2\cos t$... hmmm... but this is a resonant driving force. We can solve this (with $x_1(0) = x_1'(0) = 0$) and it gives us:

$$x_1(t) = -t\sin t,$$

so just going up to this second term, our series expansion now looks like:

$$x(t) \approx \sin t - \epsilon\, t \sin t.$$
Here the red curve is the true solution, and the blue is the series solution, which is a reasonable approximation up to around t=4, and then after that it diverges a lot.
In fact we see that while the fast oscillations seem to be in ‘rhythm’ between the two functions, over the longer timescale the true solution decays exponentially, whereas the series solution diverges.
We say that the short timescale behaviours (the oscillations) agree, but the longer timescale behaviours disagree. The reason for this is quite simple. The true solution,

$$x(t) = \frac{1}{\sqrt{1-\epsilon^2}}\, e^{-\epsilon t}\, \sin\!\left(\sqrt{1-\epsilon^2}\; t\right),$$

oscillates on a timescale of O(1) but decays on a much longer timescale of O(1/ϵ). If we perform a series expansion of the decaying part, it is given by:

$$e^{-\epsilon t} = 1 - \epsilon t + \frac{(\epsilon t)^2}{2} - \dots,$$

and truncating this at first order is only a good approximation while $\epsilon t \ll 1$: once $t \sim O(1/\epsilon)$, the ‘small’ correction $-\epsilon\, t\sin t$ is not small at all, which is exactly the incorrect blowup we saw.
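Here is a quick numerical illustration of the two timescales (a Python sketch, assuming the damped oscillator $\ddot{x} + 2\epsilon\dot{x} + x = 0$ with $x(0)=0,\ \dot{x}(0)=1$ as above; the cut-off times are illustrative): the truncated series tracks the true solution at early times, but is ruined by the secular $\epsilon\, t\sin t$ term once $t$ approaches $1/\epsilon$.

```python
# Compare the exact solution of x'' + 2*eps*x' + x = 0, x(0)=0, x'(0)=1,
# with the naive truncated series  sin(t) - eps*t*sin(t).
import numpy as np

eps = 0.1
t = np.linspace(0, 50, 2000)
om = np.sqrt(1 - eps**2)
exact = np.exp(-eps * t) * np.sin(om * t) / om
series = np.sin(t) - eps * t * np.sin(t)

early = np.abs(exact - series)[t < 2].max()   # good agreement early on
late = np.abs(exact - series)[t > 40].max()   # secular term has blown up
print(early, late)
```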
So can we still use a perturbative method? It turns out that we can, but we have to do a bit of a change of variables. We introduce two separate time variables: a fast time $\tau = t$ and a slow time $T = \epsilon t$, and treat the solution as a function of both, $x = x(\tau, T)$.
Now the total derivative in t gives us:

$$\frac{dx}{dt} = \frac{\partial x}{\partial \tau} + \epsilon\frac{\partial x}{\partial T}.$$
Plugging the series expansion $x = x_0(\tau,T) + \epsilon\, x_1(\tau,T) + \dots$ into the equation using the new variables gives:

$$\frac{\partial^2 x_0}{\partial \tau^2} + x_0 + \epsilon\left(\frac{\partial^2 x_1}{\partial \tau^2} + x_1 + 2\frac{\partial^2 x_0}{\partial \tau\,\partial T} + 2\frac{\partial x_0}{\partial \tau}\right) + O(\epsilon^2) = 0.$$
Taking the zeroth order terms we have:

$$\frac{\partial^2 x_0}{\partial \tau^2} + x_0 = 0,$$

which has solutions

$$x_0(\tau, T) = A(T)\sin\tau + B(T)\cos\tau,$$

where the ‘constants’ of integration A and B can still depend on the slow time T.
At first order we have

$$\frac{\partial^2 x_1}{\partial \tau^2} + x_1 = -2\frac{\partial^2 x_0}{\partial \tau\,\partial T} - 2\frac{\partial x_0}{\partial \tau},$$

and now we can plug our solution above in to get:

$$\frac{\partial^2 x_1}{\partial \tau^2} + x_1 = -2\left(A'\cos\tau - B'\sin\tau\right) - 2\left(A\cos\tau - B\sin\tau\right),$$

which we can rewrite as:

$$\frac{\partial^2 x_1}{\partial \tau^2} + x_1 = -2\left(A' + A\right)\cos\tau + 2\left(B' + B\right)\sin\tau.$$
How could we ensure that we don’t have a driving force here? Well, we can do this by setting:

$$A' + A = 0, \qquad B' + B = 0,$$

which means that:

$$A(T) = A(0)\, e^{-T}, \qquad B(T) = B(0)\, e^{-T}.$$
Why did we say that there can’t be a driving force? This is a bit subtle. A resonant driving force in τ would produce secular terms proportional to τ, so τ would set not just the short timescale but also a long growth timescale, which we know can’t be the case. The long-term dynamics is driven by T, not τ, so we can’t have a driving force which makes the system blow up on the fast timescale.
We have to be a little careful to find our constants to match the initial conditions. Our initial conditions are:

$$x(0) = 0, \qquad \dot{x}(0) = 1.$$

The first of these says that:

$$x_0(0,0) + \epsilon\, x_1(0,0) + \dots = 0.$$

We want to solve this for general small ϵ, which means that we must have

$$x_0(0,0) = B(0) = 0.$$

For the second condition, we have to be a bit careful and write

$$\dot{x} = \frac{\partial x}{\partial \tau} + \epsilon\frac{\partial x}{\partial T},$$

which has zeroth order term:

$$\frac{\partial x_0}{\partial \tau}(0,0) = A(0) = 1.$$

The reason we had to be careful here was because now there are two time variables, so we have to be sure what the derivative condition is taken with respect to.
Applying these two conditions gives:

$$x_0 = e^{-T}\sin\tau = e^{-\epsilon t}\sin t,$$

which captures both the fast oscillation and the slow decay.
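As a check (a Python sketch, using the same assumed equation and initial conditions as above), the two-timing result $e^{-\epsilon t}\sin t$ stays uniformly close to a direct numerical solution for all times shown, unlike the naive series:

```python
# Two-timing approximation vs. numerical solution of x'' + 2*eps*x' + x = 0.
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.1

def damped(t, s):
    x, y = s
    return [y, -2 * eps * y - x]

sol = solve_ivp(damped, (0, 50), [0.0, 1.0], dense_output=True,
                rtol=1e-10, atol=1e-12)
t = np.linspace(0, 50, 2000)
x_num = sol.sol(t)[0]
x_tt = np.exp(-eps * t) * np.sin(t)   # two-timing approximation

err = np.abs(x_num - x_tt).max()
print(err)  # stays small (of order eps) uniformly in t
```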
Exercise: Now do the same thing with the Van der Pol oscillator.
In general...
Can we now come up with some methods based on the above “two-timing” method that work for general weakly nonlinear oscillations? Let’s see...
We are looking at:

$$\ddot{x} + x + \epsilon\, h(x, \dot{x}) = 0,$$

which, in the two-timing variables, we can write more compactly as:

$$\left(\partial_\tau + \epsilon\,\partial_T\right)^2 x + x + \epsilon\, h = 0.$$

Subbing in the series expansion

$$x = x_0(\tau, T) + \epsilon\, x_1(\tau, T) + \dots,$$

again we take order by order to give:

$$O(1):\quad \partial_\tau^2 x_0 + x_0 = 0, \qquad O(\epsilon):\quad \partial_\tau^2 x_1 + x_1 = -2\,\partial_T\partial_\tau x_0 - h\!\left(x_0, \partial_\tau x_0\right).$$
We can write the solution to the first equation as the sin+cos combination as before, or we can write it as

$$x_0 = r(T)\cos\!\left(\tau + \phi(T)\right)$$

(show that they are equivalent).
Plugging this into the second equation we have:

$$\partial_\tau^2 x_1 + x_1 = -2\,\partial_T\!\left(-r\sin(\tau+\phi)\right) - h\!\left(r\cos(\tau+\phi),\, -r\sin(\tau+\phi)\right),$$

which is:

$$\partial_\tau^2 x_1 + x_1 = 2r'\sin(\tau+\phi) + 2r\phi'\cos(\tau+\phi) - h\!\left(r\cos(\tau+\phi),\, -r\sin(\tau+\phi)\right).$$
We don’t want any driving terms on the right hand side, so there can’t be any terms proportional to sin(τ+ϕ(T)) or cos(τ+ϕ(T)).
For the first two terms that seems pretty easy... but for the h term... how do we pull out the parts proportional to sin and cos? Let’s just simplify notation a little bit here first. We’ll remember all the functional dependencies and write the h on the right hand side as:

$$h = h\!\left(r\cos(\tau+\phi),\, -r\sin(\tau+\phi)\right),$$

and in fact we can also let τ+ϕ = θ and expand h as a Fourier series in θ:

$$h(\theta) = \sum_{k=0}^{\infty} a_k \cos k\theta + \sum_{k=1}^{\infty} b_k \sin k\theta.$$

To calculate the Fourier coefficients, you need to integrate h against the corresponding mode:

$$a_k = \frac{1}{\pi}\int_0^{2\pi} h(\theta)\cos k\theta\, d\theta, \qquad b_k = \frac{1}{\pi}\int_0^{2\pi} h(\theta)\sin k\theta\, d\theta \qquad (k \ge 1),$$

with $a_0 = \frac{1}{2\pi}\int_0^{2\pi} h(\theta)\, d\theta$.
This looks horrendous, but in fact when we look at a concrete example it will all become clear.
Now looking at the right hand side again, the part proportional to sin θ and cos θ is:

$$\left(2r' - b_1\right)\sin\theta + \left(2r\phi' - a_1\right)\cos\theta,$$

which must vanish, so we must have

$$2r' = b_1, \qquad 2r\phi' = a_1,$$

which says that:

$$r' = \frac{1}{2\pi}\int_0^{2\pi} h\sin\theta\, d\theta \equiv \left\langle h\sin\theta\right\rangle, \qquad r\phi' = \frac{1}{2\pi}\int_0^{2\pi} h\cos\theta\, d\theta \equiv \left\langle h\cos\theta\right\rangle,$$

which is enough information to give us our unknown functions $r(T)$ and $\phi(T)$, which can then be plugged back into $x_0 = r(T)\cos(\tau + \phi(T))$.
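The averages $\langle h\sin\theta\rangle$ and $\langle h\cos\theta\rangle$ can also be computed numerically, which is a handy way of checking your integrals. A Python sketch (the function name and grid size are my own choices):

```python
# Numerically evaluate the averaging formulas
#   r' = <h sin(theta)>,   r*phi' = <h cos(theta)>,
# with x0 = r*cos(theta) and dx0/dtau = -r*sin(theta).
import numpy as np

def averaged_rhs(h, r, n=4096):
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    hv = h(r * np.cos(theta), -r * np.sin(theta))
    r_dot = np.mean(hv * np.sin(theta))       # <h sin theta>
    r_phi_dot = np.mean(hv * np.cos(theta))   # <h cos theta>
    return r_dot, r_phi_dot

# Van der Pol: h(x, xdot) = (x^2 - 1)*xdot.  The averages should match
# r' = r/2 - r^3/8 and r*phi' = 0.
h_vdp = lambda x, xd: (x**2 - 1) * xd
for r in (1.0, 2.0, 3.0):
    print(averaged_rhs(h_vdp, r), r / 2 - r**3 / 8)
```

The mean over a uniform grid is exact here because h is a trigonometric polynomial in θ.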
Let’s try this for the Van der Pol oscillator, for which

$$h(x,\dot{x}) = \left(x^2 - 1\right)\dot{x}, \qquad \text{so} \qquad h = \left(r^2\cos^2\theta - 1\right)\left(-r\sin\theta\right).$$

So plugging this into the above:

$$r' = \left\langle \left(r^2\cos^2\theta - 1\right)\left(-r\sin\theta\right)\sin\theta \right\rangle, \qquad r\phi' = \left\langle \left(r^2\cos^2\theta - 1\right)\left(-r\sin\theta\right)\cos\theta \right\rangle.$$
Taking the first of these we can write the integral as:

$$r' = -\frac{r}{2\pi}\int_0^{2\pi}\left(r^2\cos^2\theta\sin^2\theta - \sin^2\theta\right)d\theta.$$

Integrals of this form are standard:

$$\int_0^{2\pi}\sin^2\theta\, d\theta = \pi, \qquad \int_0^{2\pi}\cos^2\theta\sin^2\theta\, d\theta = \frac{\pi}{4}.$$

So we have

$$r' = -\frac{r}{2\pi}\left(\frac{\pi r^2}{4} - \pi\right) = \frac{r}{2} - \frac{r^3}{8} = \frac{r}{8}\left(4 - r^2\right),$$

and for the second equation we have:

$$r\phi' = -\frac{r}{2\pi}\int_0^{2\pi}\left(r^2\cos^2\theta - 1\right)\sin\theta\cos\theta\, d\theta = 0,$$

so $\phi' = 0$.
To fix the constants of integration we match the initial conditions, writing $x(0) = r(0)\cos\phi(0)$ and $\dot{x}(0) \approx -r(0)\sin\phi(0)$; squaring each of these equations and adding them gives us:

$$r(0)^2 = x(0)^2 + \dot{x}(0)^2,$$

meaning that

$$r(0) = \sqrt{x(0)^2 + \dot{x}(0)^2},$$

taking the positive root. Solving the amplitude equation $r' = \frac{r}{8}\left(4 - r^2\right)$ then fixes:

$$r(T) = \frac{2}{\sqrt{1 + \left(\frac{4}{r(0)^2} - 1\right)e^{-T}}},$$

and finally

$$\phi' = 0,$$

which, because ϕ is then constant (and we can take $\phi(0) = 0$ for an initial condition on the positive x-axis with $\dot{x}(0) = 0$), means that it’s identically zero.
And so the overall solution is:

$$x_0 = r(T)\cos\tau.$$

Converting back into t and ϵ we have:

$$x(t) \approx \frac{2\cos t}{\sqrt{1 + \left(\frac{4}{r(0)^2} - 1\right)e^{-\epsilon t}}}.$$

Note that whatever (nonzero) amplitude we start with, the amplitude tends to 2 as $t \to \infty$: this is the limit cycle.
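A numerical check of the headline prediction (a Python sketch; the values of ϵ, the initial condition, and the time window are illustrative): for small ϵ, the Van der Pol oscillator should settle onto an oscillation of amplitude close to 2, as the averaged equation $r' = \frac{r}{2} - \frac{r^3}{8}$ predicts.

```python
# Late-time amplitude of the weakly nonlinear Van der Pol oscillator:
# the averaged theory predicts it approaches 2.
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05

def vdp(t, s):
    x, y = s
    return [y, -x - eps * (x**2 - 1) * y]

sol = solve_ivp(vdp, (0, 400), [0.5, 0.0], dense_output=True,
                rtol=1e-9, atol=1e-12)
t = np.linspace(350, 400, 2000)   # late times, after transients decay
amp = np.abs(sol.sol(t)[0]).max() # amplitude of the oscillation
print(amp)  # close to 2
```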
The point is that if we have a system of the form

$$\ddot{x} + x = 0,$$

then the natural period of the system is 2π. If you force it at exactly this natural frequency, you are always pushing in the direction that it’s already moving, so the amplitude grows and grows: this is resonance, and it is exactly what produced the secular terms above. On the other hand, if you force it with twice the frequency, then half the time you will be pushing in the direction that it’s going, and half the time you will be pushing it against its motion. Looking at plots of the solutions of

$$\ddot{x} + x = \cos t \qquad \text{and} \qquad \ddot{x} + x = \cos 2t,$$

we see that the blue line, driven at the natural frequency of the system, blows up, while the red line, driven at twice the natural frequency, is only perturbed slightly away from the natural rhythm (in green).
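A quick numerical illustration of this last point (a Python sketch; the forcing amplitude and time window are arbitrary): starting from rest, forcing at the natural frequency produces growth that is linear in t, while forcing at twice the natural frequency stays bounded.

```python
# Resonant vs. non-resonant forcing of x'' + x = cos(omega*t), from rest.
import numpy as np
from scipy.integrate import solve_ivp

def max_response(omega):
    f = lambda t, s: [s[1], -s[0] + np.cos(omega * t)]
    sol = solve_ivp(f, (0, 100), [0.0, 0.0], dense_output=True,
                    rtol=1e-9, atol=1e-12)
    return np.abs(sol.sol(np.linspace(0, 100, 4000))[0]).max()

print(max_response(1.0))  # resonant: grows like t*sin(t)/2, reaches ~50
print(max_response(2.0))  # non-resonant: stays O(1)
```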