The theory of chaos is an extraordinarily broad mathematical topic, and we all have some intuition for what it means when a system is *chaotic*. The ideas of unpredictability, spontaneity, intractability, turbulence, and perhaps randomness all come to mind. But deterministic chaos is somewhat different from what our intuitions would have us believe. If you watch the shape of a flickering flame, the whitewater in a rocky river, or the price of crude oil in North America, you’re definitely seeing behavior which can’t be described without chaos. But you’re also most likely seeing the effects of any number of *random* influences on the system, whether a faltering breeze or some oil speculator’s whimsy. Chaos theory deals with the behavior of *deterministic* systems—that is, systems with no random inputs. All the intricacy and intrigue of chaotic behavior can arise in systems which might seem deceptively uncomplicated, like a pendulum hanging from another pendulum, or three stars orbiting each other.

But if you’ve ever heard of the “butterfly effect” (a term coined by a pioneer of chaos theory, Edward Lorenz), it’s likely your intuition is right about the central feature of deterministic chaos: **chaotic systems have high sensitivity to initial conditions.**

If chaotic systems are so unpredictable and temperamental, how can we possibly make chaos work for us? One answer is encryption.

There are lots of ways to encrypt messages for secure transmission, but the idea of using chaos in encryption is pretty new. One shocking type of chaotic encryption was invented in 1993 by Kevin Cuomo of MIT, who (along with Alan Oppenheim) published a paper [1] outlining a new method for using chaos to send private messages. To understand how Cuomo’s chaotic encryption can work, you first have to believe in a surprising phenomenon called *synchronized chaos.*

Cuomo’s method relies on synchronized chaos, a somewhat mysterious phenomenon reported in a 1990 paper [2] by Louis Pecora and Thomas Carroll at the Naval Research Laboratory. The phenomenon occurs in some situations when part of the *output* of one chaotic system is used as an *input* for a twin chaotic system. If the coupling is set up properly, the second system will lock on to the first and mimic its behavior with uncanny fidelity.

Following Pecora, Carroll, Cuomo, and Oppenheim, we’ll look at synchronization in the chaotic Lorenz system (see my post Edward Lorenz’s Strange Attraction [0] for a deeper dive into the Lorenz system). The system comes from Edward Lorenz’s simplification of an atmospheric convection model, and its intriguing chaotic behavior has been studied for decades. It is defined by the following system of nonlinear differential equations:

$$\dot{x} = \sigma(y - x),$$
$$\dot{y} = rx - y - xz,$$
$$\dot{z} = xy - bz,$$
where *σ*, *r*, and *b* are positive parameters related to the physics of convection. We’ll use *σ* = 10, *r* = 28, and *b* = 8/3, since those are the values Lorenz originally used to study the system. The variables *x*, *y*, and *z* make up the state of the system at each instant in time, so think of them as coordinates in state-space. Solutions to the Lorenz equations always outline a chaotic *attractor*, shown in Figure 1.

**Figure 1. **A solution to the Lorenz equations with initial conditions (x, y, z) = (0, 1, 0), found by numerical integration. The solutions (called *trajectories*) alternate irregularly between the “wings” of the attractor without ever intersecting themselves or one another.
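Since the circuits will later be replaced by computer simulations anyway, here is a minimal sketch of that numerical integration in plain Python: a fixed-step fourth-order Runge-Kutta solver applied to the Lorenz equations. The step size and trajectory length are my own choices, not values from the post.

```python
SIGMA, R, B = 10.0, 28.0, 8.0 / 3.0  # Lorenz's original parameters

def lorenz(s):
    # The Lorenz equations: returns (dx/dt, dy/dt, dz/dt) for state s = (x, y, z).
    x, y, z = s
    return (SIGMA * (y - x), R * x - y - x * z, x * y - B * z)

def rk4_step(f, s, dt):
    # One classical fourth-order Runge-Kutta step for an autonomous system.
    k1 = f(s)
    k2 = f(tuple(v + 0.5 * dt * k for v, k in zip(s, k1)))
    k3 = f(tuple(v + 0.5 * dt * k for v, k in zip(s, k2)))
    k4 = f(tuple(v + dt * k for v, k in zip(s, k3)))
    return tuple(v + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for v, a, b, c, d in zip(s, k1, k2, k3, k4))

def trajectory(s0, dt, steps):
    path = [s0]
    for _ in range(steps):
        path.append(rk4_step(lorenz, path[-1], dt))
    return path

# Same initial condition as Figure 1; integrates from t = 0 to t = 20.
traj = trajectory((0.0, 1.0, 0.0), 0.001, 20000)
```

Plotting the (x, z) pairs of `traj` reproduces the familiar two-winged attractor; the trajectory stays bounded but never settles or repeats.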

Pecora and Carroll built an electrical circuit to model the behavior of the Lorenz system. I’ll call that circuit the Talker (T). The circuit T is designed to generate its output voltage signals (X_{T}, Y_{T}, Z_{T}) according to the Lorenz equations:

$$\dot{X}_T = \sigma(Y_T - X_T),$$
$$\dot{Y}_T = rX_T - Y_T - X_T Z_T,$$
$$\dot{Z}_T = X_T Y_T - bZ_T.$$
So as we would expect, measuring the voltages (X_{T}, Y_{T}, Z_{T}) shows the chaotic signature of the Lorenz system. Next, an almost identical circuit is built as a second chaotic system, called Copycat (C), with output signals (X_{C}, Y_{C}, Z_{C}). But there’s one key difference between the Talker and the Copycat: in the circuit C, the output X_{C} is snipped and replaced with the Talker’s signal X_{T} where it feeds into the parts that generate Y_{C} and Z_{C}. The resulting situation is shown in Figure 2.

**Figure 2. **The Copycat circuit may be synchronized with the Talker circuit by feeding X_{T} into the components of the Copycat which generate Y_{C} and Z_{C}. See Figures 3 and 5 in Ref. [1] for more detail.

The effect of feeding the signal X_{T} into places where the Copycat “expects” to receive the signal X_{C} is to alter the Copycat’s governing equations:

$$\dot{X}_C = \sigma(Y_C - X_C),$$
$$\dot{Y}_C = rX_T - Y_C - X_T Z_C,$$
$$\dot{Z}_C = X_T Y_C - bZ_C.$$
Notice that these equations which determine the state of the Copycat, (X_{C}, Y_{C}, Z_{C}), now depend on X_{T} coming from the Talker. In this situation T and C are said to be *synchronized*, and the outputs (X_{C}, Y_{C}, Z_{C}) are approximations of the outputs (X_{T}, Y_{T}, Z_{T}). The agreement between T and C is clear from a plot of the magnitude of the Copycat’s error, $\sqrt{(X_C - X_T)^2 + (Y_C - Y_T)^2 + (Z_C - Z_T)^2}$, as in Figure 3.

**Figure 3. **The magnitude of the difference between the states of C and T as a function of time. From the initial separation, the error quickly drops to hover around 0.05. The distance between the trajectories of C and T is tiny compared to the attractor they outline (about 10^{4} times smaller), so the synchronization is working extremely well.

The Copycat circuit is receiving only **partial** information about the state of the Talker circuit, but **all** of its outputs will synchronize with the Talker circuit outputs. Take a moment to think about that. It’s almost as if C has total knowledge of the state of T *and* is able to copy it almost exactly—despite being a chaotic system itself!
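The synchronization described above can be checked numerically. The sketch below (my own toy simulation, not the circuit in Ref. [2]) integrates the Talker and Copycat together with a fixed-step RK4 solver: the Copycat’s y and z equations receive the Talker’s x, and the separation between the two states collapses even though they start far apart.

```python
import math

SIGMA, R, B = 10.0, 28.0, 8.0 / 3.0

def coupled(s):
    # First three components: Talker (x, y, z). Last three: Copycat (xc, yc, zc).
    # The Talker's x is substituted wherever the Copycat's own x would appear
    # in the yc and zc equations -- the Pecora-Carroll x-drive coupling.
    x, y, z, xc, yc, zc = s
    return (SIGMA * (y - x), R * x - y - x * z, x * y - B * z,
            SIGMA * (yc - xc), R * x - yc - x * zc, x * yc - B * zc)

def rk4_step(f, s, dt):
    k1 = f(s)
    k2 = f(tuple(v + 0.5 * dt * k for v, k in zip(s, k1)))
    k3 = f(tuple(v + 0.5 * dt * k for v, k in zip(s, k2)))
    k4 = f(tuple(v + dt * k for v, k in zip(s, k3)))
    return tuple(v + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for v, a, b, c, d in zip(s, k1, k2, k3, k4))

# Talker and Copycat deliberately start far apart in state-space.
state = (0.0, 1.0, 0.0, 10.0, -5.0, 30.0)
dt = 0.002
errors = []
for _ in range(25000):  # integrate to t = 50
    state = rk4_step(coupled, state, dt)
    errors.append(math.sqrt(sum((state[i] - state[i + 3]) ** 2
                                for i in range(3))))
```

By the end of the run the error has fallen by many orders of magnitude: the Copycat has locked on to the Talker despite receiving only the single signal x.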

###### ——

Things get even stranger, and this is where the possibility for chaotic encryption comes in. Cuomo realized the potential for synchronization even when a signal *other* than X_{T} is fed into the Copycat circuit.

Since we’re now talking about encrypted communication, we’ll again consider two identically prepared systems which are governed by the Lorenz equations. They’re called Talker (T), which has outputs (X_{T}, Y_{T}, Z_{T}), and Receiver (R), which has outputs (X_{R}, Y_{R}, Z_{R}). Now imagine you have some message *m(t)* that you want to encrypt and send securely—we’ll use the example of the audio clip below.

This audio signal is our message *m(t)*. In our case, the Talker and Receiver will be computer simulations which solve the Lorenz equations numerically, instead of the electrical circuits used by the inventors of this encryption method.

Now we construct a new signal *s(t)* = *m(t)* + X_{T} by adding the message to an output of Talker, being careful to scale *m(t)* and X_{T} so that *m(t)* is much smaller than X_{T} on average. The result is some unrecognizable junk, since the chaotic signal X_{T} drowns out the message. Take a listen:

Now for the prestige: if you feed the junk *s(t)* into the Receiver exactly as X_{T} was fed into the Copycat above, the synchronization **still works.** That’s almost absurd—the Receiver isn’t even being fed an output of the Talker anymore, but rather some junk signal that contains both X_{T} and our message, yet it still recovers the state of the Talker. The Receiver is now governed by the equations:

$$\dot{X}_R = \sigma(Y_R - X_R),$$
$$\dot{Y}_R = r\,s(t) - Y_R - s(t)Z_R,$$
$$\dot{Z}_R = s(t)Y_R - bZ_R.$$
The outputs (X_{R}, Y_{R}, Z_{R}) of the Receiver circuit are approximations of the outputs (X_{T}, Y_{T}, Z_{T}) of the Talker circuit, so X_{R} ≈ X_{T}. And remember that *m(t)* = *s(t)* − X_{T} from when we constructed *s(t)*. So by replacing X_{T} with its approximation X_{R}, we can create a reconstruction $\hat{m}(t)$ of the original message: $\hat{m}(t) = s(t) - X_R \approx m(t)$. Have a listen to our final reconstructed audio signal.

The Figure below compares the original audio signal to the reconstructed signal.

**Figure 4.** The message *m(t)* is shown in red, along with the reconstructed message in blue. The reconstructed message is slightly noisier than the original, but completely recognizable.
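The whole encrypt-and-decrypt pipeline can be sketched in one simulation. This is a toy version under stated assumptions: a weak sine tone stands in for the audio clip, a fixed-step RK4 solver stands in for the circuits, and the amplitudes, time spans, and initial conditions are my own choices.

```python
import math

SIGMA, R, B = 10.0, 28.0, 8.0 / 3.0

def message(t):
    # Toy stand-in for the audio message: a weak tone, much smaller
    # than the chaotic carrier.
    return 0.05 * math.sin(40.0 * t)

def coupled(t, st):
    # Talker (x, y, z) runs freely. Receiver (xr, yr, zr) is driven by the
    # transmitted junk s(t) = m(t) + X_T wherever its own xr would
    # otherwise appear in the yr and zr equations.
    x, y, z, xr, yr, zr = st
    s = x + message(t)
    return (SIGMA * (y - x), R * x - y - x * z, x * y - B * z,
            SIGMA * (yr - xr), R * s - yr - s * zr, s * yr - B * zr)

def rk4_step(f, t, st, dt):
    k1 = f(t, st)
    k2 = f(t + 0.5 * dt, tuple(v + 0.5 * dt * k for v, k in zip(st, k1)))
    k3 = f(t + 0.5 * dt, tuple(v + 0.5 * dt * k for v, k in zip(st, k2)))
    k4 = f(t + dt, tuple(v + dt * k for v, k in zip(st, k3)))
    return tuple(v + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for v, a, b, c, d in zip(st, k1, k2, k3, k4))

st, dt = (0.0, 1.0, 0.0, 5.0, 5.0, 5.0), 0.002
recovered_err, carrier = [], []
for i in range(30000):  # integrate to t = 60
    t = i * dt
    st = rk4_step(coupled, t, st, dt)
    if t > 20.0:  # discard the synchronization transient
        t2 = t + dt
        s_val = st[0] + message(t2)  # transmitted junk s(t)
        m_hat = s_val - st[3]        # reconstruction: s(t) - X_R
        recovered_err.append(m_hat - message(t2))
        carrier.append(st[0])

def rms(xs):
    return math.sqrt(sum(v * v for v in xs) / len(xs))
```

After the transient, the reconstruction error `rms(recovered_err)` is a small fraction of the chaotic carrier `rms(carrier)`: the chaos has been subtracted away, leaving the message plus a little synchronization noise, just like the slightly noisy reconstruction in Figure 4.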

Let’s summarize:

1. We started with two identical Lorenz systems, T and R, and a message that we wanted to encrypt.
2. We added chaotic noise from T to the message to produce an encrypted signal for transmission.
3. We used the transmitted noisy signal as an input to part of the receiving system.
4. We used part of the receiver’s output to reconstruct the original message.

There’s no fear of an interceptor deciphering our signal; the chaotic system R itself is the key to decoding the message. Presto!

How important is it that T and R be synchronized systems, anyway? Could the reconstruction still work if they were slightly different? Well, let’s see what happens when we change just one of the Lorenz parameters in the Receiver system by 5%. With the same message signal, we get this output:

The result is totally obscured, although if you listen closely you can hear some of the beat from the original music. As the systems become more poorly synchronized, the fidelity of the reconstructed message drops extremely rapidly. Keep in mind that even when T and R are operating with identical parameters, synchronized as well as possible, they cannot be *perfectly* synchronized. The output of R will never exactly mimic the output of T, since R is still operating under different governing equations than T is.
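The parameter-mismatch experiment is easy to reproduce numerically. The sketch below (again a toy simulation with my own step sizes and initial conditions) drives a Receiver with the Talker’s x signal, once with matching parameters and once with *r* increased by 5%, and compares the steady-state synchronization error.

```python
import math

SIGMA, B = 10.0, 8.0 / 3.0
R_TALKER = 28.0

def coupled(st, r_recv):
    # Talker uses r = 28; the Receiver uses its own r value and is driven
    # by the Talker's x (no message here -- just testing synchronization).
    x, y, z, xr, yr, zr = st
    return (SIGMA * (y - x), R_TALKER * x - y - x * z, x * y - B * z,
            SIGMA * (yr - xr), r_recv * x - yr - x * zr, x * yr - B * zr)

def rk4_step(st, dt, r_recv):
    f = lambda s: coupled(s, r_recv)
    k1 = f(st)
    k2 = f(tuple(v + 0.5 * dt * k for v, k in zip(st, k1)))
    k3 = f(tuple(v + 0.5 * dt * k for v, k in zip(st, k2)))
    k4 = f(tuple(v + dt * k for v, k in zip(st, k3)))
    return tuple(v + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for v, a, b, c, d in zip(st, k1, k2, k3, k4))

def sync_error(r_recv, dt=0.002, steps=25000, skip=10000):
    # RMS error in x between Talker and Receiver, measured after a transient.
    st = (0.0, 1.0, 0.0, 5.0, 5.0, 5.0)
    errs = []
    for i in range(steps):
        st = rk4_step(st, dt, r_recv)
        if i >= skip:
            errs.append(st[0] - st[3])
    return math.sqrt(sum(e * e for e in errs) / len(errs))

matched = sync_error(28.0)
mismatched = sync_error(28.0 * 1.05)  # the 5% parameter change from above
```

The matched receiver tracks the talker to within numerical precision, while the 5% mismatch leaves a persistent error comparable to the signal itself, which is exactly why the decrypted audio comes out garbled.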

So the method works well with a musical sample, but let’s see how well it works with a sample of dialogue. After all, it’s probably more realistic that your secret messages will be spoken rather than strummed. Let’s use the following audio sample.

The chaotic encrypted signal is:

After decrypting the message, we hear:

The result is completely comprehensible.

You can implement this method yourself without too much trouble, in case you want to send private messages to a friend. Start by simulating the Lorenz system, solving the equations numerically. Then convert your private message into any time-series, like the audio signal I used above, and add the chaotic noise from your simulation to the message. In order to decrypt the message, your friend will need to solve the receiver equations for (X_{R}, Y_{R}, Z_{R}) numerically with the encrypted signal as an input. That’s it—as long as you agree on the Lorenz parameters *σ*, *r*, and *b* ahead of time, it just takes two numerical solvers for you to communicate privately with chaos.

The possibilities for this synchronization-based chaotic encryption go far beyond what I’ve demonstrated here. The Lorenz system itself arises from certain laser systems, allowing for some optical communications to be encrypted with chaos [3]. Image encryption is also a relatively straightforward application of the method when pixel intensities are used as the message *m(t)* [4]. Further, the synchronization phenomenon has been observed in other chaotic systems, including *discrete* systems which are defined by iterative maps instead of differential equations. There are even some remarkable uses of synchronized chaos in neurophysiology, drawing analogies between chaotic systems’ “knowledge” of surrounding systems and models of the chaotic interdependent behavior in real neural networks. The strangeness of chaos can definitely work in our favor, as long as we’re careful with it.

**References and Further Reading**

[0] Edward Lorenz’s Strange Attraction.

[1] Cuomo, Kevin M.; Oppenheim, Alan V. “Circuit Implementation of Synchronized Chaos with Applications to Communications.” Phys. Rev. Lett. V. 71, No. 1, pp. 65-68. July 1993.

[2] Pecora, Louis M.; Carroll, Thomas L. “Synchronization in Chaotic Systems.” Phys. Rev. Lett. V. 64, No. 8, pp. 821-824. February 1990.

[3] Mirasso, Claudio R.; Colet, Pere; García Fernández, Priscila. “Synchronization of Chaotic Semiconductor Lasers: Application to Encoded Communications.” IEEE Photonics Technology Letters V. 8, No. 2, pp. 299-301. February 1996.

[4] Al-Maadeed, Somaya; Al-Ali, Afnan; Abdalla, Turki. “A New Chaos-Based Image-Encryption and Compression Algorithm.” Journal of Electrical and Computer Engineering Vol. 2012. January 2012.

[5] Strogatz, Stephen H. *Nonlinear Dynamics and Chaos*. 1994.

[6] Lorenz, Edward N. “Deterministic Nonperiodic Flow.” Journal of The Atmospheric Sciences V. 20, pp. 130-141. March 1963.

[7] Greene, Kate. “Encryption Using Chaos.” MIT Technology Review. January 2006.

[8] Skarda, Christine A.; Freeman, Walter J. “How brains make chaos in order to make sense of the world.” Behavioral and Brain Sciences V. 10, pp. 161-195. 1987.

Excellent post! One question: how finely do you sample? I can imagine that with the wrong level of sampling you would get a poor encryption: sample too finely and the values don’t change quickly enough, or sample at a large but unlucky period and the values will coincidentally be close together (say, due to the nature of the attractor). Did you experience this at all in your tests?

Thank you! And that’s a good question, because choosing a good sampling rate was certainly a problem in the tests. For good reconstruction, the frequencies in *m(t)* should be high compared to the frequencies in the Lorenz solutions. So there are two sampling rates that matter. The first is the rate at which the message *m(t)* is sampled (I used 44.1 kHz for digital audio), which (along with the duration of the message) fixes the length of *m(t)*. The issue then becomes generating a sample of a Lorenz solution which is the same length as *m(t)*. Because the attractor is chaotic, sampling too slowly (and not even just at unlucky periods) will return values which have jumped all around the attractor, and as a result very high frequencies will show up in the spectrum. That makes reconstruction impossible. In practice this became a problem when I sampled below about 300 “Hz” (“Hz” is in quotes because it’s not referring to a real-life time domain, but rather to the Lorenz system’s own time domain). Likewise, like you said, sampling too quickly doesn’t allow the solution to vary enough for great encryption, and reconstruction is very easy. That doesn’t become a noticeable problem until you’re sampling at much higher frequencies, well above 100 “kHz”. I used 500 “Hz” for the figures in the post, but values up to 100 “kHz” worked just as well. Hope that helps!

> For good reconstruction, the frequencies in *m(t)* should be high compared to the frequencies in the Lorenz solutions.

Does that mean that a simple high-pass filter of the cryptotext would reconstruct the plaintext? That would be too easy, so I’m probably misunderstanding something.

I’m glad you asked this, because I should really clarify. Basically, you’re right. I accidentally missed an important point in my previous comment when I said “for good reconstruction, the frequencies in m(t) should be high compared to the frequencies in the Lorenz solutions.” What would have been better to say is “for the easiest possible reconstruction, the frequencies in m(t) should be high compared to the frequencies in the Lorenz solutions. However, this leaves the cryptotext vulnerable to high-pass filtering.” Yes, the reconstruction is easy, but it’s also easy for the interceptor!

The truth is that the frequencies in the message should be higher than the dominant (very low) frequencies in the Lorenz solutions, but not so high as to be identifiable as completely above the chaotic frequencies. In practice this balance is not hard to strike, mainly because the message is intentionally very weak compared to the chaotic noise. Not to mention that we have lots of freedom in how often we sample the Lorenz solutions. And for messages like music or speech, the frequency range is wide enough that high-pass filtering is likely to damage the message hidden in the noise.