Transitioning from SISO to MIMO
Mark Elo (email@example.com) is director, RF Products for Keithley Instruments (www.keithley.com). He joined the company in 2006 after working for Agilent Technologies in marketing and R&D management positions. Elo holds a bachelor's degree in engineering with honors from the University of Salford, Lancashire, England, and an MBA from Heriot-Watt University in Edinburgh, Scotland.
In the beginning, life was simple, life was analog. One radio occupied a single piece of spectrum, and the bandwidth of that spectrum was proportional to the amplitude or bandwidth of the information being transmitted. Then, some time in the last quarter of the twentieth century, commercial radio went digital. Information could be compressed into smaller bandwidths, slices of spectrum could be shared using time division multiple access (TDMA) schemes, and seamless conversations could be held at speeds of roughly 70 kph, and—if you were lucky—you could move from one cell in a network to another without dropping the call.
New challenges emerged for the designer. Engineers and standards bodies began talking about how power behaves over the time of a burst or a slot. Spectral effects due to the transient nature of TDMA, compounded with the effects of the chosen modulation scheme rapidly transitioning from one state to another, became the new measures of quality. But at least standards such as GSM kept the amplitude of the modulation constant.
All was good. A cellular or mobile phone was used for voice-based communication and life soon became simple again—but with better capacity and a little more spectral efficiency.
Then the Internet came along. It offered an infinite pool of information—every question could be answered and people wanted more and more of it. The goal now was to deliver data as quickly and efficiently as possible, and voice was just another type of data. Suddenly, a phone was not just a phone and a computer was not just a computer; they were now called subscriber units or user appliances. Access to the new IP- (Internet protocol) based networks was now defined as fixed (the computer has a data cable), nomadic (the computer has no data cable but must remain stationary), or roaming (you have wireless broadband access to data while walking and driving around).
Today, we live in a hybrid environment oscillating between nomadic and mobile worlds. Technologies such as OFDM (Orthogonal Frequency Division Multiplex) and MIMO (Multiple-Input, Multiple-Output) have been successfully deployed to allow nomadic users high speed access to data. The next step, and the primary focus of standards such as WiMAX (802.16e) and LTE (3GPP Rel. 8), is to close the gap between nomadic access and true mobile access using the same fundamental building blocks—OFDM and MIMO.
The new challenges are significant. Let’s begin with a look at OFDM, which came into my life in the mid-nineties. Its first real commercial proponents were the European terrestrial digital TV community, which touted its immunity to multi-path and its promise of single-frequency television networks that would free up much-needed bandwidth. An understanding of this technology fell into place once I understood that the objective was to slow the symbol rate while maintaining a high data rate by using an inverse FFT to transmit slow symbols in parallel rather than fast symbols in serial. Of course, this process dramatically increases the amount of baseband processing required, not only because of the requirement to convert a serial data stream into a parallel set of symbols but also because the signal’s composition is dynamic, depending on the condition of the channel. For example, if we assume the basic principle that “more is good,” we should aim to transmit all our symbols using 64QAM. However, higher-order modulation types require a better carrier-to-noise ratio. If the channel is noisy or attenuated in some way due to multi-path, or there is simply a large distance between transmitter and receiver, then significant symbol errors will occur. QPSK (Quadrature Phase Shift Keying)—primarily due to its simplicity, with only four states—is much more robust when the carrier-to-noise ratio is significantly degraded.
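The serial-to-parallel idea above can be sketched in a few lines of NumPy. This is a minimal, idealized illustration (no channel, no equalization); the sizes and names are my own, not taken from any standard.

```python
import numpy as np

n_subcarriers = 64   # parallel "slow" symbol streams
cp_len = 16          # cyclic prefix guards against multi-path

# Map a serial bit stream onto QPSK symbols (2 bits per symbol).
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2 * n_subcarriers)
qpsk = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

# Serial-to-parallel: one QPSK symbol per subcarrier, then the inverse FFT
# turns the frequency-domain symbols into one time-domain OFDM symbol.
time_symbol = np.fft.ifft(qpsk)

# Prepend the cyclic prefix (a copy of the symbol's tail).
ofdm_symbol = np.concatenate([time_symbol[-cp_len:], time_symbol])

# The receiver strips the prefix and applies an FFT to recover the symbols.
recovered = np.fft.fft(ofdm_symbol[cp_len:])
print(np.allclose(recovered, qpsk))  # True over this ideal channel
```

Each subcarrier lasts a full symbol period, so the individual streams really are slow even though the aggregate data rate stays high.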
For both LTE and WiMAX, we can occupy bandwidths up to 20 MHz. Given that the actual transmission channel will not be flat across this bandwidth, both systems will assign groups of carriers with either QPSK, 16QAM, or 64QAM modulation schemes, depending on the frequency response of the channel. This combination of dynamically changing modulation types can also introduce a great deal of complexity into the analog portion of the radio design, especially when the engineer is making cost, power, and dynamic range tradeoffs with respect to the choice of amplifier. The peak-to-average ratio of the signal is very large in many cases, because there is a significant chance that many subcarriers will sum in phase. AM-to-AM and AM-to-PM effects must be understood to characterize the performance of the amplifier correctly. The large peak-to-average ratio is a tough problem to solve for the handset designer, as power consumption and cost are critical factors in the design. LTE attempts to solve the high peak-to-average ratio problem by employing a different flavor of OFDM in the up-link called SC-FDMA (Single Carrier Frequency Division Multiple Access). The signal is still essentially an OFDM signal, but I like to think of it as groups of subchannels transposed from the frequency domain to the time domain. In essence, five OFDM subcarriers become one OFDM subcarrier with five times the original bandwidth; within one OFDM symbol period, you have the information that represents the symbols of the original five subcarriers. This method reduces the peak-to-average ratio of the signal, thus simplifying the choice of amplifier in terms of cost and power. However, the tradeoff is that you’ve increased the baseband processing significantly, so any power/cost saving may be taken up by the larger baseband device required.
MIMO at first seems daunting. However, it becomes an easy concept to understand once you get past the matrix math and figure out that to improve the carrier-to-noise ratio of a transmission, all that needs to be done is to transmit more of the same signal (duplicate the signal) onto multiple carriers or, to improve throughput, to transmit more data onto multiple carriers. If all the signals are transmitted at the same frequency and occupy the same bandwidth, there will also be some spectral efficiency. To do this effectively, use multiple transmitters and receivers; then, build a model of the channel by transmitting a known signal, subtract out the channel effects, and resolve for the originally transmitted streams and symbols. Voila!
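The sound-the-channel-then-subtract-it-out recipe can be written down directly for a noise-free 2×2 case. This is a deliberately idealized sketch (identity-matrix pilots, zero-forcing by matrix inversion); all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# A hypothetical flat-fading 2x2 channel: two transmit streams
# arriving at two receive antennas on the same frequency.
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)

# Step 1: sound the channel with known pilots. Sending the identity
# matrix (each antenna alone for one symbol period) lets the receiver
# observe the channel matrix directly.
pilots = np.eye(2)
H_est = H @ pilots

# Step 2: transmit two independent QPSK symbols simultaneously.
streams = np.array([[1 + 1j], [-1 + 1j]]) / np.sqrt(2)
received = H @ streams

# Step 3: subtract out the channel (zero-forcing: invert the estimate)
# to resolve the originally transmitted streams.
recovered = np.linalg.inv(H_est) @ received
print(np.allclose(recovered, streams))  # True in this noise-free sketch
```

Real receivers face noise and ill-conditioned channels, so they use more careful estimators and detectors, but the matrix math is no deeper than this.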
The process or method of transmitting more of the same signal in MIMO (i.e., a more robust signal) is called Space-Time Block Coding. Algorithmically, each stream is encoded orthogonally, which helps the receiver distinguish between the multiple signals or streams from the get-go.
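The best-known space-time block code is Alamouti's two-antenna scheme, which makes the orthogonal-encoding idea concrete. The sketch below is noise-free and uses one receive antenna; the symbol values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
# Complex gains from the two transmit antennas to one receive antenna.
h1, h2 = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)

s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)

# Alamouti encoding: over two symbol periods the antennas send
# orthogonal combinations of the same symbol pair.
#   period 1: antenna 1 sends s1,        antenna 2 sends s2
#   period 2: antenna 1 sends -conj(s2), antenna 2 sends conj(s1)
r1 = h1 * s1 + h2 * s2
r2 = h1 * (-np.conj(s2)) + h2 * np.conj(s1)

# Linear combining separates the symbols, and the power of both
# paths adds up (diversity gain), improving the carrier-to-noise ratio.
gain = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / gain
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / gain
print(np.allclose([s1_hat, s2_hat], [s1, s2]))  # True without noise
```

The orthogonality is what lets two periods of simple arithmetic undo the channel with no matrix inversion at all.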
To improve capacity or throughput, Spatial Multiplexing can be employed. This means we are transmitting multiple sets of data on each stream, thus improving the symbol throughput.
The tradeoff for the system designer is choosing the transmission technique when faced with ever-changing channel conditions. If the carrier-to-noise ratio is low, then the data rate is slowed by employing Space Time Block Coding; if the carrier-to-noise ratio is high, then throughput is significantly increased by using Spatial Multiplexing. Also, don’t forget we are still using OFDM, so groups of subcarriers or subchannels will be assigned different modulation types and the type of transmission will be based on the carrier-to-noise ratio. As you can see, the complexity of the signal and its dynamics can become overwhelming.
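The tradeoff described above is, at its heart, a lookup keyed on the reported carrier-to-noise ratio. The toy decision rule below illustrates the shape of that logic; the thresholds are invented for illustration and are not taken from the WiMAX or LTE specifications.

```python
def pick_scheme(cnr_db: float) -> tuple:
    """Return an illustrative (MIMO mode, modulation) pair for a
    reported carrier-to-noise ratio. Thresholds are hypothetical."""
    if cnr_db < 10.0:
        # Poor channel: trade data rate for robustness.
        return ("space-time block coding", "QPSK")
    if cnr_db < 18.0:
        # Middling channel: multiplex, but with a modest constellation.
        return ("spatial multiplexing", "16QAM")
    # Good channel: maximize throughput.
    return ("spatial multiplexing", "64QAM")

for cnr in (6.0, 14.0, 25.0):
    print(cnr, pick_scheme(cnr))
```

In a real system this decision is made per group of subcarriers and revisited continuously as the channel changes, which is where the overwhelming dynamics come from.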
Collaborative modes of operation can also be employed. This means the transmitter is operating in a SISO mode (i.e., only using a “Single Output”), but it can use the same frequency spectrum as another transmitter, provided that the receiver is a MIMO device. WiMAX makes provision for this in the up-link and uses it as a technique for improving up-link capacity.
So far, we’ve only talked about open-loop MIMO systems. Closed-loop MIMO systems rely on a feedback loop to improve the signal quality further, which is especially useful in high interference environments. In this case, the improvement is created by weighting the phase and amplitude on each antenna and creating a beam of RF energy. The more antennas you have, the more accurate the beam becomes. To determine how the beam is formed, the channel is “sounded” (i.e., a known signal is transmitted that allows the receiver to build a model of what the channel looks like at a specific instant in time). The channel response is fed back to the transmitter, usually by using a lookup table so the uplink capacity isn’t completely consumed by the channel model. The transmitter then uses the model to weight its transmitters appropriately.
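The weight-and-beam step can be sketched with maximum-ratio transmission, one common way to turn a sounded channel into per-antenna phase and amplitude weights. The antenna count and the narrowband channel model here are my own simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_tx = 4  # more antennas -> a sharper, more accurate beam

# Channel sounding: the receiver measures one complex gain per
# transmit antenna and feeds the result back.
h = (rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)) / np.sqrt(2)

# Maximum-ratio transmission: weight each antenna's phase and
# amplitude so every path adds coherently at the receiver.
w = np.conj(h) / np.linalg.norm(h)

beamformed = abs(h @ w)                     # coherent sum: equals ||h||
unweighted = abs(h.sum()) / np.sqrt(n_tx)   # same total power, no feedback
print(beamformed >= unweighted)             # True: feedback buys array gain
```

Quantizing `w` against a shared codebook and feeding back only the index is what keeps the up-link overhead small, as the lookup-table remark above suggests.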
For the design engineer, the new challenge is all about packing as many radios as possible into the smallest space and spacing the antennas appropriately while, at the same time, optimizing for power consumption and avoiding a parts count that makes the device prohibitively expensive or unreliable. Baseband systems are also becoming more and more complex, with blurred lines between the MAC layer and the PHY layer of the architecture. There is a lot more to design, observe, and measure, from the dynamically changing modulation of the OFDM signal through to the multiple antennas using different transmission schemes, all based on the performance of the channel. Life may be hard, but at least it’s extremely spectrally efficient.