
01/06/2011

Consider the costs of not migrating your aging test system

Test system migration and modernization doesn’t have to be expensive and fraught with hassle. In fact, carefully planned migration can maximize test-system efficiency, performance and readiness while providing meaningful cost savings.

In aerospace and defense, missions evolve but one thing stays the same: the need to ensure system readiness. As system technology becomes more complex, it becomes more difficult to meet this challenge.

Carefully planned instrumentation migration and modernization can maximize test-system efficiency, performance and readiness, and provide meaningful cost savings. There are certainly risks associated with test program set (TPS) migration and modernization, but the decision to forgo modernization also carries a variety of risks.

You can mitigate the risks of migration by applying the success factors presented here: an assessment of the barriers and benefits, a comprehensive equipment inventory, a well-timed plan, an awareness of business cycles, and the ability (and willingness) to drive change. To help minimize the potential disruptions associated with technology refresh, you can also choose to work with an experienced test-and-measurement partner. Ultimately, the decision to replace outdated technology can provide meaningful cost reductions in areas such as system throughput, the number of test sets, and calibration, maintenance and repair.

The Real Power of Microwave CAE Software

 
August 2008

 


Rob Lefebvre is the manager of the Advanced Design System schematic and layout platform at Agilent Technologies Inc.

During a CAE seminar in the late 1990s, I asked a large group of RF and microwave engineers about the software that they actually use for design.

Overwhelmingly, these engineers responded that they use schematic entry and linear simulation but little else. When asked specifically about nonlinear and electromagnetic simulation, only about ten percent were actually using these tools. The rest of the engineers described these simulators as expert tools that were difficult to use. Probing more deeply, I discovered that many of these same engineers were having problems with excessive design turns, even though simulation technology which would have prevented these turns was available to them.

What has changed over the past ten years in microwave CAE software? All of the vendors, Agilent included, regularly work on improving usability, each in its own way. Vendors and customers will argue over which is better, but all have made progress. Although these advances have given us usability improvements, we have not seen the kinds of revolutionary improvements enjoyed by other software industries. We have all witnessed attempts by smaller companies to bring usability to the forefront [1] [2] [3]. We have seen their products settle into mature software platforms, complete with their own legacy issues.

Over this same decade, Linux has gone from an obscure, text-only operating system to a major platform, complete with a modern user interface. Windows has proceeded through Windows 98, NT, 2000, ME, XP, and Vista. Microsoft Office 97 (7.0/8.0) was brand new and was the first Office version that did not require floppy disk installation. Virtually all web-enabled applications have been developed entirely in this time. In contrast, RF and microwave software, by and large, looks like it did at the turn of the millennium.

The Case for Change

While user interfaces in microwave CAE software have remained relatively stable, important strides have been made in simulation technology. Today, more designs incorporate integrated chip, package, module and board systems which are more difficult to design, analyze, debug and deliver in high volume than the typical design of 10 years ago. At the same time, due to market and economic pressures, designers are expected to deliver product with first-pass success in less time.

In recent visits to companies facing these challenges, I regularly see the elaborate tool design flows and processes these companies have established to try to keep ahead of these pressures. Even though these customers have access to tools from all major vendors that contain all the simulation power they need to help them achieve first-pass success, they still resort to error-prone manual processes for much of their flow.

One challenge that many designers face is in the area of signal integrity. Engineers need to analyze complex signal paths including such features as chip landings, bond wires, solder balls and traces. Electromagnetic simulators have seen dramatic increases in capacity and performance and are now capable of handling immensely complex circuits, yet these simulators are underutilized. Often the EM analysis is performed by a senior EM modeling expert. This type of workflow can make it difficult for circuit designers to combine the expert’s electromagnetic results with their existing circuit simulations to characterize a complete system.

An even bigger challenge is multi-chip and multi-module design. Designers want to bring together ICs built in different technologies onto a single module. However, a lack of interoperability between products creates a gap: chip, module, and interconnect layouts are often completed using different CAE tools, each with its own database. Even when a single environment is used, the best platforms still require some manual processes to combine electromagnetic, circuit, and system data into a meaningful complete simulation when analyzing complex, multi-technology designs.

Microwave CAE vendors are all working to provide interface and interoperability features that match the sophistication of today’s simulators. Foundries are beginning to embrace standards [4]. Most importantly, engineers are excited about the extra power that these new interfaces and standards will bring to their design flow.

The Future of Microwave CAE Software

CAE software has always followed the lead of the general consumer software market for usability improvements. The current generation of consumer software is adaptive, uses sophisticated visual effects, and aims for zero learning curve. Software giants have long realized that the key to their growth is expanding the size of the market by making their software usable by everyone.

One of the latest trends in user interface design is adaptability. Today’s consumer software interfaces anticipate what users might want to do and present options automatically. Some software even goes further and actually performs actions that it “thinks” the users will want to do. Automatic spelling or grammar correction is a simple example. By combining adaptability with nonstop visual feedback, today’s interfaces allow children to run game software that is more complex and has more functionality than any CAD program.

Similarly, today’s users of consumer software expect to move data from one application to another seamlessly. Documents which in the 1980s had to be transferred using finicky import/export processes are now available with a simple copy/paste or email attachment. Open standards are commonplace: HTML, PDF, MPEG, MIME, XML, SQL—the list goes on. Consumer software that is not compatible with standards does not do well in the marketplace. On the other hand, CAE software vendors tend to focus on raw technology and performance as a first priority, putting user experience second. This tendency goes well beyond just microwave CAE software and into the industry as a whole.

The lack of interoperability in PDKs compounds this problem, forcing engineers to choose between staying within a single tool or risking difficulties when transferring data between tools. As a result, simulator technology still remains relatively underutilized despite all of the advances and technological breakthroughs, contributing to many of the manual processes that consume engineering time.

Now instead, picture a microwave CAE environment which uses an adaptive interface. This software would anticipate the next commands you might want and put them at your fingertips. When you are ready to simulate your design, you may discover that the software has already tried a few simulations (maybe a lot as parallel computing becomes mainstream) with combined EM, circuit, and system simulations just a mouse click away. The software works automatically with measurement hardware to verify overall design performance as you develop prototypes. Since this environment uses a common database with interoperable libraries, this interface continues seamlessly throughout your flow as you move IP from one vendor’s tool to another.

Fortunately, from my perspective, our CAE industry appears poised for some major advances in the next few years. Many companies, such as my employer Agilent, are enhancing their products with these goals in mind. I encourage readers to help shape these products by talking to their existing CAE vendor about their particular needs for interoperability and usability.

References

[1] "Non-Linear Simulation on Every Desktop," Microwave Journal, August 2000.
[2] "Integrated Software for Electromagnetic Simulation," Rob Lefebvre, Microwave Journal, November 1998, pp. 136-140.
[3] "The Current State of CAD - A Users' Perspective," Microwave Engineering Europe, November 1999.
[4] “Group’s ‘interoperable’ analog flow turns up heat on Cadence,” Mark LaPedus, EE Times, June 16, 2008.



The Importance of Average vs. Peak Performance in Cellular Wireless

November 2008


Moray Rumney obtained his BSc in Electronics from Heriot-Watt University in Edinburgh in 1984. He then joined Hewlett-Packard / Agilent Technologies and has worked for them for 24 years. During this time Moray has been involved in the architecture and design of the RF signal generators, signal analyzers and system simulators used in the cellular communications R&D and manufacturing test industries. Moray started his wireless standardisation activities for HP in 1991 when he joined ETSI and contributed to the standardisation of the GSM air interface and type approval tests. In 1999 he joined 3GPP and has been a significant contributor to the development of the W-CDMA radio specifications and corresponding conformance tests. This standardisation work has evolved to incorporate HSDPA, HSUPA and now LTE. In addition to standards work Moray is a technology advisor within Agilent responsible for Agilent's next generation RF and system simulator products.
 

When I was growing up my favourite book was undoubtedly the “Guinness Book of World Records.” I used to dip in and out of it searching for the craziest extremes of human experience, be they through human achievement or simply the result of extremes of natural human variation. It was not until many years later that I discovered a much less well-known but arguably much more important book called the “Mackeson Book of Averages.” Mackeson is also a brewer of stout, and hence a rival to the much larger Guinness Company, so they thought they would poke fun at Guinness’ second most famous product, the Book of Records. It has to be said that averages are far less exciting than records, but there is also a human fascination in comparing ourselves to the norm of society, be it salary, height, or size of some other part of our anatomy. The other thing in favour of a book of averages is that it does not need to be reprinted nearly so often, since the averages change much more slowly than the peaks.

And so it is with wireless. An examination of the growth in peak data rates from the introduction of GSM in 1992 up until IMT-Advanced arrives around 2015 shows a staggering 100,000x growth from 9.6 kbps to around 1 Gbps. This growth is plotted in the top black line of Figure 1. To the casual observer this is proof that Moore’s law must be driving wireless performance, but with the doubling occurring every 16 months rather than every two years. If we look further, this growth in peak data rates is driven by two factors: an approximate 100x increase in peak spectral efficiency and a 1,000x increase in channel bandwidths. The peak efficiency gains are primarily due to higher order modulation, more efficient channel coding and techniques such as adaptive modulation and coding (AMC) and hybrid automatic repeat request (HARQ).
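As a back-of-the-envelope check of that doubling rate, using only the endpoint figures quoted above:

\[
\log_2\!\left(\frac{1\ \text{Gbps}}{9.6\ \text{kbps}}\right) \approx 16.7\ \text{doublings}, \qquad \frac{(2015-1992)\times 12\ \text{months}}{16.7} \approx 16.5\ \text{months per doubling}
\]

which is consistent with the roughly 16-month doubling period noted above.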

Figure 1 Growth of peak data rate and cell capacity.

This all sounds very impressive, but there is a snag in turning these peak figures into typical or average performance. Once an electromagnetic signal is launched into the ether it behaves according to the electromagnetic propagation theory developed in the 1860s by James Clerk Maxwell, a son of my home town of Edinburgh, Scotland. In these days of ever-decreasing cell sizes, his infamous equations - which tormented us as students - predict that signals will often travel further than we might want and into the next cell. At this point Claude Shannon enters the picture. He, along with Hartley, developed the Shannon-Hartley capacity theorem in the 1940s, which predicts the error-free capacity of a communications channel as a logarithmic function of the signal to noise ratio (SNR). In cellular systems the dominant source of noise is unwanted signals from other users, primarily from adjacent cells sharing the same frequency and time. The difficulty this creates is that most of the techniques used to drive up peak spectral efficiency also rely on ever higher SNR, which means the peak spectral efficiency figures can only be realised in an ever-shrinking area of low interference near the cell centre.
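For reference, the Shannon-Hartley theorem states that the error-free capacity C of a channel of bandwidth B grows only logarithmically with the signal to noise ratio:

\[
C = B \log_2\!\left(1 + \mathrm{SNR}\right)
\]

At high SNR, doubling the capacity within a fixed bandwidth therefore requires roughly squaring the linear SNR, which is why the high peak-efficiency modes are confined to the small, low-interference region near the cell centre.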

A CDF showing the typical distribution of interference in an urban macrocell is given in Figure 2. The median geometry factor, being the ratio of wanted to unwanted signals plus noise, is 5 dB, yet many high-efficiency scenarios require geometry factors of 10 dB, 15 dB and even higher. However, the distribution shows these levels are experienced by less than 10 percent of the population. Conversely, users further out, and particularly those unfortunate enough to be on a cell boundary, experience much worse performance than is possible at the cell centre. Like it or not, this variation in performance is a law of physics which will not change until someone invents the spatially aware electromagnetic wave. This would conveniently stop in its tracks at the nominal cell boundary, or better still, bounce back in again to create the rich multipath needed for MIMO. A Faraday cage around the cell would achieve this but there are implications for inter-cell mobility.

Figure 2 Distribution of geometry factor in typical urban cell.

Back in the real world it is a fact that the average efficiency of the cell remains much lower than the peaks. The average efficiency is a highly significant quantity since, when multiplied by the available spectrum, it predicts the capacity of the cell. The red trace in Figure 1 plots the growth in average efficiency alongside the blue trace, which shows the growth in available spectrum [1]. Both traces are normalised to single-band GSM in 1992. The product of these represents the growth in cell capacity and is shown in the yellow trace. For the period from 1992 to around 2002 when EDGE was established, the growth in system capacity matched the growth in peak data rates, which over that period was a massive 50x comprising 7x gains in both efficiency and spectrum. The actual number of users and their data rates is a further function of the allocated bandwidth per user, but if we hold this constant for illustrative purposes the system would support seven times as many users, each at a rate of around 70 kbps – quite an achievement.

From 2002 to the present day there have been further rises in average efficiency and spectrum, but a 10x gap has grown between the cell capacity and the peak rates. This means that in a loaded cell the typical user will on average experience only 10% of the new higher peak data rate. From the present day through 2015 this gap grows to around 90x, meaning the typical user will experience just over 1% of the peak data rates for which the system was designed. In motoring terms that’s like being sold a supercar that can travel at 180 mph but when you take it out on the road the network conditions restrict you to 2 mph. The exception would be if you were the only user on the road (only user in the cell) and came across a nice straight bit of road (excellent radio conditions) - then you could put your foot down and get what you paid for! For the rest of the time, however, you would have to sit in traffic moving no faster than cheaper cars designed with a top speed of perhaps 10 mph. Offering users the possibility of high performance, charging them for the infrastructure to support it, and then not being able to consistently deliver the rates or coverage does not make for a sustainable business model for mass adoption. In lightly loaded systems much higher peaks will be seen by the lucky few, but this is not sustainable commercially.

So what is to be done? Is mobile broadband really out of reach or are there alternatives? To investigate this, let’s consider the following HSDPA simulation [2]. We start with a cell supporting the maximum 15 codes using 64QAM and a mobile with a single receiver and an equalizer. This combination has a peak capacity of around 20 Mbps for the lucky single user in the right radio conditions. Next we load the cell with 34 randomly distributed users. What now is the aggregate capacity of the cell and the median throughput per user? Having bought his 20 Mbps-capable phone, Joe Public probably expects such performance irrespective of other users in the cell. His view might suggest an aggregate capacity of 34 x 20 Mbps = 680 Mbps. If only! Using our understanding of the air interface we know that the peak cell capacity is shared amongst the users by a scheduling algorithm. Without further analysis we might then reasonably conclude that the aggregate capacity remains at 20 Mbps, providing a median throughput per user of around 588 kbps – quite attractive.

Unfortunately it’s not that simple. The random distribution of users means that very few are in ideal radio conditions and many are in quite poor conditions near the cell edge. Taking into account the typical interference distribution, this particular simulation concluded that the aggregate cell capacity was only 1.3 Mbps (0.26 bps/Hz), giving a median data rate of only 40 kbps per Joe. That’s less than dial-up!
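A minimal Python sketch of the arithmetic in this example; the 20 Mbps peak, 34 users and 1.3 Mbps aggregate figures are simply the numbers quoted above from the cited simulation, and the script does nothing more than reproduce the naive and actual divisions:

peak_rate_mbps = 20.0      # single-user peak: 15 codes, 64QAM, equalized single receiver
users = 34                 # randomly distributed users in the loaded cell

naive_aggregate_mbps = users * peak_rate_mbps          # 680 Mbps: Joe's view, every user at peak
shared_median_kbps = peak_rate_mbps * 1000.0 / users   # ~588 kbps: peak capacity shared by the scheduler

simulated_aggregate_mbps = 1.3                         # simulation result, ~0.26 bps/Hz in 5 MHz
simulated_median_kbps = simulated_aggregate_mbps * 1000.0 / users   # ~38 kbps, the ~40 kbps quoted above

print(naive_aggregate_mbps, round(shared_median_kbps), round(simulated_median_kbps))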

Let’s now consider adding a femtocell layer to the macrocell. In the simulation 96 femtocells, each with the same 20 Mbps peak rate as the macrocell, were randomly distributed. This resulted in 24 of the mobiles switching from the macrocell to the femtocells, with the other 72 femtocells remaining unconnected. The impact on performance is dramatic. Twenty-four mobiles now have their own private base station in close proximity, enabling the median data rate to rise a staggering 200x to 8 Mbps. The aggregate throughput of the macrocell and femtocells is now 270 Mbps. The 10 mobiles remaining on the macrocell see their median throughput rise from 50 kbps to 170 kbps – so everyone is a winner. A CDF showing the distribution of throughput with and without femtocells is shown in Figure 3, with expanded detail in Figure 4.

Figure 3 Distribution of throughput for cell with and without femtocell layer.

Figure 4 Expanded distribution of throughput for cell with and without femtocell layer.

This simulation clearly demonstrates how femtocells can dramatically boost data rates and overall system capacity for nomadic users. Clearly there remains room for improvement in the macrocell with enhancements such as interference cancellation, receive diversity, spatial multiplexing (MIMO) and beamforming, as well as gains from further spectrum growth. Even so, the upside from all these techniques, many of which are costly to implement, complex to operate and power hungry, might be on the order of 6x, and can’t begin to compete with the potential released by the divide-and-conquer approach of femtocells.

Conclusion

It is relatively easy to increase the peak performance of wireless, much like the top speed of cars on an open road. The challenge however is to improve the average. In motoring terms this would be like trying to double the average speed of traffic during rush hour – a task involving redesign of the entire transport ecosystem rather than just the design of a car itself. The days when improving macrocell average efficiency was relatively easy might be nearly over. However, adding user-financed femtocells to the network which benefit from improved radio conditions is a very attractive alternative for providing a true broadband experience for the nomadic user. The offloading of traffic from the macrocell also means that those users who need wide area coverage and mobility will also see improved performance. Finally, although femtocells offer a new paradigm in wireless they are not without their challenges. Interference mitigation, security, backhaul neutrality and the business model are some of the more prominent issues the industry needs to solve before femtocells become a preferred solution.

References

1. Spectrum and efficiency figures vary widely by geography. These figures are broadly indicative of the industry. Spectrum growth is based on a typical European model and includes growth identified at WARC-07.
2. 3GPP RAN WG4 Tdoc R4-081344



X-parameters: Commercial implementations of the latest technology enable mainstream applications

September 2009


Dr. David E. Root received BS degrees in physics and mathematics, and, in 1986, his PhD degree in physics, all from MIT. He joined Hewlett-Packard Co. (now Agilent Technologies) in 1985, where he has held both technical and management positions. He is presently Principal Research Scientist and Modeling Architect at Agilent Technologies’ High Frequency Technology Center in Santa Rosa, CA. His current responsibilities include nonlinear behavioral and device modeling, large-signal simulation, and nonlinear measurements for new technical capabilities and business opportunities for Agilent. David was elected IEEE Fellow in 2002 “for contributions to nonlinear modeling of active semiconductor devices.”

To comment or ask Dr. Root a question, use the comment link at the bottom of the entry. The first 5 people to comment will receive a copy of the Electrical Engineering Handbook (please include your e-mail and mailing address).

Introduction

X-parameter technology has developed rapidly since its pioneering introduction by Agilent with the Nonlinear Vector Network Analyzer (NVNA) in 2008. See [8] for a recent introduction. Agilent provides a complete set of mainstream interoperable SW and HW tools based on X-parameters that are already redefining how the industry characterizes, models, and designs nonlinear components and systems. Several real customer applications are presented illustrating the power and ease-of-use of X-parameters to solve a broad spectrum of important industry problems where modern components exhibit both high-frequency and nonlinear behavior. Moreover, with newly available simulation-based X-parameter design flow capabilities in Agilent ADS 2009U1 and significantly augmented NVNA-based X-parameter measurement capability, the many benefits of the X-parameter paradigm are now extended to a much wider set of components and customer applications than ever before.

Integrating Power Amplifiers into Cell Phones

The drive for improved battery life in personal communication devices requires the constituent power amplifiers to operate more efficiently. The price of efficiency is nonlinearity, namely the generation by the PA of distortion products in-band and also at harmonics that can interfere with the proper functioning of the cell phone. A critical problem for the industry is how to easily integrate such PAs into a handset and ensure, at the design stage, that the amplifier will still meet the overall system specifications when it interacts with other components, such as additional amplifiers or the antenna, in the phone. A concrete example of this problem is a dual-band GSM/EDGE power amplifier manufactured by Skyworks for integration into a cell phone manufactured by Sony-Ericsson [1]. Sony-Ericsson needed to characterize the effects of the amplifier output mismatch at the fundamental frequency and its implications for both power added efficiency (PAE) and the level of second harmonic distortion produced by the amplifier at the output. Without expensive, cumbersome, time-consuming, and ultimately impractical harmonic load-pull characterization, there was no systematic way to solve this problem short of building and testing the phone. Sony-Ericsson designer Dr. Joakim Eriksson had read about X-parameters in the technical literature and asked Agilent to help by applying this technology to his problem. The Skyworks amplifier is shown in Fig. 1a. An X-parameter model of the full amplifier, including all control pins, was constructed from NVNA measurements. Comparison of the model simulation to data-sheet characteristics is shown in Fig. 1b. Sony-Ericsson used the IP-protected model to predict the output match as a function of the phase of signals incident into the GSM_output port while the amplifier was driven hard into compression. The harmonic levels of distortion components produced were also simulated. The prediction of the X-parameters for the mismatch under drive is illustrated by the red elliptical shape in Fig. 1c. The previous best industry-standard methodology (“Hot S-parameters”) is shown in blue. Independent validation measurements using the NVNA are the colored symbols. The bottom line is that X-parameters predict mismatch under large input drive; Hot S22 does not.

 

 

Figure 1

X-parameters solved the Sony-Ericsson problem. The process of characterizing the amplifier and two others from different manufacturers on the NVNA, extracting the X-parameters, creating the complete PA model, and predicting the mismatch and other figures of merit (FOMs) in ADS took three days. The data acquisition time at the customer site using racks of equipment, including load-pull systems, had taken one month. Moreover, the X-parameter solution was much more complete. It provided a fully functional, measurement-based nonlinear model of the amplifier that could be freely shared without compromising IP, and re-used for a much wider range of applications and computations in ADS. Dr. Eriksson was so impressed with the benefits and new capabilities that he exclaimed, “We didn’t think this was possible!”

The conclusion is that X-parameters enable predictive nonlinear design of important nonlinear systems from fundamentally nonlinear constituent building blocks. X-parameters solve, now, important industry problems, more completely, with more benefits, and in a fraction of the time it would take to deploy much less comprehensive industry-standard solutions. This is why major companies are working to integrate X-parameters into their mainstream characterization, modeling, and design flows. As an example of how component providers are moving, Agilent Technologies will selectively offer GaAs and InP MMICs to the external market with accompanying X-parameter models. In fact, both the HBT amplifier (Agilent part number HMMC 5200) and the integrated InP 50 GHz mixer (Agilent part number 1GC1-8068) will be among the first ICs available with X-parameter models. See www.agilent.com/find/mmic for more information.

The X-parameters of a component allow system integrators to design-in the part and compare how well (or how poorly) the part works in the system. When used in ADS, the X-parameters serve as a fully interactive “nonlinear electronic data sheet” that provides dramatically more of the component information necessary for large-signal applications than can be provided by stacks of paper or an Excel spreadsheet. Using X-parameters in ADS eliminates expensive and time-consuming breadboards of the actual component. The electronic datasheet benefit is also a potential competitive advantage for the amplifier provider, who can provide downloadable “virtual X-parameter samples” of their component to their customers. X-parameters completely protect the IP of the component, but remain faithful to the actual nonlinear performance (if measured) or to the models from which they were generated (if generated from simulation). This represents a significant evolution of the electronic eco-system, including component manufacturers and system integrators.

X-parameters enable the prediction of nonlinear figures of merit (FOMs) of cascaded nonlinear interacting functional blocks, as in the Sony-Ericsson example. Let’s take adjacent channel power ratio (ACPR) as a specific case. ACPR is a scalar FOM. It is not generally possible to predict, say, ACPR of an entire chain of nonlinear components from knowledge only of the ACPR of the constituent parts. X-parameters contain the vector (magnitude and phase) properties of distortion from which predictions can be made, using ADS, about how components interact and how distortion propagates through chains of nonlinear components. With X-parameters, it is possible to predict, using ADS, not only the ACPR of the component, but also how the ACPR varies due to mismatch effects that it might encounter when inserted into a circuit or system design. In fact, X-parameters enable the cascading of nonlinear components just as S-parameters do for linear components. Therefore, the overall nonlinear FOMs of a system can be computed with high accuracy in the design stage, from knowledge only of the X-parameters of the constituent nonlinear components. This is a game-changing proposition for nonlinear design.
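For readers interested in the underlying formalism, one widely published form of the single-tone X-parameter (polyharmonic distortion) model expresses the scattered wave B at port p and harmonic m in terms of the large-signal drive A11 and the smaller incident tones Aqn; the notation below follows the form commonly seen in the X-parameter literature and is offered here only as a sketch:

\[
B_{p,m} = X^{(F)}_{p,m}(|A_{1,1}|)\,P^{m} + \sum_{q,n} X^{(S)}_{p,m;q,n}(|A_{1,1}|)\,P^{m-n} A_{q,n} + \sum_{q,n} X^{(T)}_{p,m;q,n}(|A_{1,1}|)\,P^{m+n} A^{*}_{q,n}, \qquad P = e^{\,j\angle A_{1,1}}
\]

where the sums run over the small incident tones (the large-signal drive A11 itself is handled by the X(F) term). Because the X(S) and X(T) terms carry both magnitude and phase, cascading two such blocks preserves exactly the vector distortion information that a scalar FOM such as ACPR discards.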

Generation X: Second Generation X-parameter technology

Simulation-based X-parameter design flow: high-fidelity IP-protected behavioral models for hierarchical design and simulation speedup.

With the release of second generation X-parameter technologies, there are now two complete bottom-up design verification flows available from Agilent. These are depicted in Fig. 2. A comparison of the major advantages of the measurement-based and simulation-based X-parameter design flows is given in Fig. 3.

 

Figure 2

 

Figure 3

Generating X-parameters from schematics

The new simulation-based X-parameter design flow in ADS2009U1 provides a host of additional benefits addressing long-expressed but previously unmet customer design and simulation needs. Nonlinear RF circuits and systems can be extremely complicated, containing hundreds or even thousands of nonlinear components. Simulating an entire circuit at the transistor level of description may not be possible, given the complexity of the thousands of nonlinear equations. Even if the entire circuit can be simulated, the simulation may be so slow as to preclude or limit the designer’s ability to efficiently optimize its performance. Now, with ADS2009U1, it is possible to apply X-parameters as a hierarchical design enabler directly within the simulator. A new “X-parameter generation” capability is built into ADS2009U1 that allows the user to convert complicated component models from schematics directly into X-parameters! This enables the performance characteristics of a sensitive design to be captured and sent to potential customers with complete IP security and fidelity of function. The new ADS capability is sufficiently general to generate multi-tone and multi-port (mixer and converter) simulation-based models. An example is given of an actual InP integrated 50 GHz mixer (Agilent part # 1GC1-8068) in Fig. 4. This circuit contains over 40 heterojunction bipolar transistors realized in Agilent’s proprietary InP IC technology, each of which is represented with an Agilent HBT model [2]. The accuracy of the X-parameter representation compared to the detailed circuit-level model is typical. Moreover, the X-parameter model maintains significant accuracy compared to the circuit model even under significant mismatching of the IF port.

 

Figure 4

Improved X-parameter simulation component for dramatic speed improvement

A significantly improved X-parameter simulation component in ADS2009U1 can now take full advantage of the inherent speed of X-parameters. X-parameters are inherently fast because they describe the component behavior in the mathematical language native to the simulation algorithms used to solve the nonlinear problem most efficiently [3], in this case harmonic balance and circuit envelope analysis. In some cases, simulation speedup by a factor of 100 has been achieved by replacing complex circuits and “compact” transistor models with X-parameters. The X-parameters are high-fidelity behavioral representations of the models from which they are generated. In fact, X-parameters can effectively replace all the various point behavioral models previously offered in ADS and provide many additional benefits. Reduction of complexity while maintaining accuracy enables simulation of larger parts or even the entire design, rather than having to make do simulating only a subset of functional blocks and hoping their mutual interaction can be ignored.

Prior even to fabricating a device such as a PA, it is possible to start designing systems around it by starting from circuit-level models of the component, then converting them into X-parameters and designing efficiently at the next level of abstraction. Eventually, when the component is actually manufactured, it is possible simply to substitute actual NVNA-measured X-parameters for the virtual X-parameters to provide a bottom-up, detailed, measurement-based verification.

Hierarchical design of nonlinear RF systems with X-parameters

X-parameters enable a hierarchical nonlinear design flow, for which there is no generic equivalent. It is quite analogous to what is common practice for S-parameters in the design of linear systems from linear components. For example, the X-parameters of individual amplifier stages can be combined to produce a single X-parameter representation of the cascaded structure. This in turn can be combined with X-parameters of a mixer or converter, and the entire front-end of an RF nonlinear system can be hierarchically extracted and reused. An example of an RF system designed with a measurement-based X-parameter model of an actual Agilent HMMC 5200 HBT amplifier and a simulation-based X-parameter model of an actual Agilent 1GC1-8068 InP 50 GHz mixer is shown in Fig. 5.

 

Figure 5

Second Generation X-parameter measurement capabilities

Load-dependent X-parameters

It is often desired to design matching networks for high-power transistors and PAs so as to optimize scalar performance FOMs such as power delivered and power added efficiency. High-power transistors have characteristic output impedances closer to 1 ohm than the typical 50 ohm environment of traditional VNA-based receivers, so the measurement is more complicated. Traditionally, load-pull has been the measurement methodology of choice for such purposes. However, even with complete load-pull data it is not generally possible to generate full two-port nonlinear functional block models of the component for generic design purposes. For example, classic load-pull does not provide sufficient information to design and optimize multi-stage amplifiers where accurate input-to-output phase and scattered waves, including harmonics at the input port, are required. By enabling X-parameter measurements to work seamlessly with automatic tuners, X-parameters systematically solve these problems and provide much more comprehensive component information immediately usable in the ADS simulator for nonlinear design. “The data is the model.”

Earlier this year, Agilent and channel partner Maury Microwave teamed up to introduce another industry breakthrough: arbitrary load-dependent X-parameters. This is an interoperable SW/HW solution involving Maury ATS load-pull SW, Maury load tuners, the new Agilent NVNA option 520, and the enhanced X-parameter simulation component in ADS. A picture of the HW is shown in Fig. 6. The Maury SW runs on the Agilent PNA-X based NVNA. Simple graphical input allows complex load states to be specified throughout the Smith chart. X-parameters are measured and, using embedded Agilent IP, calibrated for uncontrolled harmonic impedances presented to the DUT by the tuner and corrected for any imperfection in achieving the desired gridded impedance states. A complete nonlinear two-port functional block X-parameter model representing the component’s nonlinear behavior versus power, frequency, complex load, and bias is instantly created from these measurements. A simple drag-and-drop file transfer is all that is needed to begin immediate nonlinear design of matching circuits, multi-stage amplifiers, etc. The seamless link between advanced nonlinear measurements and nonlinear design capability prompted Gary Simpson, Director of RF Device Characterization at Maury Microwave Corporation, to proclaim this commercial solution “a breakthrough for the industry.”

 

Figure 6

This solution is highly automated, extremely accurate, and provides much more benefit than conventional load-pull. It reduces to S-parameters in the small-signal limit. Unlike conventional load-pull, it includes full input-to-output phase information and the magnitudes and cross-frequency phases of harmonics as well. Not only is the new X-parameter solution a superset of S-parameters and load-pull, it provides a much more comprehensive, instant, generic large-signal model for design in ADS.

A concrete example of applying arbitrary load-dependent X-parameters to a commercial packaged transistor is shown in Fig. 7. The X-parameters are able to predict the detailed current and voltage waveforms of the transistor at any impedance over the entire Smith chart! The model can be cascaded even under very strong mismatch conditions and predict the effects of inter-component interaction – perfect for multi-stage design. This new design approach is complementary to conventional active device models. It is especially attractive where there are no good “compact” device SPICE or ADS transistor models available, as is the case for novel technologies (e.g. GaN) or new component realizations [5,6]. This approach enables measurement-based simulation of time-domain waveforms under very strong compression at virtually any impedance. For the first time, practical commercial measurement tools can provide the information that large-signal simulators produce; it is essentially “experimental harmonic balance.”

 

Figure 7

As an added bonus, the capabilities of arbitrary load-dependent X-parameters are so powerful that they can also predict the effects of independently tuning the harmonic terminations of the components, even though these terminations are not independently controlled during the characterization process [7]. This is validated in Fig. 8 for a 10 W GaN transistor. This example demonstrates that for many high-power device and amplifier applications, it is not necessary to use time-consuming, expensive harmonic load-pull systems, which require many more load states (each load at each port at each harmonic controlled separately), to obtain the sensitivity of device performance to harmonic terminations. This is another case where X-parameters cause customers to say, “We didn’t think this was possible.”

 

Figure 8

Multi-tone X-parameter capabilities, already available in ADS2009U1, will soon be available as an application on the NVNA. This will enable the magnitude and phase of tone-spacing-dependent intermodulation distortion to be characterized and used immediately by the ADS2009U1 X-parameter simulation component. This calibrated nonlinear cross-frequency vector distortion information can be used to design distortion cancellation circuits and to apply other design principles, such as derivative superposition [9], that previously could be applied only if there was confidence in accurate nonlinear device models. Extending the NVNA to measure three-port devices, such as mixers and converters, is also underway. This capability will fundamentally change the way these foundational components are characterized and designed into RF systems.

Additional X-parameter measurement enhancements

NVNA instruments are now available in 13.5 GHz, 26 GHz, 43.5 GHz, and 50 GHz versions. X-parameters can therefore be measured to twice the frequency that was possible at the introduction of the original NVNA in 2008. Moreover, with a new Agilent application note, customers can now measure X-parameters on power devices up to 250 W! This makes the benefits of X-parameters applicable to market segments including base station amplifiers and high-power transistors.

X-parameter based transistor modeling

X-parameters offer significant value by providing a complementary approach to transistor modeling, compared to the traditional physically-based or empirical “compact” models. Compact models, such as the Berkeley BSIM4 MOSFET model [10] and the Agilent HBT compound heterojunction bipolar transistor model [2], are very comprehensive models with scores of nonlinear equations. They each have over 100 parameters that must be extracted to associate the model with a given process technology. Accurate state-of-the-art models take years to develop, and can then take days or weeks to properly extract. There is an urgent need for fast, accurate, and easily extractable nonlinear models from measurements of devices for which there is not a good compact model. This is especially true in new technology areas, such as GaN. Fortunately, there is a simple X-parameter based procedure that provides an attractive alternative. Simply measure the X-parameters of the component on the NVNA, drag-and-drop the resulting file into ADS, and you’re off designing nonlinear circuits immediately. Figures 7 and 8 are examples of such models.

Another example, from an NVNA and X-parameter customer at National Nano Device Labs in Taiwan, is the extraction of X-parameters from a novel annular Si transistor for which there was no available model. The results were reported at the International Microwave Symposium in June 2009 [5]. The X-parameter model demonstrated excellent prediction of intermodulation distortion measurements over a wide range of input power, and also predicted very well the detailed time-domain distorted waveforms under very large-signal excitations. Results were reported by Gunyan et al. at the 2009 IEEE ARFTG conference validating arbitrary load-dependent X-parameters for a GaAs MESFET transistor under WCDMA stimulus [6]. These examples illustrate the power of X-parameters as an accurate, technology-independent device modeling approach. With X-parameters, there is no need to wait for a Ph.D. expert to implement and debug a new compact transistor model. There is no need to spend days or weeks of a modeling engineer’s time to extract the hundreds of parameters of a conventional model in order to design with the component. X-parameters are much easier, more automated, and more repeatable to extract from measurements on the NVNA than standard compact models are to extract from DC and linear S-parameter measurements. Moreover, measurement-based X-parameter models are extremely accurate because the nonlinear data, properly characterized by the NVNA, are the basis for simulating the component behavior when used for design in ADS.

Conclusions

X-parameters have moved from exciting research demonstrations to mainstream commercial measurement instruments (Agilent NVNA) and EDA design tools (Agilent ADS) [11]. Interoperable NVNA-based X-parameter measurements and simulation-based X-parameter design flows in Agilent ADS provide the same ease-of-use as familiar linear S-parameters but with unprecedented power and much greater benefits. X-parameters unify linear S-parameters, nonlinear load-pull, and modern waveform measurements for more complete nonlinear characterization and predictive nonlinear design of RF and microwave components and systems. Agilent has developed industry-leading products for each piece of the nonlinear puzzle, with extensive built-in IP, and designed them to fit together seamlessly. Dramatic time and cost savings have been realized using X-parameters to do familiar things better. Completely new capabilities engendered by X-parameters enable novel characterization, design, and verification approaches, providing substantial competitive advantages to customers who both create and consume nonlinear components, from transistors to RF and microwave nonlinear systems.

Acknowledgement

The author thanks the extended Agilent X-parameter team for their contributions and Agilent management for support.

References

 

[1] J. Horn, J. Verspecht, D. Gunyan , L. Betts,  D. E. Root, and Joakim Eriksson, “X-Parameter Measurement and Simulation of a GSM Handset Amplifier,” 2008 European Microwave Conference Digest Amsterdam, October, 2008

[2] M. Iwamoto and D. Root, Agilent HBT Model: Overview. Compact Model Council Meeting, December, 2006 http://www.eigroup.org/cmc/minutes/4q06_presentations/agilent_hbt_model_overview_cmc.pdf

[3] D. E. Root, J. Wood, and N. Tufillaro, “New Techniques for Non-Linear Behavioral Modeling of Microwave/RF ICs from Simulation and Nonlinear Microwave Measurements,” in 40th ACM/IEEE Design Automation Conference Proceedings, Anaheim, CA, USA, June 2003, pp. 85-90

[4] G. Simpson, J. Horn, D. Gunyan, and D.E. Root, “Load-Pull + NVNA = Enhanced X-Parameters for PA Designs with High Mismatch and Technology-Independent Large-Signal Device Models,” IEEE ARFTG Conference, Portland, OR December 2008

 

[5] Chiu et al., “Characterization of annular-structure RF LDMOS transistors using polyharmonic distortion model,” IEEE MTT-S International Microwave Symposium Digest, 2009, pp. 87-90.

 

[6] D. Gunyan et al, “Nonlinear Validation of Arbitrary Load X-parameter and Measurement-Based Device Models,” IEEE MTT-S ARFTG Conference, Boston, MA, June 2009.

 

[7] J. Horn et al, “Harmonic Load-Tuning Predictions from X-parameters,” IEEE PA Symposium, San Diego, Sept. 2009

[8] D. E. Root et al., “X-parameters: The new paradigm for measurement, modeling, and design of nonlinear RF and microwave components,” Microwave Engineering Europe, December 2008, pp. 16-21. www.mwee.com

[9] D. Webster, J. Scott, and D. Haigh, “Control of circuit distortion by the derivative superposition method,” IEEE Microwave and Guided Wave Letters, Vol. 6, No. 3, March 1996, pp. 123-125.

[10] http://www-device.eecs.berkeley.edu/~bsim3/bsim4.html             

[11] http://www.agilent.com/find/nvna  and http://www.agilent.com/find/eesof-ads2009-update1

 

Solving the RFIC Design for Yield and Verification Dilemma

July 2010

 
Paul Colestock has over 20 years of experience in wireless and high speed semiconductor product and technology development, EDA and marketing.  He is currently RFIC EDA Product Planning and Marketing Lead for Agilent EEsof focused on helping position the company as the pre-eminent EDA supplier for wireless RF subsystem design and analysis. He has held leadership positions at Cadence Design Systems, Jazz Semiconductor, Hesson Labs (co-founder), and Silvaco International.

 

 

To comment or ask Paul Colestock a question, use the comment link at the bottom of the entry.

Abstract

This article presents the role and evolution of simulation-based performance verification and yield for today’s highly integrated RFICs for digital wireless communications. Along the way, we will look at this problem from the foundry, EDA and designer perspectives to hopefully give a comprehensive picture of where we are today and what choices exist to improve RFIC verification and design for yield.

Introduction

My first experience with real RF design was as an electrical engineering student intern at Texas Instruments (TI) during one summer in the early 1980s. I was anxious to put my HP-41CV, running a program based on Allen and Medley’s “Microwave Circuit Design Using Programmable Calculators,” to work on a real microwave circuit design. Instead, I was introduced to a then-new computer-aided design program called Touchstone [23], which ran on a TI PC. That was the good news. The bad news was that in order to make sure the circuit worked as simulated, I had to first figure out where to get the latest version of the MESFET model. Fortunately, the GaAs fab was co-located with the circuit design team, so it seemed like a good place to start. But instead of a model file, I was handed the “golden” wafer of the day and shown to the test lab where I could take all the data I wanted to build my own Spice model. How did we handle yield for our finished products? Every part we delivered was tweaked and tuned by hand.

Just imagine this scenario for modern CMOS RFIC designs. I think we lost the tall, thin RF design engineer around the 0.25 um CMOS node. The point of this story is that if things had remained the same we never would have realized such explosive growth in the wireless IC market. Luckily, over the years, the RF design process has evolved. The advances in EDA, semiconductor manufacturing, design, and test have yielded great commercial and consumer benefit, but they have also created divisions between the various design tasks and fragmented the information and knowledge needed to keep pace with the demand for faster, smaller, cheaper, and more connected wireless devices. The dilemma over how to best verify advanced RFICs and design for yield is a direct consequence of this evolution. Today’s complex high-performance, low-power, low-cost, and high-volume RFIC requirements continue to put tremendous strain on the ecosystem to provide a solution to this dilemma.

Simulation at the Center

Figure 1 depicts the various sources of requirements and information needed as part of the RFIC design and verification process. Let’s look at the importance of each of these and where we are with respect to integrating and using the information necessary to enable proper verification and design-for-yield at the RFIC design level. 

 



 

Figure 1. Shown here is the simulation-centric RFIC design ecosystem.

System/Product Design

System- and product-level design tools for electronics, or the Electronic System Level (ESL) category of Electronic Design Automation (EDA) tools, are available from various vendors. However, few are focused on accurately modeling the entire wireless RF subsystem as shown in Figure 2. Most of the attention is either focused outside this subsystem entirely, or only on baseband algorithmic development or other DSP functions that may reside on the radio itself.

In this paradigm, simple models are used for the radio and RF front-end blocks. While more specific tools for the RF architecture design (radio and front-end module) exist, more often than not spreadsheets or numerical programming languages seem to be the tools of choice. There are two main weaknesses to this scheme. Where general tools are used, the information flow between system and circuit design is manual, and iterations targeted at improving either performance or yield requirements are prohibitively time-consuming. On the other hand, where more RF-specific tools are used [1], the lack of a standard interface for signals and specifications between system and circuit simulation environments limits the development and business model for standards-based or custom wireless verification intellectual property (IP). It also means that performance information captured at the circuit level has little chance of being used for overall system verification.
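To make this concrete, the following minimal Python sketch shows the kind of ad hoc cascaded gain and noise-figure (Friis) budget that such spreadsheets or scripts typically implement; the block names and numbers are purely illustrative and not taken from any particular design:

from math import log10

def db_to_lin(db):
    return 10 ** (db / 10.0)

def lin_to_db(lin):
    return 10 * log10(lin)

# (block name, gain in dB, noise figure in dB) -- hypothetical receive line-up
chain = [("LNA", 15.0, 1.5), ("mixer", -7.0, 9.0), ("IF amp", 20.0, 4.0)]

g_total = 1.0   # cumulative linear gain ahead of the current block
f_total = 1.0   # cumulative linear noise factor (Friis formula)
for i, (name, gain_db, nf_db) in enumerate(chain):
    f = db_to_lin(nf_db)
    f_total = f if i == 0 else f_total + (f - 1.0) / g_total
    g_total *= db_to_lin(gain_db)

print("cascade gain = %.1f dB, cascade NF = %.2f dB"
      % (lin_to_db(g_total), lin_to_db(f_total)))

Every architecture change, corner case or yield question means re-running and re-checking a calculation like this by hand, which is exactly the manual, iteration-heavy loop described above.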

An emerging standard for simulation- and measurement-based nonlinear models [2] offers a glimmer of hope for helping in the design of RF front-end building blocks, and it seems to be gaining both momentum and support [3] [4]. Although theory supports extending the use of these models for transceiver functional path modeling, the implementation and operational aspects required for a successful solution are still being worked out. One thing is certain: until RFIC verification follows the path of digital and builds verification teams to write behavioral models, simulation- and measurement-based approaches seem to be more tractable for the RF designer to accommodate.

  

 

 

 

  


Figure 2. A simplified wireless RF subsystem.

Packaging/Module Development

Until somewhat recently, RFIC designers have always found ways to either include or avoid, “by design,” the respective chip/package/module/board interactions that could render their product (assembled from well-designed individual pieces) unusable. What has changed is the market demand for highly functional, small form factor, wirelessly connected consumer devices. Not only is the demand high, but since consumers are willing to pay a premium for these “smart,” connected devices, manufacturers are now more incentivized to produce them [5]. The old engineering paradigm of separation of variables and divide-and-conquer just doesn’t cut it anymore.

Remember RF “keep out” regions under critical areas of potential IC-to-package/board coupling? The evolution of flip chip technology, die stacking and wafer-level packaging, combined with small form factor area requirements [6], has driven deeper analysis of how to accommodate these technologies. The lack of a unified 3D, high-capacity EDA solution that crosses the chip/package/module/board boundaries is problematic. I have heard anecdotally that at the RF subsystem level, dozens of placement-centric circuit iterations are required to build prototypes, let alone a working product. While there have been a few good initial attempts at addressing various parts of the problem [7], one big issue still remains: a unified design database between RF SOC and package/module/board design environments.

As the industry moves forward with the adoption of Open Access [8] (OA) as a standard electronic design database, this particular issue may be solved. This represents forward movement on the custom IC, microwave and MMIC fronts [9]. However, until mainstream packaging/board vendors deliver OA-based products, and current interoperability efforts [10] [11] extend to include their requisite data structures, and package houses deliver IC-like process design kits (PDKs), the design community will continue to create their own RF-centric models to analyze the effects of bond wires, bumps, thru-wafer vias, off-chip components, and high frequency/data rate interconnects the best way they can. The expense of multiple iterations continues to plague the industry.

IC Manufacturing

Probably the most critical input for RFIC design and verification is the process design kit (PDK) delivered by captive fabs and commercial foundries. These kits continue to be the bedrock on which all RFIC design and verification rests. Many, if not most, mainstream fabless RFIC chip design companies augment these kits to create a more RF-accurate and design-specific representation of the manufacturing data from their own experience and measured data. While there has been a lot of improvement in the support of RF-centric models [12] [13], RF-specific process technology and RF-oriented foundry EDA programs [14] [15], when it comes to RF verification and yield, the support of statistical Spice models has not been enough to convince RFIC designers to embrace them.

There are various reasons for this response from RFIC designers, including: model/process alignment, the cost of licensing to support comprehensive verification, time constraints on product schedules, and the value of the information derived. Let’s take a closer look at model/process alignment.

After much development time, expense and pre-production silicon with key customers, a given process technology is deemed ready to release for production only when process and electrical parametric variations meet the release criteria. This is usually some mean value and number of standard deviations of the distribution for each of the parameters the foundry tracks. These distributions are the foundry’s guarantee that the process is under control and are also used for determining whether to re-process wafers at certain steps (e.g., chemical mechanical polishing or photo) or to reject wafers at final electrical probe. It’s important to note that in most cases, the statistical Spice models are aligned with this release.

The simplest process indices used to describe how the measured data aligns with the target specification are Cp and Cpk [16] [17]. In simple terms, Cp compares the spread of the measured distribution to the spread between the upper and lower specification limits (Figure 3). Cpk measures how well centered the measured data is relative to those limits.

Cp_overview2 
Figure 3. A graphical depiction of the process capability index Cp.
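To make the Cp/Cpk definitions concrete, here is a minimal numerical sketch (my own illustration, not taken from the referenced sources); the parameter name, specification limits and sample data are hypothetical.

import numpy as np

def process_capability(measurements, lsl, usl):
    """Return (Cp, Cpk) for a set of measurements against lower/upper spec limits."""
    mu = np.mean(measurements)
    sigma = np.std(measurements, ddof=1)
    cp = (usl - lsl) / (6.0 * sigma)                  # spec spread vs. measured spread
    cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)     # penalizes an off-center mean
    return cp, cpk

# Hypothetical parametric test: threshold voltage with a 0.45 V target and +/-50 mV limits
vt = np.random.normal(loc=0.46, scale=0.012, size=500)   # slightly off-center lot data
print(process_capability(vt, lsl=0.40, usl=0.50))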

The more measured standard deviations that fit between the specification limits, the tighter the process control. It is therefore in the foundry’s interest to reduce the variability in the process. Doing so creates both margin on wafer yield and the opportunity to tune the mean values of these parameters within the guaranteed specification limits to address product-dependent yield sensitivities. In general, foundries do not update the statistical models to reflect this. Consequently, just like skew or corner models, actual RFICs may never see the process that these models represent.

There are various tactics for dealing with this issue, but design managers generally see very little return on investment in this level of verification, given the model alignment issue and the expense in terms of time and licenses. Even if statistical Spice models were continuously re-aligned and the cost issues could be overcome, one question would still remain: what do you do with the information you obtain? The analysis may tell you, for example, that a key RFIC product specification yields poorly, but it does not tell you the root cause.

Test

There are many important test-related inputs to the RFIC design and verification flow. One is modeling, and we’ve already touched on statistical transistor-level modeling in the previous section. Now let’s focus on two other areas: behavioral modeling and functional-level testing.

As previously mentioned, nonlinear distortion models [2] are gaining momentum even at the system level [3].  Measurement can play a critical role in situations where a compact model representation does not exist or does not accurately capture real nonlinear effects for an RF component. As a result, test solutions have come to market to address this capability [4]. While transistor-level simulation across technology boundaries is possible (transceiver and RF front-end, for example), a transistor-level approach with inaccurate results is not very useful. The ability to use measurement-based nonlinear distortion models in either system or circuit-level simulations can be very beneficial under these conditions.

A critical stage of the product development cycle is product testing, especially on first silicon for a complex RFIC design. It’s usually the first chance you have to see the effect of all the verification runs you didn’t have time for, the post-layout extraction re-simulation issues that never got resolved, or the chip-to-chip, package and board effects that couldn’t be analyzed for lack of a comprehensive EDA flow. It is the place where everything comes together and usually where debug begins. Normally, there is little alignment between the test benches used for product design and those used for test. Due to the highly integrated nature of complex RFICs, block-level testing is seldom useful. Instead, system-level RF functional path performance and programmability are what is tested on the actual silicon. One solution is to perform functional path simulations with the same system-level test benches used for testing the hardware. While this pushes the capacity of both RF simulators and available compute resources, self-consistent solutions do exist [18].

RFIC Design and Verification

After reviewing the various inputs, capabilities and limitations from the rest of the RFIC design ecosystem, we can now focus on what RF simulation environments need to support for the design and verification of complex RFICs. For transistor-level functional path simulation to align with system test, the simulator needs to support the same complex modulated sources used for the various standards of interest. While multi-tone analysis has been the standard approach for many years, the evolution and complexity of wireless standards is now driving the use of real-world complex modulated signals, especially to see the effects of cross modulation and interference between the various wireless emitters.

4G LTE, for example, requires backward compatibility with W-CDMA as they share most of the same frequency band allocations. The typical approach for handling these types of input signals is some form of envelope transient simulation. While these approaches can handle the complex signaling and the circuit’s nonlinear RF behavior [19], simulation times can still be prohibitive for more complex analysis. 
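To illustrate why simple multi-tone stimuli are no longer sufficient, here is a deliberately simplified, memoryless sketch of a complex modulated envelope driving a nonlinearity. It is my own illustration, not the envelope transient or “Fast Envelope” algorithms; the waveform, nonlinearity coefficient and the acpr_db() helper are all assumptions.

import numpy as np

fs, osr, nsym = 20e6, 8, 4096                      # sample rate, oversampling, symbol count
bits = np.random.randint(0, 4, nsym)
syms = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))                          # QPSK symbols
env = np.convolve(np.repeat(syms, osr), np.ones(osr) / osr, mode="same")    # crude pulse shaping

def weak_nonlinearity(x, c3=0.15):
    """Memoryless odd-order compression: y = x - c3 * x * |x|^2 (assumed coefficient)."""
    return x - c3 * x * np.abs(x) ** 2

def acpr_db(sig, fs, chan_bw):
    """Rough adjacent-channel power ratio of a baseband signal, in dB."""
    spec = np.abs(np.fft.fftshift(np.fft.fft(sig))) ** 2
    f = np.fft.fftshift(np.fft.fftfreq(len(sig), 1.0 / fs))
    inband = spec[np.abs(f) < chan_bw / 2].sum()
    adjacent = spec[(np.abs(f) > chan_bw / 2) & (np.abs(f) < 1.5 * chan_bw)].sum()
    return 10 * np.log10(adjacent / inband)

out = weak_nonlinearity(env)
print("ACPR in :", round(acpr_db(env, fs, 2.5e6), 1), "dB")
print("ACPR out:", round(acpr_db(out, fs, 2.5e6), 1), "dB")   # regrowth from the odd-order term

A two-tone test would show only discrete intermodulation products; the modulated envelope exposes the adjacent-channel regrowth that actually matters for coexisting standards.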

Advanced “Fast Envelope” methods [20] have been developed that yield speedups of several orders of magnitude with little loss of accuracy. These models are created once at run-time for a particular circuit configuration. This lets the designer push the envelope with respect to simulated circuit size vs. simulation speed, and enables maximum reuse for additional analyses, including yield. Figure 4 and Table 1 give the results of “Fast Envelope” transient simulation for an IS95 up-converter from the previously referenced paper. These advances make verification and yield analysis with real-world signaling possible for complex RF circuits and systems.


 Is95upconverter


Figure 4. An IS95 transmitter output spectrum for various fast envelope techniques.

Is95upconverter-speedup 

Table 1.  Fast envelope simulation results for an IS95 transmitter.

While we are on the topic of yield analysis, let’s try to make some sense out of how to deal with statistical models and investigate a more comprehensive methodology for both block and functional path statistical analysis. The primary techniques included in RF simulators that target advanced (Bi)CMOS technologies cover simulating the effects of varying process, voltage and temperature (PVT) and mismatch. PVT analysis takes two forms: worst-case corner analysis or full statistical sampling, also known as Monte-Carlo process or mismatch analysis. Keep in mind that process models are global variation models, usually representing large variations in process, while mismatch models represent local variations.

It is technically feasible for foundries to align worst-case corners with the specification limits of the statistical models, and with either sigma- or design-of-experiments-based sampling, the line between these techniques has blurred. What hasn’t changed is the fact that no matter which technique you choose, neither will yield any statistical insight into the sources of variation or how the various circuits interact with respect to their impact on performance. Others recognize these issues [21], but they take a very data-, simulation- and modeling-intensive approach to the problem. While such approaches certainly merit further investigation, they do not provide a practical everyday solution for RF designers. Instead, what’s required is a fast and accurate statistics-based variational analysis that can also give the insight that traditional Monte-Carlo process, mismatch and corner analysis cannot. To ensure alignment with regular Monte-Carlo analysis, it should also use the same statistical Spice models. This would fill the gap between traditional circuit- and block-level design and full-fledged yield analysis.

 

An example of the insight that such an analysis [22] could provide is shown in Figures 5 and 6. Here, the Harmonic Balance-based output voltage of a receiver down-converter design, including the low-noise amplifier (LNA), mixer and bias circuit, was analyzed using a fast yield contributor and mismatch analysis technique. The important metric is not the amount of variation a block contributes at the output, nor its correlation with the output, but the product of that standard deviation and correlation. In effect, this is the impact each block has on the overall output variation for the measure of interest. The longer the bar, the greater the impact.
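A rough sketch of how such an impact metric could be computed from Monte-Carlo samples is shown below. This is my own construction, not the tool’s algorithm; the block names and sample data are hypothetical.

import numpy as np

def block_impacts(block_samples, output_samples):
    """block_samples: dict mapping block name to Monte-Carlo samples of that block's
    contribution to the output measure; output_samples: total output measure."""
    impacts = {}
    for name, x in block_samples.items():
        corr = np.corrcoef(x, output_samples)[0, 1]
        impacts[name] = np.std(x, ddof=1) * corr      # signed: a block can pull the output either way
    return impacts

# Hypothetical down-converter: LNA and bias add variation, the mixer partly cancels it
rng = np.random.default_rng(0)
lna = rng.normal(0.0, 0.030, 2000)
mixer = -0.5 * lna + rng.normal(0.0, 0.010, 2000)     # negatively correlated with the LNA
bias = rng.normal(0.0, 0.015, 2000)
vout = 0.80 + lna + mixer + bias
for name, val in block_impacts({"LNA": lna, "mixer": mixer, "bias": bias}, vout).items():
    print(f"{name:6s} impact = {val:+.4f} V")         # larger |impact| = longer bar in the chart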

  

  

One important item to note is that different blocks can drive the variation in different directions. You can see in Figure 5, for example, that the mixer block’s negative correlation shows up in the results. This is an important insight to understand when taking the next step and trying to reduce the effective output variation (when necessary).

Blocklevelimpactfullgraphic

 

Figure 5. The block-level impact of process variations with fast yield contributor analysis.

In addition to block-level insight, this technique can also provide the same kind of insight all the way down to the device level as shown in Figure 6.  Since the device model parameter link to process is also available, the future may bring a solution for circuit-dependent corner generation.

  

Devicelevelimpactfullgraphic

 

Figure 6. The mixer device-level impact of process variation with fast yield contributor analysis.

This technique is an important tool for understanding the sources of variation derived from the standard statistical Spice models for global variation effects. It does not, however, completely replace corner or Monte-Carlo process analysis, as those represent large global variational effects. For mismatch analysis, where we are dealing with local variations only, this technique can be a direct replacement, offering orders-of-magnitude improvement in speed. Since most designers perform mismatch and corner analysis together, this can reduce the overall analysis time dramatically. And, because this fast statistical variation and mismatch technique is applicable at the circuit, block and functional path level, it gives RF and mixed-signal designers a useful tool that can be used at any stage of the design.

Summary

This article has described the various sources and inputs needed for RFIC verification and design-for-yield. It also described the current status of the various parts of the RFIC design ecosystem, as well as their strengths and weaknesses. Finally, it gave an overview of a few of the key analyses needed at the RFIC design level to overcome the challenges of verifying highly integrated, high-performance, low-power, low-cost, high-volume RFICs. This is a very complex topic, and while this article only scratches the surface, I believe it provides a good overview of where we are, the choices RF designers have for verification and yield methodology, and the areas that need improvement.

References

[1]  SystemVue RF System Design Kit.

[2] D. E. Root, “X-parameters: Commercial implementations of the latest technology enable mainstream applications,” Microwave Journal, September 10, 2009, http://www.mwjournal.com/resources/ExpertAdvice.asp?page=0&HH_ID=RES_200.

[3] Using Analog/RF X-Parameter Models in System-Level Design, http://www.home.agilent.com/agilent/product.jspx?cc=US&lc=eng&ckey=1455073&nid=-34261.804610.00&id=1455073.

[4] http://www.agilent.com/find/nvna.

[5] http://www.htc.com/us/products/evo-sprint#overview and http://www.apple.com/iphone/.

[6] TechInsights, “Apple iPhone 4 Teardown,” http://www.ubmtechinsights.com/reports-and-subscriptions/outlook-and-analysis/apple-iphone-4/teardown/.

[7] http://www.cadence.com/products/pkg/Pages/default.aspx.

[8] Si2 Open Access Coalition, http://www.si2.org/?page=69.

[9] http://www.si2.org/?page=86.

[10] Interoperable PDK Libraries, http://www.iplnow.com/index.php.

[11] Si2 Open PDK Coalition, http://www.si2.org/?page=1118.

[12] http://www-device.eecs.berkeley.edu/~bsim3/BSIM4/BSIM400/slide/slide.pdf.

[13] A. J. Scholten, G. D. J. Smit, B. A. De Vries, L. F. Tiemeijer, J. A. Croon, D. B. M. Klaassen, R. van Langevelde, X. Li, W. Wu and G. Gildenblat, “The new CMC standard compact MOS model PSP: advantages for RF applications,” IEEE Journal of Solid-State Circuits, Vol. 44, No. 5, May 2009, pp. 1415-1424.

[14] “TSMC Unveils New 65-Nanometer Mixed-Signal and RF Tool Qualification Program,” http://www.tsmc.com.tw/tsmcdotcom/PRListingNewsAction.do?action=detail&newsid=2426&newsdate=2007/12/13.

[15] “Agilent Technologies Announces United Microelectronics Corporation Certification of GoldenGate Software,” http://www.agilent.com/about/newsroom/presrel/2010/04jun-em10082.html.

[16] WikipediA, Process Capability Index, http://en.wikipedia.org/wiki/Process_capability_index.

[17] Process Capability Indices – Visual Animation, http://elsmar.com/Cp_vs_Cpk.html.

[18] http://www.agilent.com/find/eesof-deslibs and http://www.agilent.com/find/eesof-goldengate.

[19] E. Ngoya, R. Larcheveque, “Envelop transient analysis: a new method for the transient and steady state analysis of microwave communication circuits and systems,” IEEE MTT-S International Microwave Symposium, pp. 1365-1368, June 1996.

[20] A.Soury, E. Ngoya, “Using Sub-Systems Behavioral Modeling to Speed-up RFIC Design Optimizations and Verification,” Integrated Nonlinear Microwave and Millimetre-Wave Circuits, 2008. INMMIC 2008.

[21] Amit Gupta, “Variation in custom ICs: It’s not just a foundry issue,” Chip Design Magazine, http://chipdesignmag.com/display.php?articleId=4172.

[22] David Vye, “Accelerating Advanced Node CMOS RFIC Design,” Microwave Journal, December 7 2009, http://www.mwjournal.com/search/article.asp?HH_ID=AR_8441.

[23] David Vye, “How Design Software Changed the World,” Microwave Journal, July 2009, http://mwjournal.com/search/article.asp?HH_ID=AR_7817.

 

 


 

Feel free to post questions and comments here.

Microwave Oscillator Design using the Open-loop Cascade Method

October 2010

Rrhea 
 


Randall Rhea graduated from the University of Illinois in 1969 and Arizona State University in 1973 and worked at the Boeing Co., Goodyear Aerospace and Scientific-Atlanta. He founded Eagleware Corp., which was acquired by Agilent Technologies in 2005, and Noble Publishing, which was acquired by SciTech Publishing in 2006. He has authored numerous papers, the books Oscillator Design and Computer Simulation and HF Filter Design and Computer Simulation and has taught seminars on oscillator and filter design to over 1000 engineers. His hobbies include antiques, astronomy and amateur radio (N4HI). In 2004 he toured 48 states by motorcycle. He and his wife Marilynn have two adult children and reside near Thomasville, GA.

 

To comment or ask Randall Rhea a question, use the comment link at the bottom of the entry.

 

In the Agilent/MWJ Innovations in EDA webcast on “Discrete Oscillator Design Tools and Techniques,” Randy Rhea presented a discrete oscillator design using the Genesys product from Agilent, applying techniques outlined in his new book, “Discrete Oscillator Design: Linear, Nonlinear, Transient and Noise Domains.” The method outlined by Rhea begins with a linear analysis.

 

It’s tempting to begin with a nonlinear closed-loop simulation with oscillation, but in doing so you are no more informed than with a circuit oscillating on the bench. What’s the gain margin? Does the maximum loaded Q occur at the oscillation frequency? What if it doesn’t oscillate at all?

 

Linear analysis provides that foundation. It identifies the design margins, it’s a fast and simple way to study the tuning characteristics, and it can be used for initial estimates of phase noise. It is also quick for exploring new topologies and basic design ideas. More importantly, a linear analysis provides valuable insight into the design process; though it is a simplified view, it provides an intuitive understanding of how oscillators work.

Unfortunately, it only provides a qualitative grasp of the operating power levels, harmonic output or transient behavior. That information can be obtained with subsequent harmonic balance analyses.

 

Two methods are used for linear analysis. A popular method for microwave oscillators, particularly for VCO designs, is the one-port reflection method. Most authors refer to this technique as “negative resistance analysis.” Actually, there are both negative resistance and negative conductance oscillators, and the difference is not merely a matter of semantics. The negative resistance oscillator must use a series resonator and the negative conductance oscillator must use a parallel resonator. The initial analysis of the negative resistance oscillator is performed by looking into the device through a series resonator; to form the oscillator, the test port is removed and that port is grounded. The initial analysis of the negative conductance oscillator is performed by looking into the device across the top of a parallel resonator; to form the oscillator, the test port is removed and that port is left open. In some circuits, the oscillator does not include a node where the loop can be opened. In that case, the negative resistance or conductance analysis method must be used.

 

 Rrheafig1

The other approach, the open-loop method, has been preferred for years at lower frequencies, including for crystal oscillators. I prefer this method, even for microwave oscillators. It provides more insight into numerous oscillator behaviors. It also allows estimation of the loaded Q, a very important measure of oscillator performance. The open-loop cascade technique also avoids some of the confusion that comes with the one-port reflection or negative resistance/conductance techniques. For this reason, I use the open-loop cascade method to consider a 40 MHz Colpitts oscillator with a FET and an L-C resonator. This is typical of common-drain (or common-collector) oscillators, where the source (or emitter) is connected to the capacitive tap. For the open-loop method, the circuit is opened between the source and the capacitive tap. The open-loop input port is at the capacitive tap (loopout) and the cascade output port is at the source (loopin). The capacitor c3, shown in the circuit schematic below, couples output power from the oscillator into a 50 ohm load. Capacitor c4 prevents the simulator’s termination resistance from disturbing the oscillator bias. With a common-drain amplifier, the output impedance at the source is generally low and the input impedance at the gate is high. The capacitive tap transforms the low impedance of the source up to the high impedance at the gate. In this example, I use an initial loop termination resistance of 50 ohms. One can also “open” the loop at the gate; the impedance at this node is higher, so a higher port resistance would need to be chosen.

 Rrheafig2

So what are the oscillator starting conditions? Oscillation occurs at the phase zero-crossing if the initial linear gain margin is greater than 0 dB. Oscillation does not occur at the gain peak; it occurs at the phase zero-crossing, and the phase slope should be negative at that crossing. The phase characteristics are more important than the amplitude characteristics. When a change occurs in the transmission phase due to temperature, load or another event, the oscillation frequency will shift up or down. With a shallow slope, the frequency shift is large; with a steep slope, the shift is smaller. Therefore, the phase slope should be as steep as possible. To take advantage of the phase slope, it is desirable that the phase zero-crossing occur at the point of maximum phase slope.

 Rrheafig3

Too low a gain margin increases the risk that gain changes might prevent oscillation and also results in slow starting. Too high a gain margin leads to heavy compression, a worsening of the phase noise and potentially oscillator instabilities. 3 to 8 dB is a reasonable gain margin for most oscillators. The amplifier should be stable; it’s the positive feedback from closing the loop that causes oscillation. Amplifier instability can result in spurious modes. To utilize the available device gain, the maximum response gain should occur at or near the phase zero-crossing. This is the least critical characteristic and tends to occur naturally when the other objectives are satisfied.

 

The predicted response of the open cascade assumes the cascade is terminated by the simulator; later, when the loop is closed, the terminations are provided by the circuit itself. For the analysis to be accurate, the loop matching should be good. The plots below show the transmission and reflection responses of the 40 MHz Colpitts circuit using the open-loop cascade. The transmission plot on the left shows the gain (in red), which peaks at about 3 dB, the low end of our target. Also shown on this plot are the transmission phase (in blue), the angle of s21, and the loaded Q (in green). Clearly, the phase crosses zero at the desired 40 MHz. Capacitor c1 in the oscillator was previously adjusted to ensure that the phase zero-crossing would occur at the target frequency of 40 MHz. The loaded Q is computed by the simulator from the transmission phase slope; the higher the loaded Q, the steeper the phase slope. In this case the loaded Q is about 10.
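For readers who want to reproduce these checks on data exported from a simulator or network analyzer, the following sketch (my own, not the Genesys algorithm) finds the phase zero-crossing, reads the gain margin there and estimates the loaded Q from the phase slope, assuming the relation Q_L = -(f0/2)·dφ/df with φ in radians. The data format is an assumption.

import numpy as np

def open_loop_check(freq_hz, s21):
    """freq_hz: ascending frequency points; s21: complex open-loop transmission."""
    phase = np.unwrap(np.angle(s21))                   # radians
    gain_db = 20 * np.log10(np.abs(s21))
    crossings = np.where(np.diff(np.sign(phase)) != 0)[0]
    if len(crossings) == 0:
        return None                                    # no phase zero-crossing: it will not oscillate
    i = crossings[0]
    # interpolate the crossing frequency (assumes the phase falls through zero)
    f0 = np.interp(0.0, [phase[i + 1], phase[i]], [freq_hz[i + 1], freq_hz[i]])
    gain_margin_db = np.interp(f0, freq_hz, gain_db)
    dphi_df = np.gradient(phase, freq_hz)
    q_loaded = -0.5 * f0 * np.interp(f0, freq_hz, dphi_df)
    return f0, gain_margin_db, q_loaded

For the cascade described above, this kind of post-processing should report a crossing near 40 MHz, a gain margin of about 3 dB and a loaded Q of roughly 10.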

 

The reflection data is plotted as s11 and s22 on the Smith chart. The s22 is reasonable at around 12.7 dB return loss; however, s11, the input match at 40 MHz, is only about -0.34 dB. The mismatch is so poor that the simulated transmission characteristics plotted on the left are more or less useless. Historically, this problem was managed by redesigning the cascade for a better open-loop input and output match. Matching networks are not added, because this increases circuit complexity and potentially adds additional resonances. Rather, the matches are improved by adjusting the capacitors in the tap. Changing the simulator termination impedance to some other value can also improve the simulation match, but maintaining the termination at 50 ohms makes it easier to confirm the prototype using a standard network analyzer.

 

To improve the open-loop response, the simulator’s (Genesys) optimizer was used to modify component values so that the overall circuit goals were achieved. These goals included improving both the input (s11) and output (s22) match to better than -12 dB, a loaded Q of greater than 30, a gain (s21) equal to 6 dB and a phase zero crossing at 40 MHz. Each goal was given equal weighting. The results of an optimization, which took about 20 seconds, are shown below. Optimized component values were replaced with the nearest standard component values and the circuit was re-tuned manually to shift the frequency back to 40 MHz.

Rrheafig4 

This is a quick look at using the open-loop response method with the Genesys circuit simulator from Agilent EEsof EDA. A lengthier description of this design, including nonlinear analysis, was presented in a special webcast hosted by Agilent Technologies and Microwave Journal on September 16th, and is available for viewing here. More information on oscillator design is available in my book, Discrete Oscillator Design: Linear, Nonlinear, Transient and Noise Domains, available from Artech House.

This is a list of questions posed after my Agilent webinar presentation, Oscillator Design with Genesys. I must say, responding to these questions brought back many memories for me, and it was a lot of fun. (Randy Rhea, 16 Sept 2010)

QUESTIONS REGARDING THE OPEN-LOOP MATCH

1) S11 and s22 would not be always low at any impedance......
2) Do the termination resistances need to be at least conjugates of each other in addition to being 50 Ohms?
3) Low s11 and s22: is 50 ohm needed? what impedance level? is it relative , i.e. both should be at same (or conjugate) impedance?
4) I did not catch completely your remark about the conditions (for s11, s22, ...?), which must be fulfilled that the open-loop analysis will yield realistic values. We tried several times to make an open-loop analysis of a Colpitts crystal oscillator by opening the loop between emitter and the capacitive divider, but never got zero phase or other reasonable results. My point is, that opening the loop - is loading the emitter output with a different impedance than it is with the closed loop and - is feeding the "Input" (capacitive divider) with a different source impedance than it is the case with the closed loop. Can you please explain again.
5) I agree that targeting 50 ohms for the port impedances makes it convenient to make measurements using a network analyzer. However, it seems to me that that will not necessarily result in the correct prediction of tank Q -- if the tank shunt resistance is much larger than 50 ohms, for example, connecting a 50 ohm measurement device (or simulation port) across that tank will distort the value of the Q. I haven't had a chance to absorb the implications of the Randall-Hock equation, so it's not clear whether that corrects for the error due the artificial loading introduced by the open loop measurement system.

ANSWERS: I will answer these five match questions as a group. The issue of match in the open loop cascade has always generated a lot of questions, and skepticism, in my oscillator classes. I’ll try to explain it in steps.

Point 1: While the idea of an amplifier matched to 50 ohms is easy to accept, it may be difficult to accept that a resonator consisting of only reactors can have a resistive input impedance. Consider a simple series resonator cascaded with a 50 ohm amplifier. At resonance, the reactance of the series inductor and series capacitor cancel, and the impedance seen looking into the resonator is the 50 ohms of the amplifier. The cascade input is matched to 50 ohms! So to obtain a matched open loop it is only necessary to have a matched amplifier.
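A quick numeric check of this point, with arbitrary component values chosen to resonate near 40 MHz:

import numpy as np

L, C, r_amp = 250e-9, 63.3e-12, 50.0                  # arbitrary series resonator + 50 ohm amplifier
f0 = 1.0 / (2 * np.pi * np.sqrt(L * C))               # resonates near 40 MHz for these values
w = 2 * np.pi * f0
z_in = 1j * w * L + 1.0 / (1j * w * C) + r_amp        # looking through the resonator into the amplifier
print(f"f0 = {f0 / 1e6:.2f} MHz, Zin = {z_in:.3f} ohms")   # prints approximately 50+0j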

Point 2: Consider slide 10 of the presentation. This is a simple, but typical, FET Colpitts oscillator. Even though a FET’s input impedance is high, notice that after optimization the cascade input return loss is 12.7 dB. The Colpitts resonator matched 50 ohms to the gate. In this case, the source impedance is naturally well matched to 50 ohms with a 17.8 dB return loss.

Point 3: Why 50 ohms? First of all, don’t become hung up on this idea. From the simulator’s standpoint, you can choose any equal and resistive termination impedance: 1 ohm, 50 ohms, 1 Mohm. You simply choose the resistance that best matches the source and then use the capacitive tap to match the gate to that resistance. It is often 100 to 200 ohms for a low-frequency FET; as you go up in frequency, it tends toward lower impedance. The same process is used regardless of the device or type of resonator, i.e., whether it is a Pierce, Colpitts, Clapp or whatever. You’ll be surprised how often the selected impedance can be close to 50 ohms, and that is convenient for confirmation on the bench. 50 ohms is merely convenient, not required.

Point 4: Don’t worry about matching to 20 dB return loss. It is wasted effort. In an oscillator, there is no advantage to knowing the gain margin to fractions of a dB. The simulation will be reasonably accurate with 12 dB return losses. For the simple, uncorrected S-parameters to be an accurate simulation, with a reasonably small S12, only EITHER S11 or S22 needs to be small. To understand this statement, test it against the Randall/Hock equation.

Point 5: If you are having trouble getting a reasonable match, at any chosen reference impedance, be sure to assess stability. CB, CC, CG and CD amplifiers, typically used with Colpitts oscillators, are notoriously unstable. That is why these configurations are used in negative resistance and negative conductance oscillators. A sure sign of instability is either or both of the loop port impedances plotting outside the circumference of the Smith chart somewhere in the frequency range. If this is the case, before beginning the analysis, stabilize the device with resistance in series with the base or the emitter. Be sure to assess how this degrades the noise figure. This resistance not only makes the simulation go better, it reduces potential spurious modes in the oscillator.

Point 6: This point addresses the questions of the simulator loading the resonator. Consider again slide 10 in the presentation. After optimization, both ports are well matched. That means the source is near 50 ohms. The simulator termination was set to 50 ohms. Therefore, when the loop is closed, the source will terminate the tap with the same impedance that the simulator did!

Final Point: If the cascade does not optimize to a reasonable match, or if the gain margin is small and you need a more accurate simulation, or if you are just skeptical, then use the Randall/Hock correction. Their G is the true open-loop gain with the cascade self-terminating. G is exact. You may measure the initial loop S-parameters with ANY reference impedance, and you may even open the loop at ANY node, and the result for G is identical, including the phase slope of G, which defines the loaded Q. Computing and displaying G in Genesys is a snap.
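For reference, a small sketch of the correction as I recall it from the literature is shown below; please verify the expression against Randall and Hock’s paper before relying on it.

def randall_hock_gain(s11, s21, s12, s22):
    """True open-loop gain G of the self-terminated cascade (complex, per frequency point).
    Expression as I recall it: G = (S21 - S12) / (1 - S11*S22 + S12*S21 - 2*S12)."""
    return (s21 - s12) / (1.0 - s11 * s22 + s12 * s21 - 2.0 * s12)

# The gain margin and loaded Q then come from |G| and the phase slope of G,
# exactly as with the uncorrected s21 in the earlier open_loop_check sketch.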

QUESTION: To optimize at a given frequency it is possible to put the same value for maximum and minimum. I used it in ADS and maybe this could be useful for Genesys too?

ANSWER: The attendee is remarking on the fact that the frequency range set in the optimization goals displayed in slide 9 was 39.9 to 40.1 MHz rather than 40 MHz. This is simply an old habit of mine, because digital computers use floating-point math when computing frequency points in a sweep. If the step point for 40 MHz computed as 39.99999999999999, there would be no data point at 40 MHz. In 15 years, I haven’t tested this old habit to see if it’s still necessary.

QUESTION: How is the loaded Q simulated? Also, can you elaborate on the sharp phase slope required? Isn’t the idea to keep the phase as close to 0 deg as possible across freq, considering temp changes and component tolerances?

ANSWER: The loaded Q computed by Genesys is derived from the slope of the forward transmission phase. Loaded Q is also defined as the center frequency divided by the 3 dB down amplitude bandwidth of a single resonator. However, with oscillators, we are primarily concerned with the phase. Also, the phase definition is somewhat more consistent with multiple resonators.

Well yes, the objective is to keep the phase zero crossing at the desired frequency under all conditions. In fact, if we were perfectly successful in doing so, there would be absolutely no long term (drift) or short term (noise) frequency deviation.

In practice, the absolute value of the transmission phase can change with temperature, noise, supply and load changes. Examine the response of the transmission phase. Now imagine the entire curve shifts up or down in phase. If the slope is shallow, a large shift occurs in the zero-crossing frequency. Now imagine the slope is infinitely steep. In that case, the frequency shift resulting from a phase shift is zero! Wouldn’t that be nice? That is why a steep phase slope (which is high loaded Q) is so important.
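A small numeric illustration of this point (my own, using the relationship Δf ≈ f0·Δφ/(2·Q_L), which follows from defining the loaded Q from the phase slope):

import numpy as np

f0 = 40e6
delta_phi = np.deg2rad(1.0)                    # the whole phase curve shifts by one degree
for q_loaded in (10, 30, 100):
    delta_f = f0 * delta_phi / (2 * q_loaded)
    print(f"Q_L = {q_loaded:3d}: zero crossing moves by about {delta_f / 1e3:6.1f} kHz")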

QUESTION: kayan? a time domain simulator used within genesis?

ANSWER: Yes. In English it is spelled Cayenne. It is a hot spice named after the city in French Guiana. This is obviously a name play on the popular simulator family SPICE, originally from UC Berkeley. Cayenne is not a derivative of SPICE, but like SPICE, it is a time-step simulator. Cayenne uses the same nonlinear models that both the Genesys linear and HARBEC harmonic balance simulators use. Genesys can import SPICE models.

QUESTION: What input do you use to simulate the time-to-steady-state behavior?

ANSWER: That is a great question because it points out a strength of time-step simulation. With a real oscillator, when not powered, all voltages rest at zero. When power is applied, the rising supply voltage begins to charge the bias and resonator circuitry. A time-step simulator emulates this process with small time-step changes propagating through the circuit. If you draw your schematic and apply a power supply turn-on step, Cayenne accurately simulates the actual starting process. No “artificial” starting techniques are needed. You also have the option in Cayenne to begin at time zero at the quiescent, steady state bias conditions. You do not use this option when simulating starting.

QUESTIONS:
1) Up to what frequencies will Genesys simulate to?
2) During simulation, how accurate is just laying the copper (coplanar WG) on the schematic compare vs running the EM simulation?

ANSWER: The short answer is up to any frequency where the components are described with electrical models. In practice, the answer to this question is very complex. The difficulty is with the models. Genesys includes all well published models for lumped and distributed passive components, and these same models are used in all commercial simulators on the market. The answer also depends on the details. For example, the basic model for an inductor with a 1 cm diameter may fail at 100 MHz, while the same model for a 0.1 mm diameter inductor may work well through 10 GHz. To successfully use any simulator, and Genesys is no different, the engineer must be willing to study components (and a simulator helps here also) before building a circuit. Nevertheless, up to about 500 MHz, except for some filters, standard simulation techniques should suffice.

Above 500 MHz, the successful engineer will spend as much time characterizing components as he does simulating his circuits. Above about 2 GHz (again depending on the specific circuit), electromagnetic simulation (EM) should be added to the toolbox. Another factor influencing whether EM simulation is required is the distributed process used. For example, there are well-engineered circuit-theory models for numerous microstrip objects. On the other hand, the models available for coplanar objects are more limited. There may be no models for less common, special layer configurations. In this case, EM simulation is mandatory.

QUESTION:
1) Does the presenter have recommendations for designing for good phase noise besides have a good resonator Q?
2) Can you describe criteria for selecting active device used in oscillators for optimized phase noise?
3) How important for lowest phase noise is it to provide a good noise match in the open loop cascade? Can the simulator output an estimate of the noise figure for inclusion in the Leeson equation?

ANSWER: State of the art in oscillator design often involves phase noise. Scores of manufacturers of crystal and microwave oscillators have years of in-the-trenches experience in achieving the ultimate in phase noise. It is hard to describe how minute a perturbation can cause noise at –170 dB or more below the carrier. In my first year as an engineer, I battled phase noise in a magnetron for days until I discovered it was the fan-bearing noise of a spectrum analyzer on the bench vibrating the magnetron. Ten years later, I battled a phase noise issue until I discovered that a fluorescent light was modulating an IC through its black plastic package. I have only a few paragraphs, but I’ll describe some key points. The book devotes Chapter 4 to the topic, and even then, it certainly isn’t comprehensive.

Yes, the most important parameter is the loaded Q, because phase noise improves with the square of loaded Q. It is important to recognize the difference between loaded and unloaded Q. Unless the loaded Q is very high, improving the component (unloaded Q) will have little effect on performance.

Next in importance is the oscillator power level. Since the noise is a fixed level, increasing the carrier level improves noise in relation to the carrier, in dBc. Higher power resulting in lower noise might seem counterintuitive, and it often generates as much skepticism as the open loop cascade match issue. However, increasing oscillator output power improves phase noise, almost linearly, and there are several measured data examples in the book. In most active devices, increasing the current and power level degrades the noise figure to a degree, but not nearly as rapidly as it improves the phase noise. Unfortunately, increased output power requires increased device current, which is often not acceptable for battery-powered applications. Amplifying the signal after the intrinsic oscillation process is of no benefit because it amplifies both the noise and the carrier. Also, operation above a few milliwatts may crack a quartz crystal, so higher power is not an option in crystal oscillators.

Third in importance is device selection. While improving the noise figure makes common sense, today the difference in the noise figure of a state of the art device and an inexpensive one is 1 or 2 dB. One thing that is important about device selection is flicker noise. Bipolar and FET devices typically have low flicker noise and it doesn’t matter which is chosen. However, GaAs devices have horrible flicker noise below 1 MHz. While this is not an issue in high frequency amplifiers, it is terrible for oscillators, because this low frequency noise becomes upconverted around the carrier.

Yes, to both parts of the final question: it is a good idea to worry about the impedance presented to the amplifier with regard to its effect on noise figure, and the noise figure of an amplifier predicted by Genesys is available for use in Leeson’s equation. However, keep in mind that a 1 or 2 dB improvement in noise figure may be minor in relation to focusing the design on higher loaded Q and power level.
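For those who want to experiment, here is a hedged sketch of one commonly quoted form of Leeson’s equation; conventions on the leading factors vary between texts, so treat the absolute numbers as rough, and the oscillator values below are hypothetical. It does make the two points above explicit: noise falls with the square of loaded Q and directly with carrier power.

import numpy as np

def leeson_dbc_hz(fm, f0, q_loaded, nf_db, p_carrier_dbm, fc=10e3, temp_k=290.0):
    """Single-sideband phase noise estimate at offset fm, in dBc/Hz (one common form)."""
    k = 1.380649e-23                                   # Boltzmann constant
    f_lin = 10 ** (nf_db / 10)
    ps = 1e-3 * 10 ** (p_carrier_dbm / 10)
    ssb = (f_lin * k * temp_k / (2 * ps)) * (1 + (f0 / (2 * q_loaded * fm)) ** 2) * (1 + fc / fm)
    return 10 * np.log10(ssb)

# Hypothetical 40 MHz oscillator at a 10 kHz offset: doubling Q_L buys about 6 dB,
# and every 3 dB of carrier power buys about 3 dB.
for q_l, p_dbm in ((10, 0), (20, 0), (20, 3)):
    print(q_l, p_dbm, f"{leeson_dbc_hz(10e3, 40e6, q_l, nf_db=6, p_carrier_dbm=p_dbm):.1f} dBc/Hz")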

QUESTION: I have a question about wideband design and low noise. I'm interested in converting my existing distributed design to use a coupled resonator. I can see that the Q can be doubled in theory and lumped simulations when coupling two identical resonators together but how when simulating distributed coupled resonators (coupled lines for instance) in Momentum the Q is not improved. How do you go about simulating this with Genesys, ADS etc.?

ANSWER: For a single pole resonator, the loaded Q, group delay and 3 dB amplitude bandwidth are related by simple equations given in the book. Higher loaded Q is achieved with narrower bandwidth, which is achieved by more extreme reactor values, or better, by resonator coupling. For a given unloaded Q, increasing the loaded Q unfortunately increases the resonator insertion loss. This is what limits loaded Q (resonator voltage overdriving a varactor may also limit the loaded Q). Too much loaded Q and the active device can’t overcome the resonator loss.

It is true that, given a maximum acceptable loss, say 10 dB, multiple resonators can achieve a steeper phase slope for a given component (unloaded) Q. Two resonators don’t double the loaded Q for a given loss; it’s more like a factor of 1.4. With three resonators, it’s around 1.5. This is easily explored using a linear simulator. This should improve oscillator phase noise performance. Whether it does, and what the “shape” of the phase noise curve is with respect to offset frequency, needs further investigation. I would be interested to hear what you learn about this.

QUESTION: From phase noise perspective, what do you think of using DC supply voltage regulators?

ANSWER: When the supply voltage changes, the bias and phase shift of the amplifier will change. If the supply voltage change is noise, then noise is modulated into the oscillator. This is easily quantifiable and predictable, and equations are given in the book. Higher loaded Q reduces this problem as well.

Therefore, it is important to have a well-regulated supply to the oscillator. You can test if supply noise is an issue by temporarily replacing the supply with a battery. Chemical batteries have extremely low noise voltage.

The noise properties of IC voltage regulators vary widely. Once, while observing oscillator phase noise during a spectrum analyzer sweep, I noticed that the phase noise erratically stepped up or down by 10 dB, over a period of a few minutes. A can of spray coolant quickly isolated the problem to a small, plastic voltage regulator. Since then, I tend to use simple RC filtering for low power oscillators that don’t draw much current, or discrete voltage regulators in higher power oscillators. If I were to use an IC voltage regulator, I would at least use a high quality regulator with controlled and specified noise performance.

 

 

 

 

 

 

 

Feel free to post questions and comments here.

 


