Majority Signal Voting
Revision as of 11:55, 8 November 2011
| Fundamentals | |
| --- | --- |
| Title | Majority Signal Voting |
| Author(s) | J.A. Ávila Rodríguez, University FAF Munich, Germany |
| Level | Advanced |
| Year of Publication | 2011 |
History of Majority Voting
The majority combining technique dates back to the days when communication engineers relied on increased power levels and redundancy to improve the reliability of a communication link. The basic form of redundancy consisted of transmitting each data symbol an odd number of times, demodulating each symbol individually and deciding in favour of the symbol value that occurred most frequently [R. S. Orr and B. Veytsman, 2002][1]. In fact, this is the simplest implementation of the majority vote multiplex, as we will discuss in the next chapters. It is important to note, however, that while the majority voting we describe in the following chapters is realized at the transmitter, the combination referred to in the previous lines is carried out at the receiver.
The type of redundancy we have mentioned has long been present in most digital circuits. A good example is Triple Modular Redundancy (TMR), a standard design practice in systems with stringent availability and fault-tolerance requirements. Systems with even higher requirements, such as manned space missions, accordingly use a higher level of redundancy.
As shown by [R. S. Orr and B. Veytsman, 2002][1], another field where majority voting has attracted the interest of researchers in the past is the combination of binary codes for ranging applications. References dating to as early as 1962 can be found in works from [M. F. Easterling, 1962][2], [D.J. Braverman, 1963][3], [R.C. Tausworthe, 1971][4] and [J.J. Spilker, 1977][5]. These works describe how long codes with particularly well selected properties can be developed on the basis of shorter codes combined in an intelligent way. As is well known, long codes are desirable to obtain good auto- and cross-correlation properties; for acquisition, however, shorter codes are preferred to accelerate the process. Majority voting provides an efficient way to multiplex several short codes into a long code with good properties. The interest of this technique is that although the majority voted code is in general far longer than the individual codes it consists of, there exists a substructure that can be used to quickly acquire one of the codes. This principle is in fact described by [M. F. Easterling, 1962][2], where it is shown how several Pseudo Random Noise (PRN) sequences with prime periods can be majority voted to form a code whose period is the product of the individual periods. This longer code presents improved correlation characteristics but still preserves the substructure of the shorter individual codes, thus facilitating acquisition.
It seems that, as shown in [A.R. Pratt and J.I.R. Owen, 2005][6], the majority voting multiplexing technique has not yet been implemented in any real navigation system. Nonetheless, we can find patents where this technique is employed in combination with more sophisticated schemes such as the Interplex [G.L. Cangiani et al., 2004][7]. Majority voting could indeed play an important role in the future, not only for navigation but also for terrestrial networks. The transmission of voice and data at rates higher than those possible today could become reality someday. As described by [R. S. Orr and B. Veytsman, 2002][1], the idea would be to transmit more than one code per service in such a way that different code channels could be assigned to different functions such as pilot, paging, synchronization, control and traffic. In addition, different power allocations could be assigned to different services to avoid the dominance of one or a few.
In addition, the majority voted signal could multiplex more than just additional services or channels. In fact, different codes could also be used to transmit one and the same service, enabling its operation at higher data rates by splitting the data across the different codes, as shown by [R. S. Orr and B. Veytsman, 2002][1]. Indeed, if N codes are used to transmit the same service, each code could carry part of the data message and the total data rate would increase. Of course, care has to be taken to make the codes sufficiently long. The reason is that with several codes running in parallel within the majority voted signal, each code suffers a slight degradation that makes demodulation more complicated. However, this is by far compensated by the achievable increase in data rate and by the fact that the correlation losses of any individual channel remain limited even when the number of multiplexed signals increases.
Another interesting application that derives from the previous discussion could be the use of majority voting to transmit different codes with different lengths from the same satellite. The different codes could have prime lengths and would be selected in such a way that they would be optimum for specific applications. One can think, for example, of an indoor code, an urban-canyon code, or codes with good acquisition or good tracking characteristics. They would all be sent from the same satellite in a single majority voted code. From the receiver point of view, a particular user would only have to care about the particular family of codes of interest, the rest of the codes sent in the majority voted signal remaining invisible to it. For example, indoor receivers would only have to correlate with the particular indoor codes of the constellation, which should be optimized in terms of correlation properties. In the end, an indoor receiver working on indoor codes would not see the effect of the other codes transmitted by the same satellite, except for a correlation loss of never more than 1.96 dB, as we will show next.
To conclude, it is of interest to mention that another highly desirable property of this multiplex method is its transparency to the receiver equipment, in the sense that the receiver does not need to care about what the multiplex of the different signals looks like.
Definition of Majority Voting
The Majority Voting modulation, also known as Majority Combining, is a constant-envelope multiplex technique based on majority-vote logic [J.J. Spilker Jr. and R.S. Orr, 1998][8]. The majority vote approach is basically a time-multiplexing of either the I or Q phase (scalar) or of both of them at the same time (vectorial), where multiple signals are transmitted in a single constant envelope. The basic idea is that the time-multiplexed signal to transmit is selected following a particular logic based on the input signals. In its simplest form, namely the uniformly weighted scalar distribution or equal weighting, the number of signals to multiplex must be odd to ensure a majority in all possible cases. In this approach, the majority vote logic produces a multiplex where each component signal is equally weighted. In its most general form, namely the Generalized Majority Voting (GMV), any odd or even number of signals can in principle be multiplexed with any possible weighting.
Majority voting is a non-linear multiplex technique that provides a convenient and flexible method to multiplex several signals into one constant envelope without multiplexing losses [J.J. Spilker Jr. and R.S. Orr, 1998][8]. Moreover, it elegantly circumvents the peak versus average power trade-off that lossless linear superposition presents when applied through a common aperture (see Linear Modulation (Spatial Combining)). The Majority Voting technique is also of particular interest to secure the acquisition of codes such as the M-Code, where the insertion of particularly well selected sequences would accelerate detection. In the next chapters, the true relevance of majority voting will be underlined by comparing this multiplexing scheme with other better known techniques.
Theory on Majority Voting
Let an odd number of binary spread spectrum codes be multiplexed as proposed by [J.J. Spilker Jr. and R.S. Orr, 1998][8]. Majority logic operates on the principle that at a given time instant the value to transmit is that of the majority of the codes. For this reason, the number of component codes must be odd. Accordingly, if the codes share a common chip rate, the majority voting operation is performed once per chip, while if the rates differ, the majority combination occurs at the least common multiple of the chip rates.
If we think about the functioning of the majority logic, we can see that the majority combination rule is equivalent to computing the numerical sum of the code chips and taking its algebraic sign, as shown by [J.J. Spilker Jr. and R.S. Orr, 1998][8]. Indeed, for the combination of three binary codes [math]\displaystyle{ \left(c_1, c_2, c_3\right) }[/math] the majority code, [math]\displaystyle{ c_{Maj} }[/math], can be written as:

[math]\displaystyle{ c_{Maj} = \operatorname{sgn}\left(c_1 + c_2 + c_3\right) }[/math]
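This sign-of-sum equivalence is easy to check exhaustively. Below is a minimal Python sketch (the chip values are abstract ±1 symbols invented for the example, not any real ranging code):

```python
import itertools

def majority(chips):
    """Majority vote over an odd number of +/-1 chips."""
    return 1 if sum(1 for c in chips if c == 1) > len(chips) // 2 else -1

def sign_of_sum(chips):
    """Equivalent rule: the algebraic sign of the numerical chip sum."""
    return 1 if sum(chips) > 0 else -1

# The two rules agree on all 8 combinations of three binary chips.
for chips in itertools.product((-1, 1), repeat=3):
    assert majority(chips) == sign_of_sum(chips)
```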
Of course similar expressions can be derived for more codes, but the complexity increases with the number of signals to multiplex. Furthermore, it is interesting to note that the previous equation can be used to derive the autocorrelation function of the majority voted signal and correspondingly the total spectrum as a function of the individual PSDs.
The case described in the previous lines is the simplest implementation of the majority voting logic. A generalization can easily be accomplished by means of interlacing, which is the insertion of chips of one or more of the component codes into the output chip stream as a replacement for the corresponding majority chips, as explained by [J.J. Spilker Jr. and R.S. Orr, 1998][8]. Interlacing is an intelligent way to achieve a non-uniform effective power distribution among the codes, as we will see in the next chapters.
Majority Voting: Scalar Combination with Uniform Weighting
Let us assume, as we also did in previous lines, that we want to multiplex an odd number 2N+1 of statistically independent binary signals using majority logic. Furthermore, let us assume that the codes are statistically balanced, so that the chip values can be modelled as independent, identically distributed binary random variables. According to this, the majority voted signal that results from multiplexing the 2N+1 individual signals can be expressed as:

[math]\displaystyle{ c_{Maj} = Maj\left(c_1, c_2, \ldots, c_{2N+1}\right) = \operatorname{sgn}\left(\sum_{i=1}^{2N+1} c_i\right) }[/math]
where the majority operator Maj indicates the sign of the majority of the signals. This signal receives the name of majority voted signal and is the one that will be transmitted instead of the 2N+1 signals. As one can imagine, in order for the correlation losses not to be too high, the majority voted signal should somehow represent each of the 2N+1 individual signals that form it. To measure how true this assumption is, the correlation between the majority voted signal [math]\displaystyle{ c_{Maj} }[/math] and a particular reference code has to be calculated.
The result of a single chip correlation, denoted [math]\displaystyle{ \chi }[/math] in the following lines, equals +1 or -1 depending on the coincidence of the majority voting chip and the replica chip of the particular code we correlate with, assuming perfect alignment. In fact, the majority chip matches the reference chip (thus [math]\displaystyle{ \chi = +1 }[/math]) if and only if at least N chips from the other remaining 2N codes also match it [J.J. Spilker Jr. and R.S. Orr, 1998][8]. Otherwise the correlation will be -1. According to this, the average correlation between any particular code [math]\displaystyle{ c_i }[/math] and the majority voting signal [math]\displaystyle{ c_{Maj} }[/math] will be

[math]\displaystyle{ \hat{\chi} = \left(+1\right)P\left(\chi=+1\right) + \left(-1\right)P\left(\chi=-1\right) }[/math]

which can also be expressed as follows:

[math]\displaystyle{ \hat{\chi} = P\left(\chi=+1\right) - P\left(\chi=-1\right) }[/math]
We assume that
Furthermore, it is trivial to see that
where [math]\displaystyle{ p_N^{2N}\left(+1\right) }[/math] is the probability that exactly N codes out of 2N adopt the value +1. As can be shown, this probability is also equal to that of having N codes out of 2N with value -1 and adopts the following form:

[math]\displaystyle{ p_N^{2N}\left(\pm 1\right) = \binom{2N}{N}\left(\frac{1}{2}\right)^{2N} }[/math]

so that the following identity holds:

[math]\displaystyle{ p_N^{2N}\left(+1\right) = p_N^{2N}\left(-1\right) }[/math]
In the same manner, it can be shown that:
If we further develop (4), we can see that the mean correlation [math]\displaystyle{ \hat{\chi} }[/math] between any particular code [math]\displaystyle{ c_i }[/math] and the majority voting code [math]\displaystyle{ c_{Maj} }[/math] simplifies to:

[math]\displaystyle{ \hat{\chi} = \binom{2N}{N}\left(\frac{1}{2}\right)^{2N} }[/math]

or equivalently

[math]\displaystyle{ \hat{\chi} = \frac{\left(2N\right)!}{\left(N!\right)^2}\left(\frac{1}{2}\right)^{2N} }[/math]
This expression can be further simplified using an approximation based on the Stirling bounds of the factorial function as shown by [W. Feller, 1957][9]:
which is a good approximation even for low values of N.
If we now normalize the amplitude of the majority code to the summed power of the component codes, that is [math]\displaystyle{ \sqrt{2N+1} }[/math], the normalized mean correlation [math]\displaystyle{ \hat{\rho} }[/math] between any particular code [math]\displaystyle{ c_i }[/math] and the majority voting code [math]\displaystyle{ c_{Maj} }[/math] will be

[math]\displaystyle{ \hat{\rho} = \sqrt{2N+1}\,\hat{\chi} = \sqrt{2N+1}\binom{2N}{N}\left(\frac{1}{2}\right)^{2N} }[/math]
The problem of this implementation of the majority voting is that since all the signals are equally weighted in power, there appear relatively large majority combining losses per signal, resulting in relatively poor overall power efficiencies. In fact, for the case of three transmitted signals, the majority vote multiplexing is shown to result in a 1.25 dB multiplexing loss (which corresponds to an efficiency of approximately 75 %). It is important to keep in mind that when three signals are majority voted, N adopts the value 1 in the previous equations. The next table exemplifies the possible chip combinations for majority combining of three codes and the correlation between each of the three codes and the majority voted code.
As we can see, the unnormalized correlation between any particular code and the majority voting code adopts the value +1 in 18 cases and -1 in 6 cases. Probabilistically speaking, this implies that:

[math]\displaystyle{ P\left(\chi=+1\right)=\frac{18}{24}=\frac{3}{4}, \qquad P\left(\chi=-1\right)=\frac{6}{24}=\frac{1}{4} }[/math]
so that the unnormalized mean correlation will be [math]\displaystyle{ \hat{\chi}=1/2 }[/math], resulting in an apparent loss of 6.02 dB. If we now consider the power of the three codes, the normalized correlation will be [math]\displaystyle{ \hat{\rho}=\sqrt{3}/2 }[/math], which corresponds to the 1.25 dB loss we mentioned some lines above. This result coincides perfectly with the predictions of the theory developed in the previous equations.
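The 18/6 chip count and the resulting 1.25 dB figure can be reproduced by brute-force enumeration. A minimal Python sketch (purely illustrative; the ±1 chips stand in for arbitrary balanced codes):

```python
import itertools, math

matches = mismatches = 0
for chips in itertools.product((-1, 1), repeat=3):  # the 8 chip patterns
    maj = 1 if sum(chips) > 0 else -1               # majority voted chip
    for c in chips:                                 # compare each code chip
        if c == maj:
            matches += 1
        else:
            mismatches += 1

assert (matches, mismatches) == (18, 6)
chi = (matches - mismatches) / (matches + mismatches)  # unnormalized mean correlation
rho = math.sqrt(3) * chi                               # normalized to the summed power
loss_db = -20 * math.log10(rho)                        # multiplexing loss, about 1.25 dB
```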
The correlation power loss factor L(N) is defined by [J.J. Spilker Jr. and R.S. Orr, 1998][8] as the fraction of the power of any code in the majority voted signal measured at the correlator output, and is shown to adopt the following form:

[math]\displaystyle{ L\left(N\right) = \hat{\rho}^2 = \left(2N+1\right)\left[\binom{2N}{N}\left(\frac{1}{2}\right)^{2N}\right]^2 }[/math]
or expressed in dB,
It is interesting to analyze this equation when the number of equally weighted inputs increases. In fact, when [math]\displaystyle{ N\to\infty }[/math], (13) shows that the achievable per-code correlation asymptotically approaches [math]\displaystyle{ \sqrt{2/\pi} }[/math], so that the correlation losses increase as the number of multiplexed signals grows, but will never be higher than 1.96 dB. In fact, when the receiver performs a correlation among all the possibilities, some of the received chips will be wrong, but their number is limited according to the expressions derived above. The following figure shows the losses as a function of the number of multiplexed signals. To compute the curve, the exact formula of the majority losses was employed; the difference with respect to the Stirling approximation is, however, minimal even for low numbers of signals.
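Assuming the binomial expression for the mean correlation reconstructed above, the loss curve can be evaluated directly. A Python sketch (the function name `correlation_loss_db` is an invention for this example):

```python
import math

def correlation_loss_db(N):
    """Per-code correlation power loss (dB) when 2N+1 equally weighted
    codes are majority voted, using the exact binomial expression."""
    rho = math.sqrt(2 * N + 1) * math.comb(2 * N, N) / 4 ** N
    return -20.0 * math.log10(rho)

# Losses grow with the number of signals but never exceed 10*log10(pi/2).
asymptote_db = 10 * math.log10(math.pi / 2)          # ~1.96 dB
for N in (1, 2, 5, 20, 200):
    assert correlation_loss_db(N) < asymptote_db
assert abs(correlation_loss_db(1) - 1.25) < 0.01     # three signals
```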
Another important drawback of this simple implementation of the majority vote multiplexing is that it is difficult to control the relative power levels of the different multiplexed signals without incurring additional losses, as shown in [P.A. Dafesh et al., 2006][10]. Indeed, in the previous derivations all codes or signals are assigned the same power level. Furthermore, this multiplexing technique does not provide sufficient spectral separation and has limited inherent flexibility in adjusting the amplitude of the generated harmonics, all of which are significant disadvantages.
A way to achieve an arbitrary weighting of the power of the signals to multiplex is to use a statistical mix of majority vote rules operating on appropriately chosen subsets of the input chips of each signal [P.A. Dafesh et al., 2006][10]. Here, the power distribution is realised by adjusting the relative frequency of use of the various majority vote rules. As one can imagine, a particular power distribution can be accomplished with different majority vote rules, and the optimum among all possible solutions is the one with the smallest multiplexing losses. This will be further clarified in the next chapters.
To conclude this chapter it is important to mention that the correlation loss can also be interpreted as the additional fraction of transmit power [math]\displaystyle{ \delta P }[/math] that is required to neutralize the receiver performance loss, so that it can be defined as follows:
Majority Voting: Scalar Combination with Non-Uniform Weighting
In the previous chapter we analyzed the simplest realization of the majority voting technique, namely the uniformly weighted version, where all the signals contribute the same power to the majority voted signal. As we saw there, this solution offers limited flexibility to meet the power requirements of typical GNSS systems. To overcome this, this chapter describes the non-uniform solution.
The non-uniform weighting can be likened to shareholder voting, as graphically described by [R. S. Orr and B. Veytsman, 2002][1]. In fact, based on a targeted power allocation, each of the signals to multiplex is allocated a number of votes, which may be fractional in the most general case. Then, at each chip epoch, the transmitted majority voted value is selected by taking the sign of the sum of the weighted chips of the individual codes. Without loss of generality, the codes or channels are assumed to be binary. This is shown in the following expression proposed by [R. S. Orr and B. Veytsman, 2002][1]:

[math]\displaystyle{ c_{Maj} = \operatorname{sgn}\left(\sum_{i=1}^{N}\lambda_i c_i\right) }[/math]
where [math]\displaystyle{ \lambda_i }[/math] is the number of votes allocated to the i-th of the N signals, [math]\displaystyle{ c_i }[/math] represents the chip value of the i-th signal and [math]\displaystyle{ c_{Maj} }[/math] is the majority voted chip value. As we can see in the expression above, this generalized form of majority voting also includes the particular case of (2) where all the weights are equal. Moreover, it is easy to recognize that this adaptation of majority logic enables a constant-envelope multiplexing of an arbitrary distribution of chip-synchronous CDMA signals. In addition, there is no constraint on the number of signals that can be multiplexed, so that an even number of codes could also be majority voted using this general approach. One final comment on the previous equation is that the weighting factors must be selected in such a way that the summation is different from zero at all times.
As we can read from (18), the key to achieve an efficient multiplexing is the correct selection of the weighting factors so that the resulting composite signal does indeed reflect the desired power distribution among the various user signals. As shown by [R. S. Orr and B. Veytsman, 2002][1], if all the chips of the signals to multiplex were weighted as in a linear multiplexer in proportion to the square root of their power allocation, (18) would not reflect in general the desired power distribution. In fact, those signals with small amounts of power could become suppressed relative to more powerful codes.
Let us assume that we want to majority vote two signals with power ratios of 20 % and 80 % of the total power respectively, meaning that one signal is four times stronger than the other. As mentioned in the previous lines, a linear multiplexer would assign the coefficients 1 and 2 to the weak and strong signals according to:

[math]\displaystyle{ c_{Maj} = \operatorname{sgn}\left(c_1 + 2c_2\right) }[/math]
If we show now the possible chip combinations of this scheme and the correlation between each of the two codes and the majority voted code, we have:
As we can recognize, while the mean correlation of code 1 with the majority voting signal is zero, code 2 presents a perfect correlation of 100 %.
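This capture effect can be confirmed by enumerating the four possible chip pairs. A short Python sketch, assuming the linear weights 1 and 2 derived above:

```python
import itertools

LAM1, LAM2 = 1, 2  # linear weights for the 20 % / 80 % power split

corr1 = corr2 = 0
for c1, c2 in itertools.product((-1, 1), repeat=2):
    maj = 1 if LAM1 * c1 + LAM2 * c2 > 0 else -1  # weighted majority chip
    corr1 += c1 * maj
    corr2 += c2 * maj

# Code 1 never sways the vote; code 2 always matches the majority chip.
assert corr1 / 4 == 0.0   # weak signal fully suppressed ("captured")
assert corr2 / 4 == 1.0   # strong signal perfectly represented
```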
As we can see, the weakest signal is not reflected at all in the majority voted signal, as the information from code 1 has been lost. This small-signal suppression, or capture, is a well-known result of non-linear signal processing operations and indeed reflects the fact that no coalition of minority stockholders can ever outvote a 51 % majority interest, as graphically expressed by [R. S. Orr and B. Veytsman, 2002][1].
With the previous example, we have demonstrated that a faithful representation of a commanded power distribution cannot be achieved in the most general case using a linear multiplexer. Indeed, the signal weight cannot be the square root of the power allocation unless the signals to multiplex are Gaussian distributed.
[R. S. Orr and B. Veytsman, 2002][1] have derived a set of equations that give an elegant solution to this problem. According to the algorithm presented in their work, the cross-correlation between the majority voted code and a particular component code is constrained by a set of equations in such a way that the power allocated to the particular signal is equal to the square of the corresponding correlation between this signal and the majority voted signal. In addition, a coefficient of proportionality is also introduced in the model in order to control the efficiency, or multiplexing losses, common to each code. The solution of the equations provides the appropriate weighting of each signal, maximizing the efficiency for the desired power ratio. The model is extremely non-linear and therefore, to simplify its resolution, [R. S. Orr and B. Veytsman, 2002][1] propose to assign each of the component codes to one of two groups, designated as Gaussian and non-Gaussian.
The components assigned to the Gaussian group G are typically small in power but numerous. As one can imagine, the division between Gaussian and non-Gaussian (NG) is not always straightforward. Normally, the criterion to define a group of signals as Gaussian is that the weighted sum of their chips, with the weights proportional to the square root of the power allocations, has a power that is less than a specified fraction of the total power, typically 5 % to 10 %. It must be underlined that, although well defined statistical tests exist to decide on the Gaussianity of a group of signals, the determination of this decision threshold is relatively flexible and up to the designer. Indeed, the threshold value for this test is a parameter that permits some flexibility. [R. S. Orr and B. Veytsman, 2002][1] have shown that even in cases where the Gaussian group does not behave ideally according to theory, the algorithm delivers good solutions.
Taking (18), the majority voted signal will be formed in this case as follows:

[math]\displaystyle{ c_{Maj} = \operatorname{sgn}\left(\sum_{i=1}^{N^G}\lambda_i^G c_i^G + \sum_{i=1}^{N^{NG}}\lambda_i^{NG} c_i^{NG}\right) }[/math]
where the superscript G refers to the Gaussian group of signals, NG to the non-Gaussian group, and [math]\displaystyle{ N^G }[/math] and [math]\displaystyle{ N^{NG} }[/math] are respectively the number of Gaussian and non-Gaussian signals.
The commanded power distribution is described by a set of non-decreasing ratios [math]\displaystyle{ \left \{ R_i \right \} }[/math] with [math]\displaystyle{ 0 \le i \le N^{NG} }[/math], where the lowest ratio,[math]\displaystyle{ R_0 }[/math], describes the power of the Gaussian group and is normalized to 1 as shown by [R. S. Orr and B. Veytsman, 2002] [1]. Accordingly, the remaining signals [math]\displaystyle{ 1 \le i \le N^{NG} }[/math] will represent the non-Gaussian group. Following this notation, [math]\displaystyle{ R_i }[/math] would indicate that the non-Gaussian signal [math]\displaystyle{ c_i }[/math] has a power [math]\displaystyle{ R_i }[/math] times that of the Gaussian group.
The Gaussian group signals are assigned weighting factors equal to the square root of their power allocation. Therefore, if all the [math]\displaystyle{ N^G }[/math] signals had the same power, and given that the whole power of the Gaussian group is normalized to unity, each code of the group would be allocated a power [math]\displaystyle{ 1/N^G }[/math], the weighting factor of all the signals in the Gaussian group consequently being [math]\displaystyle{ 1/\sqrt{N^G} }[/math]. The composite Gaussian group [math]\displaystyle{ S^G }[/math] is normalized to have unit power and zero mean; its probability density function is therefore defined as follows:

[math]\displaystyle{ f_{S^G}\left(x\right) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2} }[/math]
Since in the next lines the probability of [math]\displaystyle{ S^G }[/math] lying in an arbitrary region [math]\displaystyle{ - x \lt S^G \lt x }[/math] will appear relatively often, it is worth recalling the value of this probability:

[math]\displaystyle{ P\left(-x \lt S^G \lt x\right) = \frac{1}{\sqrt{2\pi}}\int_{-x}^{x}e^{-t^2/2}\,dt }[/math]
This result can be further simplified, at least in notation, if we express it in terms of the mathematical error function, which is defined as follows:

[math]\displaystyle{ \operatorname{erf}\left(x\right) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\,dt }[/math]
According to this, the probability that the Gaussian group variable [math]\displaystyle{ S^G }[/math] is between -x and x can also be expressed as follows:

[math]\displaystyle{ P\left(-x \lt S^G \lt x\right) = \operatorname{erf}\left(\frac{x}{\sqrt{2}}\right) }[/math]
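The relation between the Gaussian tail probability and the error function can be verified numerically. A Python sketch using the standard library's `math.erf` and a simple midpoint integration (the step count is an arbitrary choice for the example):

```python
import math

def prob_between(x, steps=100_000):
    """P(-x < S_G < x) for a zero-mean, unit-power Gaussian variable,
    via midpoint integration of the standard normal density."""
    dt = 2.0 * x / steps
    total = 0.0
    for k in range(steps):
        t = -x + (k + 0.5) * dt
        total += math.exp(-0.5 * t * t) * dt
    return total / math.sqrt(2.0 * math.pi)

# The numerical integral matches erf(x / sqrt(2)) to high accuracy.
for x in (0.5, 1.0, 2.0):
    assert abs(prob_between(x) - math.erf(x / math.sqrt(2.0))) < 1e-6
```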
As we have emphasized in previous lines, the collective power ratio [math]\displaystyle{ R_0 }[/math] of the Gaussian signal codes must be unity. This means that the commanded power allocations of the Gaussian signals [math]\displaystyle{ P_1^G,P_2^G,\cdots,P_{N^G}^G }[/math] must be normalized as follows:

[math]\displaystyle{ \sum_{i=1}^{N^G} P_i^G = 1 }[/math]
and the corresponding weighting coefficients will thus adopt the following form:

[math]\displaystyle{ \lambda_i^G = \sqrt{P_i^G} }[/math]
For the non-Gaussian group codes, the power ratios of the [math]\displaystyle{ N^{NG} }[/math] non-Gaussian signal codes are equally determined as shown next [R. S. Orr and B. Veytsman, 2002] [1]:
As we can see, the ratio [math]\displaystyle{ R_i }[/math] indicates the relative power between the non-Gaussian code i and the power of the Gaussian group.
We have to derive now the correlation equations to find the optimum weighting factors. Indeed, the power allocated to each non-Gaussian signal should be the square of the correlation between that code and the majority voted signal.
Since each chip in the multiplex can adopt two values, there is a total of [math]\displaystyle{ 2^{N^{NG}} }[/math] possibilities at any time for the [math]\displaystyle{ N^{NG} }[/math] non-Gaussian chips [R. S. Orr and B. Veytsman, 2002][1]. Let us define [math]\displaystyle{ \hat{c}^{NG} }[/math] as a combination of [math]\displaystyle{ N^{NG} }[/math] non-Gaussian chips such that

[math]\displaystyle{ S^{NG}\left(\hat{c}^{NG}\right) = \sum_{i=1}^{N^{NG}}\lambda_i c_i }[/math]
and [math]\displaystyle{ S_i^{NG}\left(\hat{c}^{NG}\right) }[/math] equals (28) except for the exclusion of the i-th chip,

[math]\displaystyle{ S_i^{NG}\left(\hat{c}^{NG}\right) = \sum_{j=1, j \ne i}^{N^{NG}}\lambda_j c_j }[/math]
If we recall (18), the Majority Vote (MV) signal in its general form is shown to be:

[math]\displaystyle{ c_{Maj} = \operatorname{sgn}\left(S^G + S^{NG}\left(\hat{c}^{NG}\right)\right) }[/math]
If we look at a particular non-Gaussian code [math]\displaystyle{ c_i^{NG} }[/math], the value of the sum [math]\displaystyle{ S^G + S^{NG}\left(\hat{c}^{NG}\right) }[/math] can adopt the two following values:

[math]\displaystyle{ S^G + S_i^{NG}\left(\hat{c}^{NG}\right) \pm \lambda_i }[/math]

depending on whether the chip [math]\displaystyle{ c_i^{NG} }[/math] equals +1 or -1,
being thus the correlation between the replica desired signal [math]\displaystyle{ c_i^{NG} }[/math] and [math]\displaystyle{ c_{Maj} }[/math] as follows:
where [math]\displaystyle{ p\left[ S_i^{NG}\left(\hat{c}^{NG}\right) \right] }[/math] indicates the probability that the non-Gaussian sum adopts a particular value determined by [math]\displaystyle{ \hat{c}^{NG} }[/math]. Furthermore, the sign function is defined as:

[math]\displaystyle{ \operatorname{sgn}\left(x\right) = \begin{cases} +1 & x \gt 0 \\ -1 & x \lt 0 \end{cases} }[/math]
It is important to note that the coefficients must be selected such that the sum of all the weighted signals is never zero. Moreover, since the chips are balanced and independent, the probability of having a specific combination of non-Gaussian codes is shown to adopt the following form:

[math]\displaystyle{ p\left(\hat{c}^{NG}\right) = \left(\frac{1}{2}\right)^{N^{NG}} }[/math]
If we further develop (32), we have a mean correlation:
which can also be expressed as:
If we have a look now at the next figure representing the probability density function of the Gaussian group,
it is easy to recognize that
and
Therefore, the mean correlation can be simplified to:
The expression derived above corresponds to a particular combination [math]\displaystyle{ \hat{c}_i^{NG} }[/math] of [math]\displaystyle{ N^{NG}-1 }[/math] non-Gaussian chips, in which all non-Gaussian codes except code i were considered. To extend the result to all code combinations, and thus obtain the mean correlation for any component signal, we only have to sum over [math]\displaystyle{ \hat{c}_i^{NG} }[/math] as shown next:
This expression can be further simplified if we recall that summing (39) over [math]\displaystyle{ \hat{c}_i^{NG} }[/math] and over [math]\displaystyle{ \hat{c}^{NG} }[/math] (that is, over all code combinations including that of code i) yields similar results except for a factor of 2. In fact, it can be shown that
As we can see, the last line of the previous equation sums over [math]\displaystyle{ \hat{c}^{NG} }[/math] and not [math]\displaystyle{ \hat{c}_i^{NG} }[/math].
In the same manner, the correlation between the Gaussian group and the majority voted signal can be approximated as follows:
An example confirming the validity of the previous expression is briefly presented in the next lines. Let us imagine that our majority voted signal adopts the following form:

[math]\displaystyle{ c_{Maj} = \operatorname{sgn}\left(S^G + 5c_1 + 10c_2\right) }[/math]
As we can recognize, the majority voted multiplexed signal consists of the Gaussian group and two non-Gaussian codes [math]\displaystyle{ c_1 }[/math] and [math]\displaystyle{ c_2 }[/math], weighted with 5 and 10 respectively. We analyze next all possible cases. For the particular combination [math]\displaystyle{ \left(c_1,c_2\right)=\left(-1,-1\right) }[/math], the mean value of the sum of the Gaussian and non-Gaussian signals [math]\displaystyle{ S^G+S^{NG} }[/math] will be -15, and the probability density function adopts the following form:
where the area in red indicates the value of the mean correlation for this particular combination of non-Gaussian codes. Indeed, the mean correlation in this case is shown to adopt the following value:
This can also be expressed as a function of the weighting factors:
As one can observe, this is the unnormalized correlation between the Gaussian group and the majority voted signal for this particular combination of non-Gaussian codes. To normalize the expression, we only have to multiply by [math]\displaystyle{ \left\Vert \lambda_1 + \lambda_2 \right\Vert }[/math], resulting thus in:
In the same manner, for the combination of non-Gaussian codes [math]\displaystyle{ \left(c_1,c_2\right)=\left(-1,+1\right) }[/math], the mean value of the sum of the Gaussian and non-Gaussian signals [math]\displaystyle{ S^G+S^{NG} }[/math] will be 5 in this case, yielding the mean unnormalized correlation:
or again for any arbitrary two weighting factors:
so that the normalized expression will be:
In the same manner, the mean normalized correlation for the code combination [math]\displaystyle{ \left(c_1,c_2\right)=\left(+1,-1\right) }[/math] will be:
and for [math]\displaystyle{ \left(c_1,c_2\right)=\left(+1,+1\right) }[/math],
Grouping now all the previous normalized correlations, the mean value will then be:
where [math]\displaystyle{ S^{NG} = 5c_1 + 10c_2 }[/math] in this particular example. Moreover, the erfc function can be well approximated as follows when the argument is higher than 3 (as is the case in Majority Vote combinations) [M. Abramowitz and I.A. Stegun, 1965] [11]:
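The approximation referred to here is the leading term of the asymptotic expansion of the complementary error function, [math]\displaystyle{ \operatorname{erfc}(x) \approx e^{-x^2}/\left(x\sqrt{\pi}\right) }[/math]. A quick numerical check of its accuracy for arguments above 3 (a sketch using only the Python standard library):

```python
import math

def erfc_asymptotic(x: float) -> float:
    """Leading term of the asymptotic expansion of erfc(x),
    accurate for large arguments (here x >= 3, as in majority-vote combinations)."""
    return math.exp(-x * x) / (x * math.sqrt(math.pi))

# Compare against the exact erfc for arguments >= 3.
for x in (3.0, 4.0, 5.0):
    exact = math.erfc(x)
    approx = erfc_asymptotic(x)
    print(f"x={x}: exact={exact:.3e} approx={approx:.3e} "
          f"rel.err={abs(approx / exact - 1.0):.1%}")
```

The relative error is about 5 % at x = 3 and shrinks as the argument grows, since the next term of the expansion is of order [math]\displaystyle{ 1/\left(2x^2\right) }[/math].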
For our particular case, this implies that the correlation between the Gaussian group and the majority voted signal can be approximated for any generic non-Gaussian code combination as follows:
which is the expression presented in (42).
Dividing now (41) by (54) and squaring, the power ratio of code [math]\displaystyle{ c_i^{NG} }[/math] will adopt the following form:
which coincides with the formula derived by [R. S. Orr and B. Veytsman, 2002] [1]. Once we have the ratio of the power of any code [math]\displaystyle{ c_i^{NG} }[/math] of the non-Gaussian group with respect to the Gaussian group, the power loss factor of all the multiplexed signals can be expressed in terms of the losses of the Gaussian group multiplied by the total power of the signal, since all the power ratios are normalized to the power of the Gaussian group:
or simplified:
which also coincides with the expression derived by [R. S. Orr and B. Veytsman, 2002] [1].
Once we have derived the general expressions for the losses of the majority vote multiplex, and thus the efficiency of the modulation for a targeted power distribution, the next step is to find the optimum weighting factors. This problem is basically a minimization exercise that consists of finding the weighting factors that minimize the total losses subject to the envisaged power division between the different signals. This can be expressed as follows:
where
and
[R. S. Orr and B. Veytsman, 2002] [1] have proposed an efficient way to solve this problem where the set of weighting factors [math]\displaystyle{ \left\{\lambda_i^{NG}\right\} }[/math] is efficiently calculated.
== Generalized Majority Voting (GMV): Cyclostationary Solutions ==
In the previous chapter, different majority vote solutions were derived for the case where the weighting rule applied to each instantaneous set of component channels is constant over time. Furthermore, theory was presented to derive the weighting factors required to obtain a desired power distribution. However, as shown by [R. S. Orr and B. Veytsman, 2002] [1], the targeted power allocation of the different services cannot always be accomplished on the basis of a stationary approach, and a cyclostationary solution is then required. In this case, the weighted majority voting rules exhibit time variation, with the weighting coefficients varying over time. The time variation is applied periodically over the largest available processing interval, as shown by [R. S. Orr and B. Veytsman, 2002] [1], this interval normally being the shortest data symbol of any of the component codes of the multiplex. The cyclostationary power allocation can be further tuned by averaging different weighting schemes over time.
== Generalized Majority Voting (GMV): Sub-Majority Voting ==
As shown by [R. S. Orr and B. Veytsman, 2002][1], the stationary solutions that result from applying the theory of the previous chapters to the commanded powers of the majority voted signal present a quantized behaviour, in the sense that the achieved gains do not change when sufficiently small changes in the vote allocation are made. Indeed, a break point only occurs when a slight change in the vote allocation permits some coalition of votes to dominate in a situation where it previously could not. In this sense, if an accurate allocation of the powers to the different signals is required, it can only be achieved on the basis of a cyclostationary solution as introduced in the previous lines.
Based on this idea, a simple way of decreasing the effective power allocated to a particular channel is to omit its code from a certain number of majority votes, resulting in the so-called sub-majority voting. When this occurs, certain codes or signals do not participate in the sub-majority vote, thus changing the weightings of the different codes or channels over time (hence a cyclostationary solution). It must be noted that while the weighting might change relatively often, it remains constant for a relatively long period of time, on the order of the length of a bit. This can also be understood as time-multiplexing the different signals according to a predefined scheme. In the following lines we analyze the implementation proposed by [G. L. Cangiani et al., 2002] [12]. To illustrate the functioning of this cyclostationary solution, we take as an example the case where three signal codes are majority voted.
As shown by [G. L. Cangiani et al., 2002][12], when three signals are combined using Generalized Majority Voting (GMV) on a sub-majority voting basis, there are four possible elements to consider: the majority vote of the three chips and the three individual chips themselves. As can be demonstrated, if one of the codes is weaker than the other two, the transmission of solo chips from that code never results in an efficient solution. Accordingly, if the targeted power distribution is [math]\displaystyle{ \left\{G_1,G_2,G_3\right\} }[/math], with the gains listed in non-decreasing order, the majority vote [math]\displaystyle{ Maj\left\{c_1,c_2,c_3\right\} }[/math] and the solo chips [math]\displaystyle{ c_2 }[/math] and [math]\displaystyle{ c_3 }[/math] should be transmitted for the following fractions of time [R. S. Orr and B. Veytsman, 2002] [1]:
where the three time fractions sum to unity, as one would expect. The resulting signal, with interlacing of the majority vote of [math]\displaystyle{ c_1,c_2 }[/math] and [math]\displaystyle{ c_3 }[/math] and the solo chips [math]\displaystyle{ c_2 }[/math] and [math]\displaystyle{ c_3 }[/math], will be referred to as [math]\displaystyle{ \left\{Maj\left(c_1,c_2,c_3\right),c_2,c_3\right\} }[/math].
The interpretation of the time fractions is as follows. Let us assume that each data bit contains 100 chips and that the commanded power distribution is [math]\displaystyle{ \left\{G_1,G_2,G_3\right\}=\left\{1,4,9\right\} }[/math]. According to (61), the time fraction values will be [math]\displaystyle{ \left\{t_{Maj},t_{c_2},t_{c_3}\right\}=\left\{2/5,1/5,2/5\right\} }[/math]. This means that out of the 100 chips of a bit, 40 will directly correspond to the highest gain code [math]\displaystyle{ c_3 }[/math]. In the same manner, another 20 will be devoted to the medium gain code [math]\displaystyle{ c_2 }[/math], and the final 40 chips are determined by the true majority vote of the chips of the three codes, as described in the previous sections.
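The time fractions can be reproduced with a short derivation sketch (this is not necessarily the exact form of equation (61), which is not shown here): over the majority-voted chips each code correlates with [math]\displaystyle{ Maj\left(c_1,c_2,c_3\right) }[/math] with value 1/2, a solo chip correlates perfectly with its own code, and the effective amplitudes are required to be proportional to [math]\displaystyle{ \sqrt{G_i} }[/math]. The function name below is illustrative:

```python
import math

def sub_majority_time_fractions(g1: float, g2: float, g3: float):
    """Time fractions {t_Maj, t_c2, t_c3} of the three-code sub-majority vote
    {Maj(c1,c2,c3), c2, c3}, for commanded gains G1 <= G2 <= G3.

    Sketch of the derivation: effective amplitudes
        a1 = t_Maj/2,  a2 = t_Maj/2 + t_c2,  a3 = t_Maj/2 + t_c3
    must be proportional to sqrt(G_i), with t_Maj + t_c2 + t_c3 = 1.
    """
    s1, s2, s3 = math.sqrt(g1), math.sqrt(g2), math.sqrt(g3)
    k = 1.0 / (s2 + s3)          # common amplitude scale
    t_maj = 2.0 * s1 * k
    t_c2 = (s2 - s1) * k
    t_c3 = (s3 - s1) * k
    return t_maj, t_c2, t_c3

# Worked example from the text: gains {1, 4, 9} -> fractions {2/5, 1/5, 2/5}.
print(sub_majority_time_fractions(1, 4, 9))  # (0.4, 0.2, 0.4)
```

For the {1, 4, 9} example this yields exactly the {2/5, 1/5, 2/5} fractions quoted above.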
As shown by [G. L. Cangiani et al., 2002][12], the interest of this approach is that the combining losses are distributed uniformly over the three original signals, in such a way that all the signals suffer the same percentage loss of power. The efficiency of the three-code multiplex is shown to adopt the following form:
which can also be expressed as follows, taking as power reference the signal with the lowest gain, namely [math]\displaystyle{ c_1 }[/math]:
The next figure depicts the efficiency of the majority vote as a function of the power ratios [math]\displaystyle{ G_2/G_1 }[/math] and [math]\displaystyle{ G_3/G_1 }[/math]:
As we can recognize in the previous figure, when the power of one of the codes is much larger than that of either of the other two, the multiplex efficiency approaches 100 %. However, when one code is much weaker than the other two, the efficiency deteriorates to approximately 50 %. If the three codes have similar power levels, the efficiency is close to 75 %. Although these values are only valid for the three-code multiplexing case, similar behaviours are found as the number of codes to multiplex increases. Indeed, majority voting presents the great inconvenience that, in the presence of a few codes monopolizing the total power, the efficiency reduces considerably.
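The three limiting behaviours just quoted can be checked with a sketch: assuming each code's effective amplitude in the interlaced multiplex is proportional to [math]\displaystyle{ \sqrt{G_i} }[/math] and that the losses are shared uniformly (as stated above), the efficiency takes the form [math]\displaystyle{ \eta = \left(G_1+G_2+G_3\right)/\left(\sqrt{G_2}+\sqrt{G_3}\right)^2 }[/math]. This is a derived approximation, not necessarily the exact expression behind the figure:

```python
import math

def three_code_gmv_efficiency(g1: float, g2: float, g3: float) -> float:
    """Multiplex efficiency of the three-code sub-majority vote
    {Maj(c1,c2,c3), c2, c3} for gains G1 <= G2 <= G3 (a sketch derived
    from the equal-percentage-loss property stated in the text)."""
    return (g1 + g2 + g3) / (math.sqrt(g2) + math.sqrt(g3)) ** 2

print(three_code_gmv_efficiency(1, 1, 1))     # equal powers      -> 0.75
print(three_code_gmv_efficiency(1e-6, 1, 1))  # one code weaker   -> ~0.5
print(three_code_gmv_efficiency(1, 1, 1e6))   # one code dominant -> ~1.0
```

The same formula also gives 14/25 = 56 % for the {1, 4, 9} example, consistent with the time fractions discussed above.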
In the previous three-code case, the solution for the time fraction of each individual code or majority voted code was unique. Indeed, the number of target parameters was two (since the three power ratios sum to unity, only two ratios are free) and the number of free variables was also two (since the three time fractions also sum to unity, only two are free). However, this is not always the case. Indeed, as the number of signals to multiplex increases, so does the number of possible time-fraction solutions that deliver the targeted power ratios. As an example, in the five-code case, the Generalized Majority Vote (GMV) signal could consist of the following signals [G. L. Cangiani et al., 2002] [12]:
- One five-way majority-vote code
- [math]\displaystyle{ \binom{5}{3} }[/math] three-way code combinations
- Four solo chips. It must be noted that the weakest code is not used as shown above
- [math]\displaystyle{ \binom{5}{4} }[/math] four-way code combinations where one of the codes is weighted twice.
If we consider all the potential components of the GMV signal described above, there are a total of 20 elements. Nevertheless, since the time fractions must sum to unity, the number of free variables is actually 19. This number is however far greater than the number of commanded powers (four in the case of five multiplexed signals), so that in the end the most efficient multiplex can only be found by searching over all potential combinations.
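The count of 20 elements can be verified by direct enumeration (a sketch; the labels are illustrative):

```python
from itertools import combinations

codes = (0, 1, 2, 3, 4)  # five component codes, index 0 = weakest

# Potential elements of the five-code GMV signal listed in the text:
components = (
    [("maj5", codes)]                                # one five-way majority vote
    + [("maj3", c) for c in combinations(codes, 3)]  # C(5,3) = 10 three-way votes
    + [("solo", (i,)) for i in codes[1:]]            # 4 solo chips (weakest code omitted)
    + [("maj4", c) for c in combinations(codes, 4)]  # C(5,4) = 5 four-way votes
)                                                    # (one code double-weighted)

print(len(components))      # 20 elements in total
print(len(components) - 1)  # 19 free time fractions (they sum to unity)
```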
Last but not least, it is important to mention that the cyclostationary solutions described above also fall under the general mathematical description given by (18). In fact, recalling the general equation of GMV, the majority voted code will adopt the following form:
with the sub-majority vote factors [math]\displaystyle{ \lambda_i\left(k\right) }[/math] taking values of 0 or 1 in this case. It is important to mention here that, while in the approach above the weighting was achieved by averaging over time, in the section “Majority Voting: Scalar Combination with Non-Uniform Weighting” this effect was mainly achieved by selecting a proper weighting factor.
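As an illustration, the sub-majority vote can be simulated as a per-chip 0/1 weighting schedule realizing the {2/5, 1/5, 2/5} fractions of the three-code example above (a Monte Carlo sketch with assumed names; the chip counts per bit follow the 100-chip example):

```python
import random

random.seed(2011)

def sign(x: int) -> int:
    return 1 if x >= 0 else -1

# Per-chip 0/1 weighting schedule over one 100-chip bit, realizing the
# time fractions {t_Maj, t_c2, t_c3} = {2/5, 1/5, 2/5} of the text:
# 40 true-majority chips, 20 solo-c2 chips, 40 solo-c3 chips.
schedule = [(1, 1, 1)] * 40 + [(0, 1, 0)] * 20 + [(0, 0, 1)] * 40

n_bits = 2000
n_chips = n_bits * len(schedule)
corr = [0.0, 0.0, 0.0]
for _ in range(n_bits):
    for lam in schedule:
        c = [random.choice((-1, 1)) for _ in range(3)]  # independent code chips
        s = sign(sum(l * ci for l, ci in zip(lam, c)))  # GMV composite chip
        for i in range(3):
            corr[i] += c[i] * s / n_chips

# Effective amplitudes ~ {0.2, 0.4, 0.6}, i.e. power ratios {1, 4, 9}.
print([round(r, 2) for r in corr])
```

The measured correlations approach {1/5, 2/5, 3/5}, whose squares recover the commanded {1, 4, 9} power distribution.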
== References ==
1. [R. S. Orr and B. Veytsman, 2002] R. S. Orr and B. Veytsman, Methods and Apparatus for Multiplexing Signal Codes via Weighted Majority Logic, Patent number WO/2002/069516, International Application No. PCT/US2002/005657, published 6 September 2002.
2. [M. F. Easterling, 1962] M. F. Easterling, A Skin-Tracking Radar Experiment Involving the COURIER Satellite, IRE Transactions on Space Electronics and Telemetry, vol. SET-8, pp. 76-64, June 1962.
3. [D.J. Braverman, 1963] D.J. Braverman, A Discussion of Spread Spectrum Composite Codes, Aerospace Corporation, Report TDK-269, December 1963.
4. [R.C. Tausworthe, 1971] R.C. Tausworthe, Practical Design of Third-Order Phase-Locked Loops, Jet Propulsion Laboratory, California Institute of Technology (internal document) TR 900-450, pp. 19-30, April 1971.
5. [J.J. Spilker, 1977] J.J. Spilker, Digital Communications by Satellite, Prentice Hall, Englewood Cliffs, N.J., pp. 600-603, 1977.
6. [A.R. Pratt and J.I.R. Owen, 2005] A.R. Pratt and J.I.R. Owen, Signal Multiplex Techniques in Satellite Channel Availability - Possible Applications to Galileo, Proceedings of the International Technical Meeting of the Institute of Navigation, ION-GNSS 2005, 13-16 September 2005, Long Beach, California, USA.
7. [G.L. Cangiani et al., 2004] G.L. Cangiani, R. Orr and C.Q. Nguyen, Intervote Modulator, European Patent 1334595, granted 22 December 2004.
8. [J.J. Spilker Jr. and R.S. Orr, 1998] J.J. Spilker Jr. and R.S. Orr, Code Multiplexing Via Majority Logic for GPS Modernization, Proceedings of ION GPS 1998, 15-18 September 1998, Nashville, Tennessee, USA.
9. [W. Feller, 1957] W. Feller, An Introduction to Probability Theory and Its Applications, John Wiley and Sons, New York, 1957, p. 54.
10. [P.A. Dafesh et al., 2006] P.A. Dafesh and T.M. Nguyen, Quadrature Product Subcarrier Modulation System, Patent US 7120198, granted 10 October 2006.
11. [M. Abramowitz and I.A. Stegun, 1965] M. Abramowitz and I.A. Stegun, Handbook of Mathematical Functions, Dover Publications, New York, 1965.
12. [G. L. Cangiani et al., 2002] G. L. Cangiani, R.S. Orr and C.Q. Nguyen, Methods and Apparatus for Generating a Constant-Envelope Composite Transmission Signal, Patent number WO/2002/28044, International Application No. PCT/US2001/30135, published 4 April 2002.