**Involved members: J.-C. Belfiore, P. Ciblat, W. Hachem, G. Rekaya-Ben Othmann, O. Rioul, P. Solé, A. Tchamkerten, M. Wigger**

**Fundings:** European fundings (NoE NEWCOM++, CELTIC/SASER), ANR (ORIANA, RISC, TCHATER), FUI (100GFLEX), and a few Telecom foundation PhD fellowships.

*Asynchronism mitigation:* Synchronization is an important component of any communication system and one of the main thrusts of our research effort. To understand its importance, it is helpful to consider two opposite types of applications. In the first type, data is transmitted on a continuous basis; examples include voice and video. The cost of initially acquiring synchronization, say by sending a pilot sequence, is small since it is amortized over the many symbols transmitted. In the second type, transmissions are very bursty, with small amounts of data transmitted once in a long while; examples include sensor networks whose nodes sporadically report measured data. Here the cost of acquiring synchronization is more significant because the number of information bits per burst is small. What are the fundamental limitations due to a lack of a priori synchrony between the transmitter and the receiver in bursty communication? This question has been investigated for point-to-point communication by considering an extension of Shannon's classical communication model. In these works, fundamental trade-offs between communication rate, asynchronism level, delay, error probability, and output sampling rate have been derived, and the corresponding optimal communication strategies have been investigated. One of the many surprising conclusions is that training-based schemes, where synchronization and information transmission are carried out over separate degrees of freedom, can be very suboptimal. In these cases, efficient codes integrate error correction and detection properties, which prompted the investigation of bounded-weight and spherical codes.
We also investigated the impact of asynchronism on multiuser communication. Specifically, for relay networks without a direct line of sight between the source and the destination, we constructed delay-tolerant codes based on cross products of cyclic division algebras and analyzed their performance in terms of outage probability. These codes have the compelling property of achieving full rate, full diversity, and a non-vanishing determinant, similarly to perfect codes in the synchronous case.
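As a toy illustration of the training-based baseline discussed above (the approach that integrated synchronization/coding schemes can outperform), the following sketch, with purely hypothetical parameters, locates a known pilot sequence in a noisy sample stream by sliding correlation:

```python
import random

def sync_offset(stream, pilot):
    """Return the offset whose window correlates best with the pilot."""
    scores = [sum(s * p for s, p in zip(stream[k:k + len(pilot)], pilot))
              for k in range(len(stream) - len(pilot) + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

random.seed(0)
pilot = [1, -1, 1, 1, -1, -1, 1, -1]    # known training sequence
offset = 37                              # unknown to the receiver
stream = ([random.gauss(0, 0.3) for _ in range(offset)]
          + [p + random.gauss(0, 0.3) for p in pilot]
          + [random.gauss(0, 0.3) for _ in range(20)])
print(sync_offset(stream, pilot))        # -> 37
```

At low noise the correlation peak pinpoints the burst, but the pilot symbols consume degrees of freedom that carry no information, which is precisely the overhead that the integrated schemes above avoid.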

*Fading mitigation:* A popular way to cope with fading is to introduce space diversity (through so-called MIMO techniques). To this end, we focused on space-time code design and introduced, for the first time, codes over rings with non-Hamming metrics, which are then used to construct space-time codes by a concatenation process similar to the so-called Construction A of lattices. The implementation of space-time codes requires low-complexity decoding algorithms, in particular for MIMO channels where the optimal decoding rule has a complexity that grows exponentially with the number of antennas. In this context, we proposed three very efficient algorithms: the SB-Stack, the algebraic reduction, and the augmented LLL reduction. For example, the SB-Stack is a sequential decoder which combines the tree search strategy of the original stack decoder with the search region of the sphere decoder. Finally, we investigated space diversity through relaying schemes. In this framework, we proposed a new protocol, called DoQF, which combines a decoding step with a quantization step. We proved that the performance of this protocol comes very close to the optimal Diversity-Multiplexing Trade-off (DMT), and hence outperforms existing protocols. According to the DMT, the best known protocol for cooperative communication is the Dynamic Decode-and-Forward (DDF) protocol. We were interested in its practical implementation when the source does not know whether a relay is present. We defined a new metric called macro diversity (derived from long-term SNRs) which represents the number of links necessary to achieve a given QoS when all other links experience very low SNRs. We proposed so-called patching techniques to maximize both micro and macro diversity, which resulted in very efficient schemes: patched Monostream, patched Alamouti, patched Golden code, and patched Silver code.
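The principle of space-time coding can be conveyed with the classical 2×1 Alamouti code (a standard textbook construction, not one of the codes above): its orthogonal structure lets a simple linear combiner recover both symbols with full diversity and no exponential-complexity joint search:

```python
def alamouti_encode(s1, s2):
    """Two complex symbols mapped to two antennas over two time slots."""
    return [(s1, s2), (-s2.conjugate(), s1.conjugate())]

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining for one receive antenna; channel (h1, h2) known."""
    g = abs(h1) ** 2 + abs(h2) ** 2          # diversity gain |h1|^2 + |h2|^2
    s1 = (h1.conjugate() * r1 + h2 * r2.conjugate()) / g
    s2 = (h2.conjugate() * r1 - h1 * r2.conjugate()) / g
    return s1, s2

# Noiseless sanity check with an arbitrary flat-fading channel
h1, h2 = 0.8 - 0.3j, 0.2 + 0.9j
s1, s2 = 1 + 1j, -1 + 1j                     # QPSK symbols
(x11, x12), (x21, x22) = alamouti_encode(s1, s2)
r1 = h1 * x11 + h2 * x12                     # received in slot 1
r2 = h1 * x21 + h2 * x22                     # received in slot 2
print(alamouti_combine(r1, r2, h1, h2))      # recovers (s1, s2)
```

The cross terms cancel exactly, so a symbol-wise slicer suffices at the output of the combiner; this is the kind of decoding-complexity reduction that the SB-Stack and reduction algorithms above pursue for larger, non-orthogonal codes.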

*Optical fiber based communications:* Due to the emergence of new applications (e.g., video streaming, cloud computing), the amount of data carried by optical core networks has strongly increased. To handle the saturation of the core network, advanced digital communication tools have to be brought to the optical communications field. Indeed, thanks to recent technological progress, the information passing through the optical fiber can now be carried by both the intensity and the phase, so that standard wireless digital communication techniques can now be advocated for optical communications. In this new paradigm, we have focused on various facets. Phase estimation is a key issue in optical communication. We studied low-complexity phase estimators for phase-uncertain channels under BPSK and QAM modulations, and proved their asymptotic optimality via a conditional gradient descent algorithm. Interestingly, these estimators can have very simple expressions depending on the modulation. Inter-symbol interference (generated by the dispersion of the fiber or by polarization mixing) and the carrier frequency offset are other major issues, often mitigated through sample-by-sample estimation techniques. We proposed an alternative method which exploits the fact that the channel often varies slowly over time, allowing estimation methods that operate over blocks of samples; this drastically reduces the estimation error for a given number of samples \cite{jlt-selmi}. It is well known that polarization-multiplexed optical systems can be seen as MIMO systems, thereby allowing the use of space-time coding techniques for which the team has a renowned expertise. We exhibited space-time codes that efficiently mitigate polarization dependent loss (PDL) impairments, and characterized their performance in terms of error and outage probability. Ultimately, the limits of optical communication are obtained through information-theoretic arguments. In this context, we derived channel capacities in the presence of nonlinear impairments, and also when only intensity-based detection is performed.
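The team's estimators are derived via conditional gradient descent; as an illustrative stand-in with hypothetical parameters, here is the classical modulation-removal idea for BPSK (square the samples to strip the ±1 symbols, then halve the averaged angle), one of the simple closed forms such estimators can take:

```python
import cmath
import random

def bpsk_phase_estimate(samples):
    """Squaring removes the +/-1 BPSK modulation; halving the angle of the
    summed squares recovers the carrier phase (up to the inherent pi
    ambiguity of BPSK, i.e. valid for |phase| < pi/2)."""
    return 0.5 * cmath.phase(sum(r * r for r in samples))

random.seed(1)
theta = 0.4                                   # unknown carrier phase (rad)
rx = [random.choice([-1, 1]) * cmath.exp(1j * theta)
      + complex(random.gauss(0, 0.1), random.gauss(0, 0.1))
      for _ in range(200)]
est = bpsk_phase_estimate(rx)                 # close to 0.4
```

The estimate requires only one pass over a block of samples, in the spirit of the block-based processing advocated above over sample-by-sample tracking.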

**Involved members: J.-C. Belfiore, P. Ciblat, W. Hachem, G. Rekaya-Ben Othmann, A. Tchamkerten, M. Wigger**

**Fundings:** European fundings (FP7/SMARTEN, FP7/LEXNET), DGA PhD fellowship, Emergence Grant from the City of Paris (PINS).

**Interference management:** A major impairment in modern networks (cellular or ad hoc) is user interference (rather than noise). Several ways exist to handle it. The first is Physical Layer Network Coding (PLNC). We considered cooperative systems in which several source/destination pairs communicate at the same time using the same physical resources. In that case, interference drastically degrades the system performance; PLNC aims to optimize the throughputs and enhance the overall performance. The Compute-and-Forward protocol was recently proposed for PLNC. We studied the optimization and implementation of this protocol using algebraic tools and lattice theory, and also proposed optimal and sub-optimal decoding methods at the relay side. Another way to handle interference is through power and bandwidth allocation. Assuming a fast fading channel model, with or without decision feedback, we designed new algorithms for mitigating multi-cell interference and computed the best frequency reuse factor. We showed that a proper use of feedback can significantly increase the throughput per unit energy. When the network is centralized (typically in cellular mobile systems), feedback signals can be sent to the transmitters to help interference management. We studied the benefits of such feedback signals on the largest achievable data rates (the capacity) of broadcast channels (BC) and multi-access channels (MAC). We proved that an arbitrarily small (but nonzero) number of feedback bits strictly increases the capacity of \emph{strictly less-noisy} BCs. We further showed that for some Gaussian BCs the capacity gain can even be unbounded when the feedback is perfect, while for others feedback makes it possible to achieve the same rates as if the two receivers could cooperate in their decoding. To prove these results we proposed new coding schemes. For some memoryless BCs our new schemes are optimal and achieve the feedback capacity. We further showed that an estimation-based coding scheme is optimal within a wide class of linear-feedback schemes.
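For Compute-and-Forward, the standard optimization target is the Nazer-Gastpar computation rate R(h, a) = ½ log₂⁺((‖a‖² − P(hᵀa)²/(1 + P‖h‖²))⁻¹): the relay searches for the integer coefficient vector a maximizing it. A brute-force sketch for a real-valued channel with a small search radius (parameters hypothetical):

```python
import math
from itertools import product

def cf_rate(h, a, P):
    """Nazer-Gastpar computation rate for integer coefficients a (real model)."""
    ha = sum(x * y for x, y in zip(h, a))
    hh = sum(x * x for x in h)
    aa = sum(x * x for x in a)
    denom = aa - P * ha * ha / (1 + P * hh)   # always > 0 by Cauchy-Schwarz
    return 0.5 * math.log2(1 / denom) if denom < 1 else 0.0

def best_coeffs(h, P, radius=3):
    """Brute-force search over small integer vectors (zero vector excluded)."""
    cands = [a for a in product(range(-radius, radius + 1), repeat=len(h))
             if any(a)]
    return max(cands, key=lambda a: cf_rate(h, a, P))
```

For h = (1, 2) at high SNR the search returns a vector aligned with the channel, e.g. `best_coeffs([1.0, 2.0], 100.0)` is proportional to (1, 2): decoding that integer combination incurs the least self-noise. In practice this search is carried out with lattice algorithms rather than enumeration, which is where the algebraic tools above come in.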

**Distributed Information Processing:** This topic, at the boundary between signal processing, coding, and information theory, has recently seen tremendous research activity. It encompasses many different subfields, including consensus reaching, distributed computation, distributed estimation/detection, and distributed optimization, and we contributed to each of them. For consensus reaching, we proposed new distributed algorithms for maximum and average function computation that are well suited to wireless communications. For distributed computation, we characterized upper and lower bounds on the minimum amount of information that needs to flow across certain types of networks so that a receiver can reliably compute a given function of the information sources. For distributed estimation, we investigated two settings. In the first, all the nodes in a network want to estimate a certain parameter related to their observations; here we developed new algorithms based on so-called stochastic approximation and characterized their asymptotic performance. In the second, a single node (the fusion center) wants to estimate the parameter; here we investigated the performance of the optimal Neyman-Pearson detector at the fusion center. For distributed storage, we investigated a setting where the receivers, who wish to retrieve the stored or compressed data, have some a priori side-information about this data (obtained, for instance, by measuring correlated data). For a class of single-encoder two-decoder systems (the Kaspi/Heegard-Berger setup), we derived the maximum compression ratio that still allows the decoders to reconstruct their intended data with the desired precision. We also demonstrated that when the two decoders need to reconstruct their intended data perfectly, knowledge of the side-information at the encoder strictly improves the maximum compression ratio.
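The flavor of distributed averaging can be conveyed by the classical pairwise gossip scheme (a textbook baseline, not the team's specific algorithm): randomly chosen neighbours repeatedly average their values, and every node converges to the network-wide mean:

```python
import random

def average_consensus(values, edges, steps=500, seed=0):
    """Pairwise gossip: a randomly picked pair of neighbours replaces both
    of its values by their average; all nodes converge to the global mean."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(steps):
        i, j = rng.choice(edges)
        x[i] = x[j] = (x[i] + x[j]) / 2        # sum (hence mean) is preserved
    return x

out = average_consensus([3.0, 7.0, 1.0, 9.0],
                        [(0, 1), (1, 2), (2, 3), (3, 0)])  # 4-node ring
```

Each exchange preserves the network-wide sum, so the common limit is the exact average (5.0 here); only local, pairwise communication is needed, which is what makes such schemes attractive for wireless networks.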

**Involved members: J.-C. Belfiore, O. Rioul, P. Solé, A. Tchamkerten**

**Fundings:** European fundings (FP7/PHYLAWS, ENIAC), Google Award PhD fellowship, Telecom Foundation PhD fellowship.

Secure communication has been investigated at both the physical and the application layers. Security at the physical layer has recently been proposed as a means to enhance or complement security at the application layer (cryptography). A classical model is the so-called (Gaussian) wiretap channel, where a passive eavesdropper tries to decode a message sent to a legitimate receiver. For this channel we proposed code constructions based on nested lattice codes, which also operate in a MIMO setting. The code performance is related to the theta series of the lattice in the single-antenna case, and to certain zeta functions (Epstein or Solomon) in the MIMO case. These results yielded a design criterion for lattice codes. The same model in a multi-user setting has also been investigated, and the corresponding fundamental rate-equivocation trade-offs have been derived. At the application layer, we proposed an authentication protocol for RFID applications; it is one of the most efficient and lightweight protocols currently available and thwarts many active attacks, including the Mafia fraud. Finally, we investigated so-called side-channel attacks, in which information is gained from the physical implementation of a cryptosystem. We developed a hypothesis testing framework for the situation where the attack aims at extracting cryptographic keys from a device by analyzing its leakage, knowing its input or output. The correct key is distinguished from incorrect keys by selecting the key that maximizes some distinguisher, such as DPA, CPA, or MIA. We analyzed the distinguishers' efficiency for certain types of leakage (especially in the presence of countermeasures like masking) in a manner independent of the particular device.
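The key-recovery principle behind distinguishers such as CPA can be sketched as follows (toy substitution box, Hamming-weight leakage model, all parameters hypothetical): the key guess whose predicted leakage correlates best with the measured traces is declared the key:

```python
import random

HW = [bin(v).count("1") for v in range(256)]        # Hamming-weight model
SBOX = list(range(256))
random.Random(42).shuffle(SBOX)                     # toy substitution box

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5 if vx > 0 and vy > 0 else 0.0

def cpa_best_key(plaintexts, traces):
    """Return the key guess whose predicted leakage correlates best with
    the measured traces (the CPA distinguisher)."""
    def score(k):
        model = [HW[SBOX[p ^ k]] for p in plaintexts]
        return abs(pearson(model, traces))
    return max(range(256), key=score)

rng = random.Random(0)
key = 0x3C
pts = [rng.randrange(256) for _ in range(500)]
traces = [HW[SBOX[p ^ key]] + rng.gauss(0, 1.0) for p in pts]  # noisy leakage
```

With a few hundred traces the correct key stands out clearly; countermeasures such as masking aim to destroy exactly this correlation, which is why the device-independent efficiency analysis above matters.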

**Involved members: W. Hachem, O. Rioul, A. Tchamkerten**

**Fundings:** ANR chair of excellence (ACE), ANR (DIONISOS), Digiteo grant (DESIR).

The investigation of communication networks also represents an opportunity to develop analytic tools which can then be applied in other contexts, thereby spurring interdisciplinary research. An important effort has been devoted to the theory of large random matrices. Random matrices appear in a wide range of applications, from MIMO systems in communications to portfolio optimization in finance. Besides theoretical results on the behavior of the eigenvalues and eigenspaces of certain classes of such matrices, new algorithms have been designed for estimating the angles of arrival and source powers in the context of array processing, and for detecting failures in large sensor networks. Motivated by feedback communication, we proposed a natural generalization of the classical Bayesian change-point detection setup by letting the change-point be a stopping time with respect to an unobserved process. This new statistical inference framework is relevant in a number of areas, including forecasting, communication, and monitoring. We investigated optimal detection policies for different stopping times and different classes of observation processes. While most useful information-theoretic inequalities can be deduced from the basic properties of entropy or mutual information, Shannon's entropy power inequality (EPI) was an exception. We derived a unified view of the existing proofs by showing that they share two essential ingredients: a data processing argument and an integration over a path of a continuous Gaussian perturbation. Using these, we developed a new and brief proof of the EPI through a mutual information inequality. Fitts' law is a well-known model of human pointing movement in experimental psychology. We reviewed the celebrated stochastic optimized-submovement theory proposed by Meyer and showed that it implies a quasi-logarithmic (Shannon-like) model rather than a quasi-power model. Also, by testing the prediction that throughput is conserved across variations of the speed/accuracy trade-off, we found throughput to be affected by the strategy, which pleads against a currently popular definition of throughput.
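A minimal version of the Bayesian change-point setting discussed above is the classical Shiryaev recursion (the baseline with a known geometric change-point prior, not the stopping-time extension studied by the team; all parameters hypothetical): the posterior probability that the change has already occurred is updated with each observation and compared to a threshold:

```python
import math
import random

def shiryaev_stop(xs, rho, mu, threshold=0.99):
    """Stop when the posterior probability that the change has already
    occurred exceeds the threshold. Pre-change samples are N(0,1),
    post-change samples N(mu,1); the change time has a geometric(rho) prior."""
    p = 0.0
    for n, x in enumerate(xs, start=1):
        q = p + (1 - p) * rho                  # prob. change occurred by time n
        lr = math.exp(mu * x - mu * mu / 2)    # likelihood ratio f1(x)/f0(x)
        p = q * lr / (q * lr + (1 - q))        # Bayes posterior update
        if p >= threshold:
            return n
    return None

rng = random.Random(3)
change = 30                                    # true change time (mean shift 3)
xs = ([rng.gauss(0, 1) for _ in range(change)]
      + [rng.gauss(3.0, 1) for _ in range(30)])
tau = shiryaev_stop(xs, rho=0.05, mu=3.0)      # alarm shortly after time 30
```

The generalization studied by the team replaces the geometric prior by a change-point that is itself a stopping time of an unobserved process, which changes the structure of the optimal policy.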