at 11am in Amphi Grenat

Abstract:

Cloud-based wireless networks, also known as Cloud Radio Access Networks (C-RANs), are emerging as appealing architectures for next-generation wireless/cellular systems, whereby the processing/encoding/decoding is migrated from the local base stations/radio units (RUs) to a central unit (CU) in the "cloud". The network operates via digital fronthaul links connecting the CU and the RUs (which operate as relays). The uplink and downlink are examined from a network information-theoretic perspective, with emphasis on simple oblivious processing at the RUs, which is also attractive from a practical point of view. The analytic approach, applied to simple wireless/cellular models, illustrates the considerable performance gains associated with advanced network information-theoretically inspired techniques, which also carry practical implications. An outlook pointing out interesting theoretical directions, referring also to Fog Radio Access Networks (F-RANs), concludes the presentation.

Professor Shlomo Shamai is a Distinguished Professor at the Department of Electrical Engineering at the Technion – Israel Institute of Technology. Professor Shamai is an information theorist and winner of the 2011 Shannon Award. Shlomo Shamai received the B.Sc., M.Sc., and Ph.D. degrees in electrical engineering from the Technion in 1975, 1981 and 1986, respectively. During 1975-1985 he was with the Israeli Communications Research Labs. Since 1986 he has been with the Department of Electrical Engineering at the Technion – Israel Institute of Technology, where he is now the William Fondiller Professor of Telecommunications. His research areas cover a wide spectrum of topics in information theory and statistical communications. Prof. Shamai is an IEEE Fellow and a member of the International Union of Radio Science (URSI).

**at 1pm in A301**

**Abstract:**

In compressing large files, it is often desirable to be able to efficiently recover and update short fragments of data. Classical compression schemes such as Lempel-Ziv are optimal in terms of compression rate but local recovery of even one bit requires us to decompress the entire codeword. Let us define the local decodability of a compression scheme to be the minimum number of codeword bits that need to be read in order to recover a single bit of the message/raw data. Similarly, let the update efficiency be the minimum number of codeword bits that need to be modified in order to update a single message bit. For a space efficient (entropy-achieving) compression scheme, what is the smallest local decodability and update efficiency that we can achieve? Can we simultaneously achieve small local decodability and update efficiency, or is there a tradeoff between the two? All this, and more (including a new compression algorithm) will be discussed in the talk.
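As a toy illustration (not the talk's construction), the tradeoff can be seen by compressing fixed-size blocks independently with a standard compressor: recovering one byte then requires decompressing only its block, at the cost of a worse compression rate than compressing the whole file at once. The block size and the choice of zlib are arbitrary here:

```python
import zlib

BLOCK = 1024  # bytes per independently compressed block (an arbitrary choice)

def compress_blocks(data: bytes) -> list:
    """Compress each fixed-size block on its own."""
    return [zlib.compress(data[i:i + BLOCK]) for i in range(0, len(data), BLOCK)]

def local_decode(blocks, byte_index):
    """Recover one byte by decompressing only the block containing it."""
    return zlib.decompress(blocks[byte_index // BLOCK])[byte_index % BLOCK]

message = b"the quick brown fox " * 400
blocks = compress_blocks(message)
assert local_decode(blocks, 5000) == message[5000]

# The price of locality: per-block compression cannot beat whole-file compression.
split = sum(len(b) for b in blocks)
whole = len(zlib.compress(message))
print(split, whole)
```

Update efficiency behaves similarly: rewriting one message byte touches only its block's codeword, whereas a Lempel-Ziv codeword may change almost entirely.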

**at 1pm in A301**

**Abstract:**

Point lattices are the Euclidean counterparts of error-correcting codes. One of the most challenging lattice problems is the decoding problem: given a point in Euclidean space, find the closest lattice point. Since it is a classification problem, it seems natural to tackle it with neural networks. Indeed, following the deep learning revolution that started in 2012, neural networks have been commonly used for classification tasks in many fields. The work presented in this talk is a first theoretical step towards a better understanding of the neural-network architectures best suited to the decoding problem.

Namely, we first establish a duality between the decoding problem and a piecewise linear function with a number of affine pieces growing exponentially in the space dimension.

We then prove that deep neural networks need a number of neurons growing only polynomially in the space dimension to compute some of these functions, whereas this complexity is exponential for any shallow neural network.

This has two consequences:

1 - In some cases, deep neural networks can be used to implement low-complexity decoding algorithms.

2 - Euclidean lattices provide good examples to illustrate the benefit of depth over width in representing functions with neural networks.
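Two toy illustrations of these points, under assumptions not taken from the talk: coordinate-wise rounding decodes the cubic lattice Z^n with a piecewise-linear rule whose decision cells multiply exponentially in the dimension, and composing a simple "hat" map (Telgarsky's classic depth-separation example, not the speaker's construction) produces exponentially many affine pieces from linearly many layers:

```python
def decode_integer_lattice(point):
    """Closest point of the cubic lattice Z^n: coordinate-wise rounding,
    a piecewise-linear decision rule whose number of cells grows
    exponentially with the dimension n."""
    return [round(x) for x in point]

def hat(x):
    """Triangle ('hat') map on [0, 1], computable by a tiny ReLU layer."""
    return 2 * x if x < 0.5 else 2 * (1 - x)

def deep_hat(x, depth):
    """Composing the hat map `depth` times yields 2**depth affine pieces:
    exponentially many pieces from only linearly many layers."""
    for _ in range(depth):
        x = hat(x)
    return x

print(decode_integer_lattice([0.2, -1.7, 3.5001]))  # [0, -2, 4]
```

A shallow network emulating `deep_hat` would need a number of neurons proportional to the piece count, i.e., exponential in the depth.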

CEA-LIST, Saclay

**Jeudi 18 avril, 14H, Télécom ParisTech, Amphi OPALE, 46 rue Barrault, Paris 13**

Artificial Intelligence, and more particularly Deep Learning, are enablers of new applications and allow computing systems to interact better with the real world by extracting information from signals, images and sounds. But they are very demanding in computing power, and therefore energy, which poses challenges for their use in embedded systems. This presentation will cover some use cases, tools, approaches and hardware for selecting neural networks and hardware implementations tuned for embedded applications (Deep Learning at the edge).

**Salle C48, 14H, Télécom ParisTech, 46 rue Barrault, Paris 13**

Over the last decades, data rates in telecommunication standards have increased exponentially thanks to major innovations at many levels in radio transceivers. The 5th generation of mobile standards (5G) targets even higher data rates than previous cellular standards. The increase in data rates is made possible by using larger transmission bandwidths (up to several hundred MHz) and modulation schemes with high spectral efficiency such as OFDM-based modulations. One drawback of these waveforms is that they are very sensitive to any nonlinearity in the transceiver and require highly linear transmitters to maintain signal quality and spectral purity. In parallel, one of the other major goals of 5G is to reduce energy consumption, or to keep it constant while providing more services.

In radio transceivers, there is one component for which these two constraints pull in completely opposite design directions: the power amplifier (PA). In recent years, the digital predistortion (DPD) technique has been developed to improve the PA linearity/efficiency trade-off. This technique consists in distorting the signal before amplification with the inverse characteristic of the PA, in order to correct the distortions caused by the PA during amplification. Although this technique has proven itself over the past generations of mobile telephony, it reaches its limits for 5G because of the very wide bandwidths.

In this talk we will review the main recent and promising solutions for implementing efficient predistortion techniques in future wideband transceivers. We will cover the most common approaches for PA modeling and for computing predistorter models, and discuss the implementation of predistortion in RF transmitters, highlighting the advantages and drawbacks of different approaches. We will elaborate on promising solutions addressing the bandwidth limitations of digital predistortion.
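A minimal sketch of the predistortion idea, assuming a toy memoryless cubic PA model (real DPD uses richer models with memory, such as memory polynomials): pre-expanding the signal with the approximate inverse characteristic cancels the compression to first order.

```python
import math

A3 = 0.1  # assumed cubic coefficient of a toy memoryless PA model

def pa(x):
    """Toy PA: unit linear gain with a compressive cubic term."""
    return x - A3 * x ** 3

def predistort(x):
    """Approximate pre-inverse: expand the signal so that the PA's
    compression cancels to first order in A3."""
    return x + A3 * x ** 3

# Peak error over a sine excitation, with and without predistortion.
samples = [0.8 * math.sin(2 * math.pi * k / 64) for k in range(64)]
err_raw = max(abs(pa(x) - x) for x in samples)
err_dpd = max(abs(pa(predistort(x)) - x) for x in samples)
print(err_raw, err_dpd)
```

In practice the predistorter is identified adaptively from measured PA output, which is precisely where very wide 5G bandwidths make the computation hard.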

Dependable and Scalable FPGA Computing Using HDL-based Checkpointing.

Outline:

Motivation

Related work

Main contributions:

Reduced set of state-holding elements

Consistent snapshots in FPGA checkpointing

Checkpointing architecture

Static analysis of HDL source code

Checkpointing flow for the resilience of FPGA computing

Multitasking on FPGA using HDL-based checkpointing

Hardware task migration on heterogeneous FPGA computing using HDL-based checkpointing

Python-based tool for checkpointing insertion.

Conclusion

The full abstract of the research can be found in the attachment.

**at 1.30pm in A301**

**Abstract**

Caching systems have been employed to reduce backhaul traffic and improve data availability. In many caching applications, such as sensor networks and satellite broadcasting, data at the remote sources are dynamic, and changes cannot be pushed to the cache immediately due to communication resource constraints. When a user requests data from the cache, the cached data may be outdated. To guarantee data freshness from the perspective of the caching users, optimal cache update strategies need to be designed. Previous results reveal that when the update duration for each file is constant and identical among the files, the optimal inter-update interval for each file should be proportional to the square root of its popularity.

In this talk, we address the following problem: when the update duration for each file is non-uniform and age-related, how should file update strategies be designed so that the overall freshness of the caching system can be guaranteed? We formulate the problem as a relaxed optimization problem and propose the corresponding scheduling strategy. We find that when the update duration is dynamic and age-related, the AoI-optimal inter-update interval may be far from the square-root law of previous results.

**Amphi OPALE, 14H, Télécom ParisTech, 46 rue Barrault, Paris 13**

Physical modelling of the wireless propagation channel, based on accurate representation of the environment, plays a major role in the assessment and optimization of new 5G network topologies, including both the radio access and the last-mile xhaul segments. It will be shown in this talk how the physical propagation techniques are today employed in the industry to either simulate the millimeter-wave radio links, design ultra-dense networks, or predict the massive MIMO performance in real deployment scenarios. Complementary to on-field trials, the simulation contributes to the elaboration of the mobile operator’s strategies. New infrastructure investments and new antenna’s installations are decided after determination of the best scenario. Several use cases will be illustrated, such as 5G network densification, millimeter-wave mesh backhauling, and FWA (Fixed Wireless Network) planning. Finally, it will be shortly explained how the ray-tracing technique is extended and now applied for investigation of the sub-THz spectrum (promising 6G wireless candidate).
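As background on the physics involved, the line-of-sight term of any such simulation is the Friis free-space path loss; a quick computation shows why millimeter-wave links call for network densification (distances and frequencies below are illustrative):

```python
import math

def fspl_db(distance_m, freq_hz):
    """Friis free-space path loss in dB: the line-of-sight term that a
    ray-tracing engine evaluates for the direct ray."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Same 100 m link at sub-6 GHz versus millimeter wave:
loss_2ghz = fspl_db(100, 2e9)
loss_28ghz = fspl_db(100, 28e9)
print(round(loss_28ghz - loss_2ghz, 1))  # ~22.9 dB extra at 28 GHz
```

Reflected, diffracted and scattered rays add further terms on top of this baseline, which is what the ray-tracing engines described in the talk compute from the environment model.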

Robot Operating System (ROS) is a well-known communication layer for writing robot software. Its goal is to simplify the hard task of creating truly robust, general-purpose and collaborative robot software. Collaborative means that ROS was designed specifically to let different groups, each expert in a specific field, collaborate and build upon each other's work using a common convention.

In this seminar we will first explore the ROS world and how to use it for practical projects. Then we will look at the most widely used tools for simulating and verifying various rover actors, such as sensors, actuators, mechanical components, etc. Modeling (of both the rover's components and the external environment) is heavily used in this context, as is simulation. In the LabSoc domain, we will see that the described framework can be readily combined with TTool.

**at 1pm in A301**

**Abstract:**

Distributed computing has emerged as one of the most important paradigms to speed up large-scale data analysis tasks such as machine learning. A well-known computing framework, MapReduce, can deal with tasks with large data sizes. In such systems, the tasks are typically decomposed into map and reduce functions, where the map functions can be computed by different nodes across the network, and the final outputs are computed by combining the outputs of the map functions with reduce functions. One important problem is how to conduct the computation when there are straggling nodes, which compute too slowly or fail altogether.

In this talk, the optimal storage-computation tradeoff is characterized for a MapReduce-like distributed computing system with straggling nodes, where only a part of the nodes can be utilized to compute the desired output functions. The result holds for arbitrary output functions and thus generalizes previous results that were restricted to linear functions. Specifically, in this work, we propose a new information-theoretic converse and a new matching coded computing scheme, which we call coded computing for straggling systems (CCS).
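As a sketch of the straggler problem (using plain task replication rather than the CCS coding of the talk), here each map task is placed on two nodes, so dropping any one straggling node still leaves every map output available to the reducer:

```python
from collections import Counter
from functools import reduce

def map_fn(chunk):
    """Map: per-chunk word counts."""
    return Counter(chunk.split())

def reduce_fn(a, b):
    """Reduce: merge partial counts."""
    return a + b

chunks = ["a b a", "b c", "a c c", "b b a"]

# Place every map task on two nodes (replication factor r = 2), so any
# single straggling node can be ignored while all chunks stay covered.
placement = {0: [0, 1], 1: [1, 2], 2: [2, 3], 3: [3, 0]}
straggler = 2
survived = {t for node, tasks in placement.items()
            if node != straggler for t in tasks}
assert survived == {0, 1, 2, 3}

total = reduce(reduce_fn, (map_fn(chunks[t]) for t in survived))
print(total)
```

Coded schemes such as CCS achieve the same resilience with less redundant storage than naive replication, which is the tradeoff the talk characterizes.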

**Amphi OPALE, 14H, Télécom ParisTech, 46 rue Barrault, Paris 13**

When we browse the internet, we expect that our social network identities and web activities will remain private. Unfortunately, in reality, users are constantly tracked on the internet. As web tracking technologies become more sophisticated and pervasive, there is a critical need to understand and quantify web users' privacy risk. In other words, what is the likelihood that users on the internet can be uniquely identified from their online activities?

This talk provides an information theoretic perspective on web privacy by considering two main classes of privacy attacks based on the information they extract about a user. (i) Attributes capture the user's activities on the web and could include its browsing history or its memberships in groups. Attacks that exploit the attributes are called “fingerprinting attacks,” and usually include an active query stage by the attacker. (ii) Relationships capture the user's interactions with other users on the web such as its friendship relations on a certain social network. Attacks that exploit the relationships are called “social network de-anonymization attacks.” For each class, we show how information theoretic tools can be used to design and analyze privacy attacks and to provide explicit characterization of the associated privacy risks.
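A back-of-the-envelope sketch of fingerprinting risk (the attribute frequencies below are hypothetical): each observed attribute shared by a fraction p of users reveals -log2(p) bits, and uniquely identifying one user among N requires about log2(N) bits.

```python
import math

def surprisal_bits(p):
    """Bits of identifying information revealed by an attribute value
    shared by a fraction p of users."""
    return -math.log2(p)

# Hypothetical fingerprint: browser build seen in 5% of users, timezone
# in 25%, a rare font in 1% (frequencies invented for illustration).
attrs = [0.05, 0.25, 0.01]
bits = sum(surprisal_bits(p) for p in attrs)  # assumes independent attributes

needed = math.log2(1e9)  # ~30 bits to single out one user in a billion
print(round(bits, 1), round(needed, 1))
```

Here the three attributes yield about 13 bits, so by themselves they do not identify the user; fingerprinting attacks succeed by accumulating attributes until the total crosses the log2(N) threshold.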

**in A301 at 1pm**

**Abstract:**

The entropy power inequality (EPI), first introduced by Shannon (1948), states that the entropy power of the sum of two independent random variables X and Y is not less than the sum of the entropy powers of X and Y. It finds many applications and has received several proofs and generalizations, in particular for dependent variables X and Y. We propose new conditions under which the EPI holds for dependent summands and discuss the implications of this result. This is joint work with Mohammad Hossein Alamatsaz.
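In standard form, with h(X) denoting differential entropy, the inequality reads:

```latex
% Entropy power of a random vector X with density on R^n:
N(X) = \frac{1}{2\pi e}\, e^{\frac{2}{n} h(X)}
% Shannon's EPI, for independent X and Y:
N(X + Y) \;\ge\; N(X) + N(Y)
```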

The well-known uncertainty principle used in physics is based on the Kennard-Weyl inequality (1928) and was strengthened in terms of Shannon's entropy, leading to the entropic uncertainty principle (EUP). The EUP was conjectured by Hirschman (1957) and finally proved by Beckner (1975) based on Babenko's inequality with optimal constants. Beckner's proof of Babenko's inequality is extremely difficult, and the resulting derivation of the EUP is indirect (via Rényi entropies). A simple proof was recently published in Annals of Physics (2015) which turns out to be very questionable. We give a simple proof of a weaker, "local" EUP using the Hermite decomposition. This is joint work with Olivier Rioul.
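For reference, the Hirschman-Beckner statement can be written as follows (a hedge: the optimal constant depends on the Fourier convention; the unitary convention is assumed here):

```latex
% With \hat{f}(\xi) = \int f(x)\, e^{-2 i \pi x \xi}\, dx and \|f\|_2 = 1:
h\!\left(|f|^2\right) + h\!\left(|\hat{f}|^2\right) \;\ge\; \log \frac{e}{2}
```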

NOKIA Bell Labs

**Amphi OPALE, 14H30, Télécom ParisTech, 46 rue Barrault, Paris 13**

Located at the meeting point between telecom operators and over-the-top service providers, metro networks are particularly well suited to the introduction of radically accelerated dynamics in optical networks, leveraging elastic building blocks such as transponders and optical nodes. In this talk, we review innovative solutions which could be used to address some of the challenges of metro networks in the short to medium term (e.g., 2-5 years from now). In particular, we discuss how to mitigate filter impairments thanks to monitoring. We then highlight how machine learning could automate optical networks.

Summary:

Recent wireless communication standards (4G, 5G) need dynamic adjustments of transmission parameters (e.g., modulation, bandwidth), making traditional static scheduling approaches less and less efficient.

To schedule these applications we designed Odyn, a hybrid approach for the scheduling and memory management of periodic dataflow applications on parallel, heterogeneous, Non-Uniform Memory Architecture (NUMA) platforms.

In Odyn, the ordering of tasks and memory allocation are distributed and computed simultaneously at run-time for each Processing Element. We also propose a mechanism to prevent deadlocks caused by attempts to allocate buffers in size-limited memories.

Fellow of Academy of Engineering Singapore, Fellow of IEEE, President, IEEE Circuits and Systems Society

**Comelec general seminar: "Energy Efficient System Architecture for Devices in Artificial Intelligence-of-Things"**

**Amphi THEVENIN, 14H, Télécom ParisTech, 46 rue Barrault, Paris 13**

Abstract

Internet-of-Things (IoT) is the inter-networking of physical devices, vehicles, buildings, and objects with embedded sensors. It is estimated that by 2020 there will be more than 34 billion IoT devices connected to the Internet, and nearly $6 trillion will be spent on IoT solutions over the next five years. Artificial Intelligence (AI), on the other hand, is intelligence demonstrated by machines that work and react like humans. The combination of AI and IoT gives birth to the Artificial Intelligence-of-Things (AIoT). AIoT devices differ from IoT devices in that they not only sense, store, and transmit data but also analyze and act on it, i.e., an AIoT device makes decisions or performs tasks much as a person would. The enabling technology for AIoT devices is embedded AI. This talk will cover an energy-efficient system architecture that utilizes an event-driven signal representation. The event-driven signal representation enables data compression at the input source, which greatly reduces the power needed for data transmission and processing. We will show by example that the event-driven system significantly improves energy efficiency and is well suited for AIoT applications.

Biography:

Dr. Yong Lian received the B.Sc. degree from the College of Economics & Management of Shanghai Jiao Tong University in 1984 and the Ph.D. degree from the Department of Electrical Engineering of the National University of Singapore (NUS) in 1994. His research interests include low-power techniques, continuous-time signal processing, biomedical circuits and systems, and computationally efficient signal processing algorithms. His research has been recognized with more than 20 awards, including the 1996 IEEE Circuits and Systems Society Guillemin-Cauer Award, the 2008 Multimedia Communications Best Paper Award from the IEEE Communications Society, the 2011 IES Prestigious Engineering Achievement Award, the 2013 Outstanding Contribution Award from the Hua Yuan Association and Tan Kah Kee International Society, and the 2015 Design Contest Award at the 20th International Symposium on Low Power Electronics and Design. He is also the recipient of the National University of Singapore Annual Teaching Excellence Awards in 2009 and 2010, respectively.

Dr. Lian is the President of the IEEE Circuits and Systems (CAS) Society, a member of the IEEE Fellow Committee, a member of the IEEE Biomedical Engineering Award Committee, and a member of the Steering Committee of the IEEE TBioCAS. He was the Editor-in-Chief of the IEEE TCAS-II from 2010 to 2013, Vice President for Publications of the CAS Society, Vice President for the Asia Pacific Region, Chair of the BioCAS and DSP Technical Committees, and founder of several conferences including BioCAS, ICGCS, and PrimeAsia.

**Screaming Channels: When Electromagnetic Side Channels Meet Radio Transceivers**

From how far away can we mount electromagnetic side-channel attacks? Are we limited to close physical proximity, or can we go much further? During this seminar we will see how mixed-signal chips with radios (e.g., BLE) unintentionally transmit side-channel information together with the intended radio signals, at a considerable distance. We will show how we discovered this type of leak and how we can exploit it to break AES at several meters (we have a proof of concept at 10 m in an anechoic room).

The session will be interactive, with some demos.

LTCI / Comelec / SSH

**Amphi OPALE, 14H, Télécom ParisTech, 46 rue Barrault, Paris 13**

**"Fault injection attacks and countermeasures: past, present, future"**

Fault injection attacks are extremely powerful techniques for extracting secrets from an integrated circuit. The very first countermeasures, developed some twenty years ago, laid the foundations of protection strategies. This seminar will begin by presenting them, and we will propose a classification. We will also compare the cost of each countermeasure and analyze its security level, both against the threats that existed when it was published and against more recent threats.

In recent years, intentional electromagnetic disturbances have attracted great interest as a means of fault injection, both for practical reasons and, above all, for their potential to bypass certain protection strategies. This creates a need to understand precisely the impact of such injections inside integrated circuits. However, state-of-the-art characterization and modeling methods have proved incomplete; in the second part of this seminar we will present the improvements we have made to them and the results thus obtained.

Title: Performance Evaluation of NoCs Using Network Calculus

Keywords: NoCs, Wormhole routing, backpressure, timing analysis, delay bounds

Summary:

Conducting worst-case timing analyses for wormhole Networks-on-Chip (NoCs) is fundamental to guaranteeing real-time requirements, but it is known to be challenging due to the complex congestion patterns that can occur. In that respect, we have introduced a new buffer-aware timing analysis of wormhole NoCs based on Network Calculus. Our main idea consists in considering the flow-serialization phenomena along the path of a flow of interest (foi), by paying the bursts of interfering flows only at the first convergence point, and refining the interference patterns for the foi to account for the limited buffer size. Moreover, we aim to handle this issue for a large panel of wormhole NoCs. The derived delay bounds are analyzed and compared to available results of existing approaches based on Scheduling Theory as well as Compositional Performance Analysis (CPA). In doing so, we have highlighted a noticeable improvement in delay-bound tightness compared to the CPA approach, and the inherently safe bounds of our proposal compared to Scheduling Theory approaches. Finally, we perform experiments on a manycore platform to confront our timing-analysis predictions with experimental data and assess their tightness.
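For background, the basic Network Calculus bound such analyses build on: a flow constrained by a token-bucket arrival curve a(t) = sigma + rho*t crossing a rate-latency service curve b(t) = R*(t - T)+ incurs a delay of at most sigma/R + T. A minimal sketch with illustrative numbers (not taken from the talk):

```python
def delay_bound(sigma, rho, R, T):
    """Worst-case delay of a (sigma, rho) token-bucket-constrained flow
    through a rate-latency server beta_{R,T}: sigma / R + T."""
    assert rho <= R, "unstable: arrival rate exceeds service rate"
    return sigma / R + T

# Illustrative numbers: burst of 64 flits, long-term rate 0.2 flit/cycle,
# router serving 1 flit/cycle after 8 cycles of latency.
print(delay_bound(sigma=64, rho=0.2, R=1.0, T=8))  # 72.0 cycles
```

The buffer-aware analysis in the talk tightens such bounds by charging interfering bursts only once along the path of the flow of interest.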

**in A301 at 1.30pm**

**Abstract:**

Physically unclonable functions (PUFs) are used in various applications requiring robust authentication. These systems exploit unpredictable process variations in electronic circuits. These process variations uniquely identify the produced hardware, which exhibits distinct properties in terms of, for example, delay propagation inside the circuit. By measuring and exploiting these properties, one can determine a "fingerprint" of the circuit, which cannot be physically replicated. This fingerprint can then be used, for instance, to produce a cryptographic key. The advantage is that this key does not need to be explicitly stored, which reduces the security risk. Other applications include challenge-response protocols, where the responses are determined by the physical properties of the circuit.

For a given type of PUF, the Loop-PUF, these delay-propagation differences can be modeled by n Gaussian random variables. A challenge corresponds to a vector of +/-1 values, and an identifier bit is the sign of the signed sum of the Gaussian realizations, with signs corresponding to those of the challenge vector. We try to address the following question: what is the joint entropy of these sign bits?

The exact calculation of the maximum entropy, when considering the set of all possible challenges, can be carried out only for very small values of n. We provide a combinatorial extension that yields the exact values for n = 3 and 4. For n greater than or equal to 5, the method soon becomes intractable and one must resort to numerical computations. The value of the maximum entropy can be estimated reliably by defining equivalence classes of challenges corresponding to the same value of the joint probabilities. This method was found to be numerically tractable for values of n up to 7. Asymptotic expressions for the max-entropy are found using the theory of threshold Boolean functions.
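A Monte Carlo sketch of the model (an illustration, not the combinatorial method of the talk): sample the n Gaussian delays, form the sign bits over a set of challenges, and estimate the joint entropy empirically.

```python
import math
import random
from itertools import product

random.seed(1)
n = 3
# Challenges in {+1, -1}^n with the first coordinate fixed to +1
# (a challenge and its negation give complementary bits).
challenges = [(1,) + c for c in product((1, -1), repeat=n - 1)]

def puf_bits(delays):
    """Identifier bits: signs of the challenge-weighted delay sums."""
    return tuple(int(sum(c * d for c, d in zip(ch, delays)) > 0)
                 for ch in challenges)

# Empirical joint entropy of the sign-bit patterns over random delay draws.
trials = 100_000
counts = {}
for _ in range(trials):
    delays = [random.gauss(0.0, 1.0) for _ in range(n)]
    pattern = puf_bits(delays)
    counts[pattern] = counts.get(pattern, 0) + 1

H = -sum(k / trials * math.log2(k / trials) for k in counts.values())
print(round(H, 2))  # joint entropy in bits, at most len(challenges)
```

The estimate is limited by the number of distinct sign patterns the Gaussian geometry allows, which is exactly the combinatorial object (threshold Boolean functions) analyzed in the talk.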

Laboratoire de Physique des Lasers, Université Paris 13

**Amphi JADE, 14H, Télécom ParisTech, 46 rue Barrault, Paris 13**

**"Optical frequency metrology and compact frequency combs"**

Compact frequency combs are metrological tools intended to be integrated into embedded systems dedicated, among other things, to clock generation for space applications, to precision spectroscopy and fundamental tests requiring frequency references, and to the generation of ultra-stable optical/microwave/terahertz waves for telecommunication applications (radar, wireless communication, coherent optical communications, etc.).

In the first part of the seminar, I will present the main principles of time-frequency metrology: the experimental tools (reference, transfer cavity) and the mathematical tools (Allan variance) needed for the metrological characterization of optical oscillators, through two examples of laser frequency stabilization techniques: one on an ultra-stable cavity (Pound-Drever-Hall) and the other on a molecular transition using saturated absorption in a cell.

In the second part, I will present my research on compact frequency combs and the results obtained in stabilizing and referencing passively mode-locked semiconductor lasers to a fiber transfer cavity referenced to an acetylene transition detected by saturated absorption. The advantages of a mixed stabilization combining optoelectronic stabilization and comb narrowing by optical injection will be discussed.

**at 11am in F900**

Abstract:

We consider transmission over a cloud radio access network (C-RAN), focusing on the framework of oblivious processing at the relay nodes (radio units), i.e., the relays are not cognizant of the users' codebooks. This approach is motivated by future wireless communications (5G and beyond), and the theoretical results connect to a variety of different information-theoretic models and problems. First it is shown that relaying à la Cover-El Gamal, i.e., compress-and-forward with joint decompression and decoding, which reflects 'noisy network coding,' is optimal. The penalty of obliviousness is also demonstrated to be at most a constant gap when compared to cut-set bounds. Naturally, due to the oblivious (nomadic) constraint, the C-RAN problem intimately connects to Chief Executive Officer (CEO) source coding under a logarithmic-loss distortion measure. Furthermore, we identify and elaborate on some interesting connections with the distributed information bottleneck model, for which we characterize optimal tradeoffs between rates (i.e., complexity) and information (i.e., accuracy) in the discrete and vector Gaussian frameworks. Further connections to 'information combining' and 'common reconstruction' are also pointed out. In the concluding outlook, some interesting problems are mentioned, such as the characterization of the optimal input distributions under users' power limitations and rate-constrained compression at the relay nodes.
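For reference, the single-encoder information bottleneck that the distributed model generalizes can be stated as follows (standard formulation, not the talk's distributed variant):

```latex
% U is a rate-limited description of the observation Y, judged by what it
% retains about the remote source X, with Markov chain X -- Y -- U:
I(R) \;=\; \max_{P_{U\mid Y}\,:\; I(U;Y)\,\le\, R} \; I(U;X)
```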

---------------------------------------------------------------------------

Joint work with: I.E. Aguerri (Paris Research Center, Huawei France), A. Zaidi (Université Paris-Est, Paris) and G. Caire (USC-LA and TUB, Berlin).

The research is supported by the European Union's Horizon 2020 Research and Innovation Programme: no. 694630.

tomb (Linux only) is a cryptographic utility based on the Linux Unified Key Setup (LUKS) standard and the disk-encryption subsystem of the Linux kernel (the dm-crypt device mapper). It can be used to encrypt directories, turning them into binary files. It is very well designed and easy to use. pass is a password manager with many interesting characteristics: it is open source, does not rely on (more or less trustworthy) third-party servers, and is integrated with git and tomb, supported by Firefox/Chrome extensions, etc.

This seminar will briefly present these two tools and how they can be used together for the highest security of your personal data. Then we will demonstrate them by installing all the needed components, generating the keys, and using them in real use cases.

TL2E - Sorbonne Université / Laboratoire d'Electronique & Electromagnétisme (L2E)

**Amphi B310, 14H, Télécom ParisTech, 46 rue Barrault, Paris 13**

**"Spatial Data Focusing: an alternative to Beamforming for geocasting scenarios"**

The capability of an antenna to focus radiated signals in a well-defined direction is fundamentally limited by its size (the smaller, the less directive), as a result of diffraction or, equivalently, owing to the properties of the Fourier transform. This applies to single antennas as well as to arrays of multiple antennas.

In this seminar, a Spatial Data Focusing technique is introduced as an alternative approach to overcome the beamwidth limitations of antenna arrays due to their finite aperture size. The proposed approach aims to focus the transmitted data rather than the transmitted power. This scheme enables wireless broadcast of information to specific spatial locations using fewer antenna elements than classical beamforming techniques. Different configurations for implementing this scheme will be discussed, and it will be shown that focusing the data is spatially more selective than focusing the power.
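The aperture-beamwidth limitation can be sketched numerically: the half-power beamwidth of a uniform linear array shrinks as the element count grows (illustrative parameters; half-wavelength spacing assumed):

```python
import cmath
import math

def array_factor(n, d, theta):
    """Normalized broadside pattern of an n-element uniform linear array
    with element spacing d (in wavelengths), at angle theta (rad)."""
    psi = 2 * math.pi * d * math.sin(theta)
    return abs(sum(cmath.exp(1j * k * psi) for k in range(n))) / n

def half_power_beamwidth(n, d=0.5, steps=20_000):
    """Sweep from broadside to the first -3 dB angle, then double it."""
    for s in range(1, steps):
        theta = s * (math.pi / 2) / steps
        if array_factor(n, d, theta) <= 1 / math.sqrt(2):
            return 2 * theta
    return math.pi

bw4 = half_power_beamwidth(4)
bw16 = half_power_beamwidth(16)
print(round(math.degrees(bw4), 1), round(math.degrees(bw16), 1))
```

Narrow beams thus demand many elements; Spatial Data Focusing sidesteps this scaling by making the decodability of the data, rather than the power pattern, the spatially selective quantity.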

**in A301 at 10am**

**Abstract**

5G systems are expected to carry a 1000-fold increase in mobile data traffic compared to 2015, with many more mobile devices per unit area and decreased latency, especially for M2M/D2D communications. Among the front-runner technologies to achieve such lofty aims are multi-tier heterogeneous networks, which combine outdoor RF macrocells with indoor gigabit (wide-band) small cells, and massive MIMO. These technologies involve very high carrier frequencies with massive bandwidths, extreme base station and device densities, and unprecedented numbers of antennas.

In this talk, I will go over the work done in my group at the University of Waterloo in signal processing and algorithms design and analysis for 5G networks. The talk is divided into three main parts. First, I will briefly talk about hybrid beamforming for single and multi-user massive MIMO.

Second, I will present a brief analysis of some HetNets from the physical layer perspectives (i.e., outage probability and diversity/degrees of freedom analysis). Lastly, I will discuss techniques to simplify the detection (combined with decoding) algorithms at the receiver side.

Massive MIMO is used primarily to combat the high absorption of mm-wave channels; however, this comes at a price: namely, a large number of (expensive) RF chains (i.e., DACs and PAs) as well as a huge overhead for training and channel-knowledge feedback. Hybrid analog (RF) and digital (baseband) beamforming allows one to reduce the required number of RF chains and to operate with limited feedback. An appropriate choice of beamforming codebooks combined with selection algorithms can allow such simplification with a limited loss compared to the optimal solution.

Conventionally, HetNets are adopted to broaden the network coverage and alleviate the near-far problem in cellular networks.

In the literature, the cooperative benefits of HetNets are limited to load balancing and traffic offloading, or sharing resources. I will talk about the cooperation benefits in HetNets in terms of outage probability and diversity order. Finally, I will talk about how to reduce the complexity of near-optimal detection algorithms at the receiver side using established techniques such as lattice reduction and conditional optimization. **Bio:** Mohamed Oussama Damen is an Electrical and Computer Engineering Professor at the University of Waterloo.

Professor Damen has an extensive background in research positions at multiple academic institutions, including the École Nationale Supérieure des Télécommunications in Paris, France; the University of Minnesota; and the University of Alberta. In June 2004, he joined the University of Waterloo, where he held the Nortel Networks Associate Chair in Advanced Telecommunications from April 2005 to April 2010.

His current research interests include coding theory (particularly lattices and coding and decoding algorithms), cross-layer optimization, multiple-input multiple-output (MIMO) and space-time communications, multiuser detection, and wireless communications.

Professor Damen has received several awards, including the University of Waterloo ECE Research Excellence Award in 2007, the Province of Ontario Early Researcher Award (2007-2010), and a Junior Research Fellowship from the French Research Ministry (1996-1999). He has published numerous journal and conference articles with the IEEE, of which he is currently a Senior Member.

**at 2pm in Amphi B312, Télécom ParisTech, 46 rue Barrault, Paris 13**

The need for measurement systems operating fully autonomously has existed for a long time, but until now it could only be met by systems interrogated on site. The acquired data were therefore not always up to date and provided only a snapshot of the explored environment rather than continuous monitoring of it.

The arrival of autonomous vehicles integrating modern sensor technologies (LIDAR, radar, video, etc.) opens new perspectives in this domain. These devices constitute increasingly sophisticated means of acquiring information of all kinds in order to explore a given environment. The use of sensors coupled with an autonomous system makes it possible to carry out a mission without external intervention. This approach frees us from the constraints of existing infrastructure or of difficult communications, for example in environments whose infrastructure has failed or been destroyed, as in situations following accidents or disasters. The collected information serves a dual purpose: on the one hand, understanding and modeling the environment so that the mission can be carried out properly, and on the other hand, reusing this information in a broader decision-support context.

The processed data are produced by a set of varied sensors deployed in networks for real-time collection. The data produced are of various types: distance data from ultrasonic sensors (time of flight), distance data from laser optical sensors (LIDAR), position and attitude data produced by inertial systems (accelerometers, magnetometers, gyrometers, etc.), odometry data, and environmental data such as temperature, pressure, etc.

The difficulties lie first in the use of low-power processors (limited computing capacity), for reasons of energy optimization in the context of a safety-critical embedded system, and in accounting for the uncertainty of the data. This difficulty concerns the reduction of the acquired data and their processing, as well as their protection by logical and cryptographic mechanisms.

Another interesting question concerns the use of the acquired data: when these data must be protected by cryptographic techniques, how can they be made usable by the autonomous system for its own navigation while protecting them against unwanted leaks?

After a review of the difficulties and constraints, several avenues of resolution will be discussed. Finally, the current state of the project and of the various prototypes will be presented.

**at 2.30pm in A301**

**Abstract:**

The main goal of future cellular communication systems is to handle the high data-rate requirements of users with a reasonable latency. There are two main limiting factors for high data-rate communication: fading and interference. While fading reduces the coverage area and reliability of any point-to-point connection, e.g. from a mobile user to a base station (BS), interference constrains the reuse of spectral resources (frequency slots, time, etc.) in space, hence restricting the overall spectral efficiency expressed in bits/sec/Hz/BS. Joint processing of the signals received by multiple BSs, which enables the exploitation of the correlation between received signals, is one method to reduce the detrimental effect of interference. In this talk, I will present coding schemes that enable such joint processing: one is based on cooperation between BSs and the other two are based on cloud radio access networks (C-RAN). I will also present lower bounds on the achievable degrees of freedom (DoF).