**at 1.30pm in A301**

**Abstract**

Caching systems have been employed to reduce backhaul traffic and improve data availability. In many caching applications, such as sensor networks and satellite broadcasting, data at the remote sources are dynamic, and changes cannot be pushed to the cache immediately due to communication resource constraints. When a user requests data from the cache, the data may therefore be outdated. To guarantee data freshness from the perspective of the caching users, optimal cache update strategies need to be designed. Previous results reveal that when the update duration for each file is constant and identical across files, the optimal inter-update interval for each file should be proportional to the square root of its popularity.

In this talk, we address the following problem: when the update duration for each file is non-uniform and age-related, how can file update strategies be designed so that the overall freshness of the caching system is guaranteed? We formulate the problem as a relaxed optimization problem and propose a corresponding scheduling strategy. We find that when the update duration is dynamic and age-related, the AoI-optimal inter-update interval may be far from the square-root law of previous results.
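The square-root law mentioned above can be illustrated with a toy model (our own simplification, not the talk's formulation): minimize the popularity-weighted average age at request time subject to a total update-rate budget. The popularity values below are hypothetical.

```python
import numpy as np

# Popularity profile (hypothetical Zipf-like example values).
p = 1.0 / np.arange(1, 11)
p /= p.sum()

R = 1.0  # total update-rate budget (updates per unit time across all files)

def weighted_avg_age(weights, p, R):
    """Popularity-weighted average age at request time when file i is
    refreshed at rate f_i: a uniformly timed request sees mean age
    T_i / 2 = 1 / (2 f_i)."""
    f = R * weights / weights.sum()   # scale rates to meet the budget
    return float(np.sum(p / (2.0 * f)))

age_sqrt   = weighted_avg_age(np.sqrt(p), p, R)       # rate proportional to sqrt(popularity)
age_linear = weighted_avg_age(p, p, R)                # rate proportional to popularity
age_unif   = weighted_avg_age(np.ones_like(p), p, R)  # uniform rates

print(age_sqrt, age_linear, age_unif)
```

In this simplified model, a Lagrangian argument (or Cauchy-Schwarz) shows the optimum is attained by rates proportional to the square root of popularity, which is the square-root scaling the abstract refers to.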

**at 1pm in A301**

**Abstract:**

Distributed computing has emerged as one of the most important paradigms for speeding up large-scale data analysis tasks such as machine learning. A well-known computing framework, MapReduce, can handle tasks with large data sizes. In such systems, tasks are typically decomposed into map and reduce functions, where the map functions can be computed by different nodes across the network, and the final outputs are computed by combining the outputs of the map functions with reduce functions. One important problem is how to conduct computations when there are straggling nodes, i.e., nodes that compute too slowly or fail to work for some reason.

In this talk, the optimal storage-computation tradeoff is characterized for a MapReduce-like distributed computing system with straggling nodes, where only a subset of the nodes can be utilized to compute the desired output functions. The result holds for arbitrary output functions and thus generalizes previous results that were restricted to linear functions. Specifically, we propose a new information-theoretic converse and a new matching coded computing scheme, which we call coded computing for straggling systems (CCS).
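As a generic illustration of how coding tolerates stragglers (a hypothetical MDS-style sketch for a linear task, not the CCS scheme of the talk), one can encode the blocks of a matrix with a Vandermonde generator so that any k of the n worker results suffice to recover the product:

```python
import numpy as np

rng = np.random.default_rng(0)

# Goal: compute A @ x with n workers so that any k results suffice.
k, n = 3, 5
A = rng.standard_normal((6, 4))   # 6 rows, split into k = 3 blocks of 2
x = rng.standard_normal(4)

blocks = np.split(A, k)           # A = [A0; A1; A2]

# MDS-style encoding with an n x k Vandermonde generator matrix:
# any k of its rows are linearly independent.
G = np.vander(np.arange(1.0, n + 1), k, increasing=True)
coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]

# Each worker i returns coded[i] @ x; suppose workers 1 and 3 straggle.
survivors = [0, 2, 4]
results = np.array([coded[i] @ x for i in survivors])   # shape (k, 2)

# Decode by inverting the surviving k x k submatrix of G.
decoded = np.linalg.solve(G[survivors, :], results)
Ax = decoded.reshape(-1)          # recovered A @ x

print(np.allclose(Ax, A @ x))
```

The point of the talk is precisely that such guarantees can be extended beyond linear functions like this one.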

**Amphi OPALE, 14H, Télécom ParisTech, 46 rue barrault, Paris 13**

When we browse the internet, we expect that our social network identities and web activities will remain private. Unfortunately, in reality, users are constantly tracked on the internet. As web tracking technologies become more sophisticated and pervasive, there is a critical need to understand and quantify web users' privacy risk. In other words, what is the likelihood that users on the internet can be uniquely identified from their online activities?

This talk provides an information theoretic perspective on web privacy by considering two main classes of privacy attacks based on the information they extract about a user. (i) Attributes capture the user's activities on the web and could include their browsing history or their memberships in groups. Attacks that exploit the attributes are called “fingerprinting attacks,” and usually include an active query stage by the attacker. (ii) Relationships capture the user's interactions with other users on the web, such as their friendship relations on a certain social network. Attacks that exploit the relationships are called “social network de-anonymization attacks.” For each class, we show how information theoretic tools can be used to design and analyze privacy attacks and to provide explicit characterization of the associated privacy risks.

**in A301 at 1pm**

**Abstract:**

The entropy power inequality (EPI), first introduced by Shannon (1948), states that the entropy power of the sum of two independent random variables X and Y is not less than the sum of the entropy powers of X and Y. It finds many applications and has received several proofs and generalizations, in particular for dependent variables X and Y. We propose new conditions under which the EPI holds for dependent summands and discuss the implications of this result. This is joint work with Mohammad Hossein Alamatsaz.
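For reference, Shannon's inequality for independent $X$ and $Y$ reads, with the entropy power defined from the differential entropy $h(X)$,

$$
N(X+Y) \;\ge\; N(X) + N(Y), \qquad N(X) \triangleq \frac{1}{2\pi e}\, e^{2h(X)},
$$

with equality if and only if $X$ and $Y$ are Gaussian with proportional covariance matrices.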

The well-known uncertainty principle used in physics is based on the Kennard-Weyl inequality (1928) and was strengthened in terms of Shannon's entropy, leading to the entropic uncertainty principle (EUP). The EUP was conjectured by Hirschman (1957) and finally proved by Beckner (1975) based on Babenko's inequality with optimal constants. Beckner's proof of Babenko's inequality is extremely difficult, and the resulting derivation of the EUP is indirect (via Rényi entropies). A simple proof recently published in Annals of Physics (2015) turns out to be very questionable. We give a simple proof of a weaker, "local" EUP using the Hermite decomposition. This is joint work with Olivier Rioul.
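For reference, the EUP can be stated as follows (with the Fourier convention $\hat f(\xi)=\int f(x)\,e^{-2i\pi x\xi}\,dx$ and $\|f\|_2=1$):

$$
h\big(|f|^2\big) + h\big(|\hat f|^2\big) \;\ge\; \log\frac{e}{2},
$$

where $h$ denotes differential (Shannon) entropy; the constant $\log(e/2)$ is the optimal one established by Beckner and is attained by Gaussians.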

**in A301 at 1.30pm**

**Abstract:**

Physically unclonable functions (PUF) are used in various applications requiring robust authentication. These systems exploit unpredictable process variations in electronic circuits. These process variations uniquely identify the produced hardware, which exhibits distinct properties in terms, for example, of delay propagation inside the circuit. By measuring and exploiting these properties, one can determine a "fingerprint" of the circuit, which cannot be physically replicated. This fingerprint can then be used, for instance, to produce a cryptographic key. The advantage is that this key does not need to be explicitly stored, which reduces the security risk. Other applications include challenge-response protocols, where the responses are determined from the physical properties of the circuit.

For a given type of PUF, the Loop-PUF, these delay propagation differences can be modeled by n Gaussian random variables. A challenge corresponds to a vector of ±1 values, and an identifier bit is the sign of the signed sum of the Gaussian realizations, with signs corresponding to those of the challenge vector. We address the following question: what is the joint entropy of these sign bits?

The exact calculation of the maximum entropy, when considering the set of all possible challenges, can be carried out only for very small values of n. We provide a combinatorial extension that gives the exact values for n = 3 and 4. For n greater than or equal to 5, the method soon becomes intractable and one must resort to numerical computations. The value of the maximum entropy can be estimated reliably by defining equivalence classes of challenges corresponding to the same value of the joint probabilities. This method was found to be numerically tractable for values of n up to 7. Asymptotic expressions for the max-entropy are found using the theory of threshold Boolean functions.
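The quantity in question can be estimated for small n by plain Monte Carlo (a sketch of the setup only, not the combinatorial or equivalence-class methods of the talk):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# All 2^n challenges in {-1, +1}^n; keep one per antipodal pair, since
# negating a challenge merely flips the corresponding identifier bit.
challenges = np.array([[1 if (c >> j) & 1 else -1 for j in range(n)]
                       for c in range(2 ** n)])
challenges = challenges[challenges[:, 0] == 1]      # 2^(n-1) representatives

trials = 200_000
X = rng.standard_normal((trials, n))                # delay-difference realizations
bits = (X @ challenges.T > 0).astype(np.uint8)      # identifier bits, one per challenge

# Encode each realization's bit pattern as an integer and count occurrences.
patterns = bits @ (1 << np.arange(bits.shape[1]))
_, counts = np.unique(patterns, return_counts=True)
p = counts / trials
H = float(-(p * np.log2(p)).sum())
print(f"estimated joint entropy of the sign bits for n={n}: {H:.3f} bits")
```

Since the bits are deterministic threshold functions of a common Gaussian vector, not all 2^(2^(n-1)) patterns occur, which is why the joint entropy falls short of the naive maximum.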

**at 11am in F900,**

**Abstract:**

We consider transmission over a cloud radio access network (CRAN), focusing on the framework of oblivious processing at the relay nodes (radio units), i.e., the relays are not cognizant of the users' codebooks. This approach is motivated by future wireless communications (5G and beyond), and the theoretical results connect to a variety of different information-theoretic models and problems.

First, it is shown that relaying à la Cover-El Gamal, i.e., compress-and-forward with joint decompression and decoding, which reflects 'noisy network coding,' is optimal. The penalty of obliviousness is also demonstrated to be at most a constant gap when compared to cut-set bounds. Naturally, due to the oblivious (nomadic) constraint, the CRAN problem intimately connects to Chief Executive Officer (CEO) source coding under a logarithmic loss distortion measure. Furthermore, we identify and elaborate on some interesting connections with the distributed information bottleneck model, for which we characterize optimal tradeoffs between rates (i.e., complexity) and information (i.e., accuracy) in the discrete and vector Gaussian frameworks. Further connections to 'information combining' and 'common reconstruction' are also pointed out. In the concluding outlook, some interesting problems are mentioned, such as the characterization of the optimal input distributions under users' power limitations and rate-constrained compression at the relay nodes.

---------------------------------------------------------------------------

Joint work with: I.E. Aguerri (Paris Research Center, Huawei France), A. Zaidi (Université Paris-Est, Paris) and G. Caire (USC-LA and TUB, Berlin)

The research is supported by the European Union's Horizon 2020 Research and Innovation Programme: no. 694630.

TL2E - Sorbonne Université / Laboratoire d'Electronique & Electromagnétisme (L2E)

**Amphi B310, 14H, Télécom ParisTech, 46 rue barrault, Paris 13**

**"Spatial Data Focusing: an alternative to Beamforming for geocasting scenarios"**

The capability of an antenna to focus radiated signals in a well-defined direction is fundamentally limited by its size (the smaller the antenna, the less directive it is), as a result of diffraction or, equivalently, owing to the properties of the Fourier transform. This applies to single antennas as well as to arrays of multiple antennas.

In this seminar, a Spatial Data Focusing technique is introduced as an alternative approach to overcome the beamwidth limitations of antenna arrays due to their finite aperture size. The proposed approach aims to focus the transmitted data rather than the transmitted power. This scheme enables wireless broadcast of information to specific spatial locations using fewer antenna elements than classical beamforming techniques. Different configurations for implementing this scheme will be discussed, and it will be shown that focusing the data is spatially more selective than focusing the power.

**in A301 at 10am**

**Abstract**

5G systems are expected to support a 1000-fold increase in mobile data traffic compared to 2015, a large increase in mobile devices per unit area, and decreased latency, especially for M2M/D2D communications. Among the front-runner technologies to achieve such lofty aims are multi-tier heterogeneous networks that combine outdoor RF macrocells and indoor gigabit (wide-band) small cells, and massive MIMO. These technologies involve very high carrier frequencies with massive bandwidths, extreme base station and device densities, and unprecedented numbers of antennas.

In this talk, I will go over the work done in my group at the University of Waterloo in signal processing and algorithms design and analysis for 5G networks. The talk is divided into three main parts. First, I will briefly talk about hybrid beamforming for single and multi-user massive MIMO.

Second, I will present a brief analysis of some HetNets from the physical layer perspectives (i.e., outage probability and diversity/degrees of freedom analysis). Lastly, I will discuss techniques to simplify the detection (combined with decoding) algorithms at the receiver side.

Massive MIMO is used primarily to combat the high absorption of mm-wave channels; however, this comes at a price: a large number of (expensive) RF chains (i.e., DACs and PAs) as well as a huge overhead for training and channel-knowledge feedback. Hybrid analog (RF) and digital (baseband) beamforming allows one to reduce the required number of RF chains and to operate with limited feedback. An appropriate choice of beamforming codebooks combined with selection algorithms enables such simplification with limited loss compared to the optimal solution.

Conventionally, HetNets are adopted to broaden the network coverage and alleviate the near-far problem in cellular networks.

In the literature, the cooperative benefits of HetNets are limited to load balancing, traffic offloading, or resource sharing. I will talk about the cooperation benefits of HetNets in terms of outage probability and diversity order. Finally, I will talk about how to reduce the complexity of near-optimal detection algorithms at the receiver side using established techniques such as lattice reduction and conditional optimization.

**Bio:** Mohamed Oussama Damen is an Electrical and Computer Engineering Professor at the University of Waterloo.

Professor Damen has an extensive background in research positions at multiple academic institutions including École Nationale Supérieure des Télécommunications in Paris, France; the University of Minnesota and the University of Alberta. In June of 2004, he joined the University of Waterloo, where he then became the Nortel Networks Associate Chair in Advanced Telecommunications from April 2005 to April 2010.

His current research interests include coding theory (particularly regarding lattices, coding and decoding algorithms), cross-layer optimization, multiple-input multiple-output and space time communications, multiuser detection, and wireless communications.

Professor Damen has received several awards including the University of Waterloo ECE Research Excellence Award in 2007, Early Researcher Award in the Province of Ontario for 2007 to 2010 and the Junior Research Fellowship from the French Research Ministry in 1996 to 1999. Professor Damen has published numerous articles and journals with, and is currently a senior level member of, IEEE.

**at 2.30pm in A301**

**Abstract:**

The main goal of future cellular communication systems is to handle the high data rate requirements of users with reasonable latency. There are two main factors limiting high data rate communication: fading and interference. While fading reduces the coverage area and reliability of any point-to-point connection, e.g., from a mobile user to a base station (BS), interference constrains the reusability of spectral resources (frequency slots, time, etc.) in space, hence restricting the overall spectral efficiency expressed in bits/sec/Hz/BS. Joint processing of the signals received by multiple BSs, which enables the exploitation of correlation between received signals, is one method to reduce the detrimental effect of interference. In this talk, I will present coding schemes that enable such joint processing: one is based on cooperation between BSs, and the other two on cloud radio access networks (C-RAN). I will also present lower bounds on the achievable degrees of freedom (DoF).

**at 1pm in A301**

**Abstract:**

As wireless mobile communications witness unprecedented growth, future 5G networks will have to satisfy demands for high-volume data traffic while meeting strong requirements on latency, error rates, and energy consumption. Therefore, improving the computing capabilities of mobile devices and extending their battery lives are among the key challenges for next-generation wireless networks. In this work, energy harvesting and mobile cloud computing are exploited as two promising technologies to tackle these challenges, by designing optimal policies for resource scheduling and computation offloading.

**at 11am in Amphi Opale**

**Abstract**

This presentation aims at giving an overview of 5G from a PHY/MAC perspective (with emphasis on the differences with LTE), notably, in terms of enablers and design principles (forward compatibility). The presentation follows what has been standardized for 3GPP New Radio Rel. 15.

The outline of the presentation is as follows:

**Introduction **

· Background (standardization process, requirements/levers, LTE vs 5G)

**Part I: 5G PHY/MAC Enablers**

· Physical channels, physical reference signals

· Frame structure/numerology

· Waveform

· Massive MIMO

· Synchronization

· Beam management

**Part II: 5G Design principles**

· Forward compatibility

· Lean design

· Stay in the box

· Avoid strict timing relations

· TDD and FDD design

· Low latency

**Conclusion**

**at 10am in A301**

**Abstract:** One of the paramount challenges of this century is that of understanding complex, dynamic, large-scale networks. Such high-dimensional networks, including social, financial, and biological networks, cover the planet and dominate modern life. In this talk, we propose novel approaches to inference in such networks, for both active (interventional) and passive (observational) learning scenarios. We highlight how timing could be utilized as a degree of freedom that provides rich information about the dynamics. This information allows resolving direction of causation even when only a subset of the nodes is observed (latent setting). In the presence of large data, we propose algorithms that identify optimal or near-optimal approximations to the topology of the network.

Biography: Negar Kiyavash is Willett Faculty Scholar at the University of Illinois and a joint Associate Professor of Industrial and Enterprise Engineering (IE) and Electrical and Computer Engineering (ECE). She is the director of Advance Data Analytics Program in IE and is further affiliated with the Coordinated Science Laboratory (CSL) and the Information Trust Institute. She received her Ph.D. degree in ECE from the University of Illinois at Urbana-Champaign in 2006. Her research interests are in design and analysis of algorithms for network inference and security. She is a recipient of NSF CAREER and AFOSR YIP awards and the Illinois College of Engineering Dean's Award for Excellence in Research.

**at 11am in A301**

**Abstract:**

We consider the input-constrained feedback capacity of certain binary-input memoryless channels. Here, the channel inputs are constrained to be binary sequences in which consecutive 1s are disallowed, and the channel output at the end of one transmission is fed back to the transmitter ahead of the next transmission. It is known that the capacity computation for such channels can be formulated as an average-reward dynamic program (DP). In the case of the binary erasure channel, we obtain an exact expression for the capacity by explicitly solving the Bellman equation associated with the DP formulation. Furthermore, the optimal policy of the DP leads to a simple and elegant zero-error coding scheme that achieves capacity. More generally, for any binary-input, binary-output channel, we are able to obtain an explicit expression for the feedback capacity under the no-consecutive-1s input constraint by solving a Bellman equation. In particular, our results apply to the binary symmetric channel and the Z channel. The optimal policy for the DP yields a capacity-achieving coding scheme based on the posterior matching principle.

This is joint work with Oron Sabag and Haim Permuter (Ben-Gurion University, Israel).
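The abstract does not reproduce the closed-form BEC expression, but solutions of such Bellman equations typically reduce to a one-parameter maximization. The single-letter form below (parametrized by p, a hypothetical probability of emitting a 1 when the constraint allows it) is an assumed illustration, not quoted from the talk; a sanity check is that at erasure probability 0 it recovers the noiseless capacity of the no-consecutive-1s constraint, log2 of the golden ratio:

```python
import numpy as np

def Hb(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def cap_fb_bec(eps, grid=None):
    """Assumed single-letter form (hypothetical; see lead-in):
    C(eps) = max_p (1 - eps) * Hb(p) / (1 + (1 - eps) * p),
    evaluated by a dense grid search over p in (0, 1)."""
    if grid is None:
        grid = np.linspace(1e-6, 1 - 1e-6, 100_001)
    return float(np.max((1 - eps) * Hb(grid) / (1 + (1 - eps) * grid)))

golden = (1 + np.sqrt(5)) / 2
print(cap_fb_bec(0.0), np.log2(golden))   # both ~0.6942 bits
print(cap_fb_bec(0.5))                    # capacity estimate at eps = 0.5
```

The eps = 0 check works because max_p Hb(p)/(1+p) is exactly the entropy rate of the golden-mean constraint, a standard fact from constrained coding.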

Biography:

Navin Kashyap received the B.Tech. degree in Electrical Engineering from the Indian Institute of Technology, Bombay, in 1995, the M.S. degree in Electrical Engineering from the University of Missouri-Rolla in 1997, and the M.S. degree in Mathematics and the Ph.D. degree in Electrical Engineering from the University of Michigan, Ann Arbor, in 2001. From November 2001 to November 2003, he was a postdoctoral research associate at the University of California, San Diego. From 2004 to 2010, he was on the faculty of the Department of Mathematics and Statistics at Queen's University, Kingston, Ontario, Canada. In January 2011, he joined the Department of Electrical Communication Engineering at the Indian Institute of Science, where he is currently a Professor. His research interests lie primarily in the application of combinatorial and probabilistic methods in information and coding theory. Prof. Kashyap served on the editorial board of the IEEE Transactions on Information Theory during the period 2009-2014. He is at present an Associate Editor for the SIAM Journal on Discrete Mathematics. He has been appointed as a Distinguished Lecturer of the IEEE Information Theory Society for 2017-2018.

**at 2pm in Amphi Opale**

**Abstract:**

Users of cloud systems demand that their data be reliably stored and quickly accessible. Cloud providers today strive to meet these demands through over-provisioning: keeping processors ready to go at all times and replicating data over multiple servers. Special erasure codes have been designed and adopted in practice as a more storage-efficient way to provide reliability. We will show how coding reduces download time of large files, in addition to providing reliability against disk failures. We will introduce a fork-join queuing framework to model multiple users requesting their data simultaneously, and demonstrate the trade-off between the download time and the amount of storage space. We will explain how for the same total storage used, coding exploits the diversity and parallelism in the system better than today's replication schemes, and hence gives faster download. At the end, we will mention several problems that arise in distributed computing systems when some servers are straggling in completing their tasks, and the cloud data is hot, large, changing, and expanding.
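The claim that coding speeds up downloads can be illustrated with order statistics of exponential service times (a standard modeling assumption in this line of work, not a result quoted from the talk): with an (n, k) MDS code, the download completes once the k fastest of n servers respond, versus waiting for all k fixed servers in an uncoded fork-join split.

```python
import numpy as np

rng = np.random.default_rng(42)
trials, n, k = 100_000, 10, 5

# Service times: i.i.d. Exp(1) per server (a simplifying assumption).
T = rng.exponential(1.0, size=(trials, n))

# (n, k) MDS coding: the file is recovered once the k fastest of the
# n servers respond -> k-th order statistic of n samples.
t_coded = np.sort(T, axis=1)[:, k - 1]

# Uncoded fork-join: k chunks on k fixed servers -> max of k samples.
t_uncoded = T[:, :k].max(axis=1)

print(t_coded.mean(), t_uncoded.mean())
# Theory: E[coded] = H_n - H_{n-k} and E[uncoded] = H_k,
# where H_m is the m-th harmonic number.
```

With n = 10 and k = 5 the coded mean is H_10 - H_5 ≈ 0.646 versus H_5 ≈ 2.283 uncoded, which is the diversity/parallelism gain the abstract refers to (a fair comparison must of course also account for the extra storage, as the talk discusses).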

**Bio: **

Emina Soljanin is a Professor at Rutgers University. Before moving to Rutgers in January 2016, she was a (Distinguished) Member of Technical Staff for 21 years in the Mathematical Sciences Research Center of Bell Labs. She works as an information, coding, and, more recently, queueing theorist. Her interests and expertise are wide. Over the past quarter century, she has participated in numerous research and business projects, as diverse as power system optimization, magnetic recording, color space quantization, hybrid ARQ, network coding, data and network security, and quantum information theory and networking. Dr. Soljanin served as the Associate Editor for Coding Techniques of the IEEE Transactions on Information Theory, on the Information Theory Society Board of Governors, and in various roles on other journal editorial boards and conference program committees. She is a 2017 outstanding alumnus of the Texas A&M School of Engineering, an IEEE Fellow, a 2016/17 Distinguished Lecturer for the IEEE Information Theory Society, and is currently serving as the Second Vice President for the society.

**Amphi B312, Télécom ParisTech, 46 rue barrault, Paris 13**

Less than 10 years ago, Bell Labs developed coherent optical networks, the new industry standard serving billions of users. They allow us to intensify our smartphone usage at constant, or sometimes even reduced, cost, while continuing to live with the illusion of wireless (and above all fiber-less) communications.

Behind this revolution, another is brewing. It is a safe bet that the benefits of the digitization introduced in coherent systems will not be limited to sustaining the growth of (digital, of course) data flows. Too many everyday objects have already changed in nature thanks to digital technologies. What if digitized networks became capable of listening, of analyzing themselves, of making decisions on their own... of becoming intelligent? Some also envision exploiting these technological advances to invent very-high-rate laser communication systems between the Earth and space.

In fiber-optic systems and networks, coherent technologies have already made technically and economically viable some dreams that many had consigned to oblivion. It is a safe bet that these dreams-come-true will not be the last.

laboratoire CITI - INRIA - INSA-Lyon

**Amphi B310, 16H30, Télécom ParisTech, 46 rue barrault, Paris 13**

Supporting IoT communications in future 5G networks is a fundamental challenge. IoT services will rely on the deployment of billions of things communicating varied traffic under many different quality-of-service requirements. Reliability and delay constraints will deeply modify the classical paradigm of communication theory, especially for services generating bursty traffic. For some kinds of services, often referred to as tactile internet or mission-critical communications, where IoT nodes are deployed in smart environments (e.g., smart cities), the communication link is part of a distributed control system and therefore requires strong reliability and delay guarantees.

After an overview of the main challenges and upcoming technologies, we will focus on scenarios where a very large number of nodes wish to transmit infrequent but reliability-critical short messages. We address the problem from an information-theoretic perspective, deriving some fundamental limits associated with this scenario.