Archive of ComNum group seminars

06/06/2019 [ComNum seminar] Wireless Networks via the Cloud: An Information Theoretic View

Speaker(s) & affiliation(s):

Prof. Shlomo Shamai

Seminar presentation:


at 11am in Amphi Grenat


Abstract:

Cloud-based wireless networks, also known as Cloud Radio Access Networks (C-RANs), are emerging as appealing architectures for next-generation wireless/cellular systems, whereby the processing/encoding/decoding is migrated from the local base stations/radio units (RUs) to a central unit (CU) in the "cloud". The network operates via fronthaul digital links connecting the CU and the RUs (which act as relays). The uplink and downlink are examined from a network information theoretic perspective, with emphasis on simple oblivious processing at the RUs, which is also attractive from a practical point of view. The analytic approach, applied to simple wireless/cellular models, illustrates the considerable performance gains associated with advanced techniques inspired by network information theory, which also carry practical implications. An outlook pointing out interesting theoretical directions, with reference to Fog Radio Access Networks (F-RANs), concludes the presentation.

Biography:

Professor Shlomo Shamai is a distinguished professor at the Department of Electrical Engineering at the Technion - Israel Institute of Technology. Professor Shamai is an information theorist and winner of the 2011 Shannon Award. Shlomo Shamai received the B.Sc., M.Sc., and Ph.D. degrees in electrical engineering from the Technion in 1975, 1981 and 1986, respectively. During 1975-1985 he was with the Israeli Communications Research Labs. Since 1986 he has been with the Department of Electrical Engineering at the Technion - Israel Institute of Technology, where he is now the William Fondiller Professor of Telecommunications. His research areas cover a wide spectrum of topics in information theory and statistical communications. Prof. Shamai is an IEEE Fellow and a member of the International Union of Radio Science (URSI).

Contact(s):

23/05/2019 [ComNum PhD seminar] Compression with random access

Speaker(s) & affiliation(s):

Shashank Vatedka

Seminar presentation:

at 1pm in A301


Abstract:

In compressing large files, it is often desirable to be able to efficiently recover and update short fragments of the data. Classical compression schemes such as Lempel-Ziv are optimal in terms of compression rate, but local recovery of even one bit requires decompressing the entire codeword. Let us define the local decodability of a compression scheme as the minimum number of codeword bits that need to be read in order to recover a single bit of the message/raw data. Similarly, let the update efficiency be the minimum number of codeword bits that need to be modified in order to update a single message bit. For a space-efficient (entropy-achieving) compression scheme, what is the smallest local decodability and update efficiency that we can achieve? Can we simultaneously achieve small local decodability and update efficiency, or is there a tradeoff between the two? All this, and more (including a new compression algorithm), will be discussed in the talk.
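
The locality notion above can be illustrated with a minimal sketch (our own illustration with an arbitrary block size, not the speaker's scheme): compressing the file in fixed-size blocks buys bounded local decodability and update efficiency at a small cost in rate, since each block is decompressed or rewritten independently.

    import zlib

    # Toy sketch, not the speaker's scheme: block-wise compression trades a
    # little rate for locality. Reading one bit decompresses a single block
    # instead of the whole codeword; updating one bit rewrites one block.

    BLOCK = 4096  # bytes per independently compressed block (illustrative)

    def compress_blocks(data: bytes):
        """Independently deflate-compress each fixed-size block."""
        return [zlib.compress(data[i:i + BLOCK]) for i in range(0, len(data), BLOCK)]

    def read_bit(blocks, bit_index: int) -> int:
        """Local decoding: touch only the block containing the requested bit."""
        byte_index, offset = divmod(bit_index, 8)
        block, pos = divmod(byte_index, BLOCK)
        return (zlib.decompress(blocks[block])[pos] >> (7 - offset)) & 1

    data = b"abracadabra " * 2000
    blocks = compress_blocks(data)
    assert read_bit(blocks, 42) == (data[42 // 8] >> (7 - 42 % 8)) & 1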

Contact(s):

Ciblat Philippe

09/05/2019 [ComNum PhD seminar] Neural network approaches to point lattice decoding

Speaker(s) & affiliation(s):

Corlay Vincent

Seminar presentation:

at 1pm in A301

Abstract:

Point lattices are the Euclidean counterparts of error-correcting codes. One of the most challenging lattice problems is the decoding problem: given a point in Euclidean space, find the closest lattice point. Since this is a classification problem, it seems natural to tackle it with neural networks. Indeed, following the deep learning revolution that started in 2012, neural networks have become common for classification tasks in many fields. The work presented in this talk is a first theoretical step towards a better understanding of which neural network architectures are best suited to the decoding problem.

Namely, we first establish a duality between the decoding problem and a piecewise linear function whose number of affine pieces grows exponentially with the space dimension.
We then prove that deep neural networks need only a number of neurons growing polynomially in the space dimension to compute some of these functions, whereas this complexity is exponential for any shallow neural network.
This has two consequences:
1. In some cases, deep neural networks can be used to implement low-complexity decoding algorithms.
2. Euclidean lattices provide good examples to illustrate the benefit of depth over width in representing functions with neural networks.
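
For concreteness, here is a minimal sketch of the decoding problem itself (our illustration, not the speaker's networks): Babai's rounding decoder, which is exact for the cubic lattice Z^n and only approximate for a skewed basis, which is precisely why more powerful decoders are of interest.

    import numpy as np

    # Toy sketch, not the speaker's method: approximate closest-point decoding
    # by rounding in the basis coordinates (Babai). Exact when B = I (the Z^n
    # lattice), approximate for a general basis B.

    def babai_round(B, y):
        """Approximate closest point to y in the lattice {B z : z integer}."""
        z = np.rint(np.linalg.solve(B, y))  # round the coordinates of y in basis B
        return B @ z

    B = np.array([[1.0, 0.9],
                  [0.0, 0.5]])              # a skewed 2-D lattice basis
    y = np.array([1.7, 0.8])
    print("decoded lattice point:", babai_round(B, y))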

Contact(s):

Ciblat Philippe

22/02/2019 [ComNum PhD seminar] Cache Freshness Updates with Age-Related Non-Uniform Update Durations

Speaker(s) & affiliation(s):

Haoyue TANG

Seminar presentation:

at 1.30pm in A301

Abstract:

Caching systems are employed to reduce backhaul traffic and improve data availability. In many caching applications, such as sensor networks and satellite broadcasting, the data at the remote sources are dynamic, and changes cannot be pushed to the cache immediately due to communication resource constraints. When a user requests data from the cache, the cached data may therefore be outdated. To guarantee data freshness from the perspective of the caching users, optimal cache update strategies need to be designed. Previous results reveal that when the update duration for each file is constant and identical across files, the optimal inter-update interval for each file should be proportional to the square root of its popularity.


In this talk, we address the following problem: when the update duration for each file is non-uniform and age-related, how should file update strategies be designed so that the overall freshness of the caching system is guaranteed? We formulate this as a relaxed optimization problem and propose a corresponding scheduling strategy. We find that when the update duration is dynamic and age-related, the AoI-optimal inter-update interval may be far from the square-root law of previous results.
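
A toy numerical sketch of the square-root law quoted above (our illustration; the popularity values and the update-frequency budget are assumptions, and the scaling direction follows the abstract's statement):

    import numpy as np

    # Toy sketch of the square-root law for constant, identical update
    # durations, as stated above: the inter-update interval of file i scales
    # with sqrt(p_i), where p_i is its popularity.

    def sqrt_law_intervals(popularity, update_budget=1.0):
        """Intervals T_i proportional to sqrt(p_i), scaled so that the total
        update frequency sum(1/T_i) equals update_budget."""
        s = np.sqrt(np.asarray(popularity, dtype=float))
        k = (1.0 / s).sum() / update_budget  # scale to meet the frequency budget
        return k * s

    p = [0.5, 0.3, 0.15, 0.05]   # file popularities
    print("inter-update intervals:", sqrt_law_intervals(p))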

Contact(s):

Ciblat Philippe

01/02/2019 [ComNum PhD seminar] A Fundamental Storage-Communication Tradeoff in Distributed Computing with Straggling Nodes

Speaker(s) & affiliation(s):

Yan Qifa

Seminar presentation:

at 1pm in A301

Abstract:

Distributed computing has emerged as one of the most important paradigms for speeding up large-scale data analysis tasks such as machine learning. A well-known computing framework, MapReduce, can handle tasks with large data sizes. In such systems, tasks are typically decomposed into map and reduce functions: the map functions can be computed by different nodes across the network, and the final outputs are obtained by combining the outputs of the map functions through the reduce functions. An important problem in such systems is how to carry out the computation when there are straggling nodes, i.e., nodes that compute too slowly or fail altogether.

In this talk, the optimal storage-computation tradeoff is characterized for a MapReduce-like distributed computing system with straggling nodes, where only a subset of the nodes can be used to compute the desired output functions. The result holds for arbitrary output functions and thus generalizes previous results that were restricted to linear functions. Specifically, in this work we propose a new information-theoretic converse and a new matching coded computing scheme, which we call coded computing for straggling systems (CCS).
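
To make the straggler issue concrete, here is a minimal sketch of the classical linear special case that the talk generalizes (our illustration, not the CCS scheme itself): MDS-coded matrix-vector multiplication, where any k of the n workers' answers suffice.

    import numpy as np

    # Toy sketch of straggler-tolerant coded computing in the linear special
    # case: rows of A are encoded with a Vandermonde (MDS) code so that
    # A @ x is recoverable from any k of the n workers.

    rng = np.random.default_rng(1)
    k, n = 3, 5                          # tolerates up to n - k stragglers
    A = rng.standard_normal((k, 4))      # data matrix, one row per data block
    x = rng.standard_normal(4)

    G = np.vander(np.arange(1, n + 1), k, increasing=True)  # n x k generator
    coded_rows = G @ A                   # worker i stores coded_rows[i]
    results = {i: coded_rows[i] @ x for i in range(n)}       # workers' answers

    survivors = [0, 2, 4]                # workers 1 and 3 straggle
    y = np.array([results[i] for i in survivors])
    Ax = np.linalg.solve(G[survivors, :].astype(float), y)   # decode A @ x
    assert np.allclose(Ax, A @ x)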

Contact(s):

Ciblat Philippe

31/01/2019 Comelec general seminar: "An information theoretic perspective on web privacy"

Speaker(s) & affiliation(s):

Elza ERKIP (New York University Tandon School of Engineering)

Seminar presentation:

Amphi OPALE, 2pm, Télécom ParisTech, 46 rue Barrault, Paris 13

When we browse the internet, we expect that our social network identities and web activities will remain private. Unfortunately, in reality, users are constantly tracked on the internet. As web tracking technologies become more sophisticated and pervasive, there is a critical need to understand and quantify web users' privacy risk. In other words, what is the likelihood that users on the internet can be uniquely identified from their online activities?

This talk provides an information theoretic perspective on web privacy by considering two main classes of privacy attacks, distinguished by the information they extract about a user. (i) Attributes capture the user's activities on the web and could include their browsing history or their group memberships. Attacks that exploit attributes are called "fingerprinting attacks" and usually include an active query stage by the attacker. (ii) Relationships capture the user's interactions with other users on the web, such as their friendship relations on a certain social network. Attacks that exploit relationships are called "social network de-anonymization attacks." For each class, we show how information theoretic tools can be used to design and analyze privacy attacks and to provide explicit characterizations of the associated privacy risks.
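
A back-of-the-envelope sketch of the identifiability question (our illustration, not the speaker's model): with independent binary attributes, a user is likely unique among N users roughly once the fingerprint carries more than log2(N) bits.

    # Toy sketch, not the speaker's model: m independent fair binary
    # attributes give a fingerprint of about m bits of entropy. Two random
    # users agree on any one fair attribute with probability 1/2.

    def prob_unique(N, match_prob):
        """P(no other user among N-1 shares the full fingerprint)."""
        return (1.0 - match_prob) ** (N - 1)

    m = 20                                # number of binary attributes
    match = 0.5 ** m                      # P(two users agree on all m)
    print(f"~{m} fingerprint bits; "
          f"P(unique among 1e6 users) = {prob_unique(10**6, match):.2f}")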

Contact(s):

Wigger Michele

24/01/2019 [ComNum PhD seminar] About the entropic uncertainty principle

Speaker(s) & affiliation(s):

Asgari Fatemeh

Seminar presentation:

in A301 at 1pm

Abstract:

The entropy power inequality (EPI), first stated by Shannon (1948), asserts that the entropy power of the sum of two independent random variables X and Y is not less than the sum of the entropy powers of X and Y. It has found many applications and has received several proofs and generalizations, in particular for dependent variables X and Y. We propose new conditions under which the EPI holds for dependent summands and discuss the implications of this result. This is joint work with Mohammad Hossein Alamatsaz.
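
For reference, the scalar statement of the EPI, with h denoting differential entropy:

    % Scalar form of the EPI, with entropy power N(X) = e^{2 h(X)} / (2 \pi e):
    \[
      N(X + Y) \;\ge\; N(X) + N(Y)
      \qquad \text{for independent } X, Y,
    \]
    % with equality when X and Y are Gaussian.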

The well-known uncertainty principle used in physics is based on the Kennard-Weyl inequality (1928) and was later strengthened in terms of Shannon entropy, leading to the entropic uncertainty principle (EUP). The EUP was conjectured by Hirschman (1957) and finally proved by Beckner (1975), based on Babenko's inequality with optimal constants. Beckner's proof of Babenko's inequality is extremely difficult, and the resulting derivation of the EUP is indirect (via Rényi entropies). A simple proof recently published in Annals of Physics (2015) turns out to be very questionable. We give a simple proof of a weaker, "local" EUP using the Hermite decomposition. This is joint work with Olivier Rioul.
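
For reference, the sharp EUP in one common normalization (the constant depends on the chosen Fourier convention):

    % For a unit-norm f with \hat f(\xi) = \int f(x)\, e^{-2\pi i x \xi}\, dx,
    % the densities |f|^2 and |\hat f|^2 satisfy (Hirschman-Beckner)
    \[
      h\!\left(|f|^2\right) + h\!\left(|\hat f|^2\right) \;\ge\; \log\frac{e}{2},
    \]
    % with equality for Gaussians; the Kennard-Weyl variance bound follows from it.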

Contact(s):

Ciblat Philippe

18/10/2018 [ComNum PhD seminar] Challenge codes and Physically Unclonable Functions

Speaker(s) & affiliation(s):

Schaub Alexander

Seminar presentation:

in A301 at 1.30pm

Abstract:

Physically unclonable functions (PUFs) are used in various applications requiring robust authentication. These systems exploit unpredictable process variations in electronic circuits. These process variations uniquely identify the produced hardware, which exhibits distinct properties in terms of, for example, propagation delays inside the circuit. By measuring and exploiting these properties, one can determine a "fingerprint" of the circuit, which cannot be physically replicated. This fingerprint can then be used, for instance, to produce a cryptographic key. The advantage is that this key does not need to be explicitly stored, which reduces the security risk. Other applications include challenge-response protocols, where the responses are determined from the physical properties of the circuit.

For a given type of PUF, the Loop-PUF, these propagation delay differences can be modeled by n Gaussian random variables. A challenge corresponds to a vector of +/-1 values, and an identifier bit is the sign of the signed sum of the Gaussian realizations, with signs given by the challenge vector. We address the following question: what is the joint entropy of these sign bits?

The exact calculation of the maximum entropy over the set of all possible challenges can be carried out only for very small values of n. We provide a combinatorial extension that yields the exact values for n = 3 and 4. For n greater than or equal to 5, the method soon becomes intractable and one must resort to numerical computations. The value of the maximum entropy can be estimated reliably by defining equivalence classes of challenges corresponding to the same joint probabilities. This method was found to be numerically tractable for values of n up to 7. Asymptotic expressions for the max-entropy are obtained using the theory of threshold Boolean functions.
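
A toy numerical sketch of the model above (our illustration, not the speaker's code): estimating the joint entropy of the sign bits by Monte Carlo for small n.

    import itertools
    import numpy as np

    # Toy sketch: Monte-Carlo estimate of the joint entropy (in bits) of the
    # Loop-PUF sign bits over all 2^n challenges. Each trial draws a fresh
    # device, i.e., a fresh vector of n Gaussian delays.

    def joint_entropy_bits(n, trials=200_000, seed=0):
        rng = np.random.default_rng(seed)
        challenges = np.array(list(itertools.product([-1, 1], repeat=n)))
        delays = rng.standard_normal((trials, n))   # n Gaussian delays per device
        bits = (delays @ challenges.T) > 0          # one sign bit per challenge
        _, counts = np.unique(bits, axis=0, return_counts=True)
        p = counts / trials                         # empirical response-word law
        return float(-(p * np.log2(p)).sum())

    print(joint_entropy_bits(3))   # joint entropy of the 8 sign bits for n = 3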

Contact(s):

Ciblat Philippe

28/09/2018 [ComNum seminar] Cloud Radio Access Networks, Distributed Information Bottleneck, and more

Speaker(s) & affiliation(s):

Shlomo Shamai (Technion, Israel)

Seminar presentation:


at 11am in F900


Abstract:

We consider transmission over a cloud radio access network (CRAN), focusing on the framework of oblivious processing at the relay nodes (radio units), i.e., the relays are not cognizant of the users' codebooks. This approach is motivated by future wireless communications (5G and beyond), and the theoretical results connect to a variety of different information theoretic models and problems. First, it is shown that relaying a la Cover-El Gamal, i.e., compress-and-forward with joint decompression and decoding, which reflects 'noisy network coding,' is optimal. The penalty of obliviousness is also demonstrated to be at most a constant gap when compared to cut-set bounds. Naturally, due to the oblivious (nomadic) constraint, the CRAN problem intimately connects to Chief Executive Officer (CEO) source coding under a logarithmic loss distortion measure. Furthermore, we identify and elaborate on some interesting connections with the distributed information bottleneck model, for which we characterize optimal tradeoffs between rates (i.e., complexity) and information (i.e., accuracy) in the discrete and vector Gaussian frameworks. Further connections to 'information combining' and 'common reconstruction' are also pointed out. In the concluding outlook, some interesting problems are mentioned, such as the characterization of the optimal input distributions under users' power limitations and rate-constrained compression at the relay nodes.
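
For reference, the single-encoder special case of the information bottleneck tradeoff mentioned above (the distributed version treated in the talk generalizes this):

    % U compresses the observation X while preserving information about Y,
    % under the Markov chain U -- X -- Y:
    \[
      \max_{p(u \mid x)} \; I(U;Y)
      \quad \text{subject to} \quad I(U;X) \le R,
    \]
    % i.e., accuracy I(U;Y) is traded against complexity I(U;X); equivalently,
    % maximize the Lagrangian I(U;Y) - \beta\, I(U;X) for a parameter \beta > 0.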

Joint work with I.E. Aguerri (Paris Research Center, Huawei France), A. Zaidi (Université Paris-Est, Paris), and G. Caire (USC-LA and TUB, Berlin). The research is supported by the European Union's Horizon 2020 Research and Innovation Programme, grant no. 694630.

Contact(s):

Wigger Michele

13/09/2018 Comelec general seminar: "Spatial Data Focusing"

Speaker(s) & affiliation(s):

Julien Sarrazin

TL2E - Sorbonne Université / Laboratoire d'Electronique & Electromagnétisme (L2E)

Seminar presentation:

Amphi B310, 2pm, Télécom ParisTech, 46 rue Barrault, Paris 13

"Spatial Data Focusing: an alternative to Beamforming for geocasting scenarios"

The capability of an antenna to focus radiated signals into a well-defined direction is fundamentally limited by its size (the smaller, the less directive), as a result of diffraction or, equivalently, owing to the properties of the Fourier transform. This applies to single antennas as well as to arrays of multiple antennas.
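
The diffraction limit invoked here is the standard aperture rule of thumb:

    % For an aperture (or array) of size D radiating at wavelength \lambda,
    % the achievable beamwidth is on the order of
    \[
      \theta \;\approx\; \frac{\lambda}{D} \ \text{radians},
    \]
    % so halving the beamwidth requires doubling the aperture.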

In this seminar, a Spatial Data Focusing technique is introduced as an alternative approach to overcome the beamwidth limitations of antenna arrays due to their finite aperture size. The proposed approach aims to focus the transmitted data rather than the transmitted power. This scheme enables the wireless broadcast of information to specific spatial locations using fewer antenna elements than classical beamforming techniques. Different configurations for implementing this scheme will be discussed, and it will be shown that focusing the data is spatially more selective than focusing the power.

Contact(s):

Sibille Alain

Document(s):

05/07/2018 [ComNum seminar] Hybrid Beamforming and Detection Algorithms for 5G Networks

Speaker(s) & affiliation(s):

Oussama Damen

Seminar presentation:

in A301 at 10am


Abstract:

5G systems are expected to support a 1000-fold increase in mobile data traffic compared to 2015, a much higher density of mobile devices per unit area, and decreased latency, especially for M2M/D2D communications. Among the front-runner technologies to achieve such lofty aims are multi-tier heterogeneous networks, which combine outdoor RF macrocells and indoor gigabit (wide-band) small cells, and massive MIMO. These technologies involve very high carrier frequencies with massive bandwidths, extreme base station and device densities, and unprecedented numbers of antennas.
In this talk, I will go over the work done in my group at the University of Waterloo on signal processing and on the design and analysis of algorithms for 5G networks. The talk is divided into three main parts. First, I will briefly talk about hybrid beamforming for single- and multi-user massive MIMO. Second, I will present a brief analysis of some HetNets from the physical layer perspective (i.e., outage probability and diversity/degrees-of-freedom analysis). Lastly, I will discuss techniques to simplify the detection (combined with decoding) algorithms at the receiver side.
Massive MIMO is used primarily to combat the high absorption of mm-wave channels; however, this comes at a price: a large number of (expensive) RF chains (i.e., DACs and PAs) as well as a huge overhead for training and channel-knowledge feedback. Hybrid analog (RF) and digital (baseband) beamforming reduces the required number of RF chains and allows limited feedback. An appropriate choice of beamforming codebooks, combined with selection algorithms, enables this simplification with limited loss compared to the optimal solution.
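
As a minimal sketch of the codebook idea (our illustration with assumed parameters, not the speaker's algorithms): analog beam selection from a DFT codebook, the first stage of a hybrid beamformer, where only the beam index needs to be fed back.

    import numpy as np

    # Toy sketch: single-user analog beam selection from a DFT codebook.
    # Array size, codebook size, and the i.i.d. channel are illustrative.

    def dft_codebook(n_antennas, n_beams):
        """Columns are unit-norm steering vectors for n_beams directions."""
        angles = np.arange(n_beams) / n_beams
        n = np.arange(n_antennas)[:, None]
        return np.exp(2j * np.pi * n * angles[None, :]) / np.sqrt(n_antennas)

    def select_beam(h, codebook):
        """Pick the codeword maximizing received power |h^H f|^2; only the
        index is fed back (the 'limited feedback' mentioned above)."""
        gains = np.abs(h.conj() @ codebook) ** 2
        return int(np.argmax(gains)), gains.max()

    rng = np.random.default_rng(0)
    h = rng.standard_normal(64) + 1j * rng.standard_normal(64)  # toy channel
    F = dft_codebook(n_antennas=64, n_beams=16)
    idx, gain = select_beam(h, F)
    print(f"selected beam {idx}, beamforming gain {gain:.2f}")
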
Conventionally, HetNets are adopted to broaden network coverage and alleviate the near-far problem in cellular networks. In the literature, the cooperative benefits of HetNets are limited to load balancing, traffic offloading, or resource sharing. I will talk about the cooperation benefits of HetNets in terms of outage probability and diversity order. Finally, I will talk about how to reduce the complexity of near-optimal detection algorithms at the receiver side using established techniques such as lattice reduction and conditional optimization.

Bio: Mohamed Oussama Damen is a professor of Electrical and Computer Engineering at the University of Waterloo. Professor Damen has held research positions at multiple academic institutions, including the École Nationale Supérieure des Télécommunications in Paris, France, the University of Minnesota, and the University of Alberta. In June 2004 he joined the University of Waterloo, where he held the Nortel Networks Associate Chair in Advanced Telecommunications from April 2005 to April 2010. His current research interests include coding theory (particularly lattices and coding and decoding algorithms), cross-layer optimization, multiple-input multiple-output and space-time communications, multiuser detection, and wireless communications. Professor Damen has received several awards, including the University of Waterloo ECE Research Excellence Award in 2007, an Early Researcher Award of the Province of Ontario for 2007 to 2010, and a Junior Research Fellowship from the French Research Ministry from 1996 to 1999. He has published numerous journal and conference articles with the IEEE, of which he is a Senior Member.

Contact(s):

Rekaya - Ben Othman Ghaya

22/06/2018 [ComNum PhD seminar] Interference Management in Sectored Cellular Systems for Uplink

Speaker(s) & affiliation(s):

Gelincik Samet

Présentation du séminaire :

at 2.30pm in A301

Abstract:

The main goal of future cellular communication systems is to meet users' high data rate requirements with reasonable latency. There are two main factors limiting high data rate communication: fading and interference. While fading reduces the coverage area and the reliability of any point-to-point link, e.g. from a mobile user to a base station (BS), interference constrains the reuse of spectral resources (frequency slots, time, etc.) in space, hence restricting the overall spectral efficiency expressed in bits/sec/Hz/BS. Joint processing of the signals received by multiple BSs, which enables the exploitation of the correlation between received signals, is one method to reduce the detrimental effect of interference. In this talk, I will present coding schemes that enable such joint processing: one is based on cooperation between BSs, and the other two on the use of cloud radio access networks (C-RAN). I will also present lower bounds on the achievable degrees of freedom (DoF).

Contact(s):

Ciblat Philippe
