**in A301 at 10am**

**Abstract**

Data traffic has grown dramatically over the past few years, mainly due to video streaming. By utilizing the storage capabilities of network nodes, caching can greatly improve spectral efficiency and reduce transmission latency (delay). In this talk, we develop and analyze caching schemes for two nontrivial networks: the two-layer relay network and the server-aided device-to-device (D2D) network. For both networks, our schemes are order-optimal and further reduce the transmission delay compared to previously known caching schemes. Moreover, for the two-layer 2-relay network, we surprisingly find that once each relay's cache size equals 38.2% of the full library's size, increasing the relay's cache memory CANNOT further reduce the transmission delay. For the server-aided D2D network, our scheme achieves a *cooperation gain* and a *parallel gain* by fully exploiting user cooperation and optimally allocating communication loads among the server and users.

**in A301 at 1.15pm**

**Abstract**

Privacy is the problem of ensuring limited leakage of information about sensitive features while sharing information (utility) about non-private features with legitimate data users. Even as differential privacy has emerged as a strong desideratum for privacy, there is an equally strong need for context-aware, utility-guaranteeing approaches in many data-sharing settings. This talk approaches this dual requirement using an information-theoretic approach that includes operationally motivated leakage measures, design of privacy mechanisms, and verifiable implementations using generative adversarial models. Specifically, we introduce maximal alpha leakage as a new class of adversarially motivated, tunable leakage measures based on accurately guessing an arbitrary function of a dataset conditioned on a released dataset. The choice of alpha determines the specific adversarial action, ranging from refining a belief for alpha = 1 to guessing the best posterior for alpha = ∞; for these extremal values the measure simplifies to mutual information (MI) and maximal leakage (MaxL), respectively. The problem of guaranteeing privacy can then be viewed as one of designing a randomizing mechanism that minimizes (maximal) alpha leakage subject to utility constraints.

We then present bounds on the robustness of privacy guarantees that can be made when designing mechanisms from a finite number of samples. Finally, we will briefly present a data-driven approach, generative adversarial privacy (GAP), to design privacy mechanisms using neural networks. GAP is modeled as a constrained minimax game between a privatizer (intent on publishing a utility-guaranteeing learned representation that limits leakage of the sensitive features) and an adversary (intent on learning the sensitive features). Time permitting, we will briefly discuss the learning-theoretic underpinnings of GAP as well as connections to the problem of algorithmic fairness.
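For reference, the two extremal values mentioned above admit well-known closed forms (standard expressions from the leakage literature, not taken verbatim from the talk): for a mechanism $P_{Y|X}$,

$$\lim_{\alpha \to 1} \mathcal{L}_\alpha(X \to Y) = I(X;Y), \qquad \lim_{\alpha \to \infty} \mathcal{L}_\alpha(X \to Y) = \log \sum_{y} \max_{x\,:\,P_X(x)>0} P_{Y|X}(y \mid x),$$

the right-hand limit being maximal leakage (MaxL).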

This work is a result of multiple collaborations: (a) maximal alpha leakage with J. Liao (ASU), O. Kosut (ASU), and F. P. Calmon (Harvard); (b) robust mechanism design with M. Diaz (ASU), H. Wang (Harvard), and F. P. Calmon (Harvard); and (c) GAP with C. Huang (ASU), P. Kairouz (Google), X. Chen (Stanford), and R. Rajagopal (Stanford).

**in A301 at 2pm**

**Abstract:**

Over-complete bases (frames) play the role of analog codes for compression (compressed sensing) and redundant signaling (erasure correction). The eigenvalue distribution of a random subset of the frame vectors determines the code performance with noisy observations. For the highly symmetric class of equiangular tight frames (ETFs), we show that if we keep the frame aspect ratio and the selection probability of a frame vector fixed, then the eigenvalue distribution converges asymptotically to the MANOVA distribution. We also show that the MANOVA distribution gives a lower bound on the random-subset d'th moment, for d = 1, 2, 3 and 4, for all frames of a fixed aspect ratio. We conjecture that MANOVA also gives an asymptotic lower bound for the "inverse moment" (d = -1), i.e., for the noise enhancement under subset inversion. Thus, the ETF is the most robust analog code for noisy compressed sensing and redundant signaling. Joint work with Marina Haikin and Matan Gavish.

**Bio:**

Ram Zamir was born in Ramat-Gan, Israel in 1961. He received the B.Sc., M.Sc. (summa cum laude) and D.Sc. (with distinction) degrees from Tel-Aviv University, Israel, in 1983, 1991, and 1994, respectively, all in electrical engineering. In the years 1994 - 1996 he spent a post-doctoral period at Cornell University, Ithaca, NY, and at the University of California, Santa Barbara. In 2002 he spent a Sabbatical year at MIT, and in 2008 and 2009 short Sabbaticals at ETH and MIT. Since 1996 he has been with the department of Elect. Eng. - Systems at Tel Aviv University. His research interests include information theory (in particular: lattice codes for multi-terminal problems), source coding, communications and statistical signal processing. His book "Lattice coding for signals and networks" was published in 2014.

**in A301 at 2pm**

**Abstract:** Wireless communication networks have to accommodate different types of data traffic with different latency constraints. In particular, delay-sensitive video applications represent an increasing portion of data traffic. On the other hand, modern networks can increase data rates by means of cooperation between terminals or with helper relays. However, cooperation typically introduces additional communication delays and is thus not applicable to delay-sensitive applications. In this work, we analyze the rates of communication that can be attained over an interference network with either transmitter or receiver cooperation, where parts of the messages cannot profit from this cooperation because they are subject to stringent delay constraints.

**in Amphi Opale at 11am **

**Abstract: **

Deep learning is a powerful tool that carries the promise of revolutionizing current computational frameworks. In this talk, I will first present recent work on two problems that lie at the core of making deep learning applicable in a ubiquitous fashion:

1 - Selecting a subset of important features that leads to similar or higher learning performance while reducing the training time, compared to using all available features. Two common categories of feature selection methods are filter methods, which rely on intrinsic properties of the dataset, and wrapper methods, which rely on evaluating the performance of a machine learning model that is retrained for every candidate combination of features. In general, filter methods are computationally efficient while wrapper methods deliver better learning performance. We propose an efficient deep-learning-based wrapper method that exploits the transferability property of deep neural network architectures to avoid excessive retraining, as well as autoencoders to distill important intrinsic properties of the dataset.

2 - Adversarial deep learning attacks that introduce small perturbations to the input data in sensitive directions, which negatively impacts the learning performance. Here, we propose novel strategies to preprocess the input data to defend against adversarial attacks. Our strategies rely on properties of denoising autoencoders, which are demonstrated to be powerful tools for extracting robust representations.

In the second part of the talk, I will present our findings on applying deep learning in wireless communications and networking, and discuss why we believe it is a perfectly fitting tool for the various tasks enabling autonomous communication systems and resource management. This discussion will particularly highlight Purdue's successful journey in the DARPA Spectrum Collaboration Challenge (SC2).

Finally, I will present our initiative at Purdue Engineering for machine learning graduate education, and how, in general, Purdue is currently providing an ideal environment for graduate studies in engineering.

**Short Bio: **

Aly El Gamal is an Assistant Professor at the Electrical and Computer Engineering Department of Purdue University. He received his Ph.D. degree in Electrical and Computer Engineering and M.S. degree in Mathematics from the University of Illinois at Urbana-Champaign, in 2014 and 2013, respectively. Prior to that, he received the M.S. degree in Electrical Engineering from Nile University and the B.S. degree in Computer Engineering from Cairo University, in 2009 and 2007, respectively. His research interests include information theory and machine learning.

Dr. El Gamal has received several awards, including the Purdue Seed for Success Award, the Purdue CNSIP Area Seminal Paper Award, the DARPA Spectrum Collaboration Challenge (SC2) Contract Award and Preliminary Events 1 and 2 Team Awards, and the Huawei Innovation Research Program (HIRP) OPEN Award. He is currently leading the Purdue and Texas A&M team (BAM! Wireless) in DARPA SC2, co-leading the Purdue Engineering initiative for creating a machine learning group and developing graduate courses, and serving as a reviewer for the American Mathematical Society (AMS) Mathematical Reviews.

at 11am in Amphi Grenat

Abstract:

Cloud-based wireless networks, also known as Cloud Radio Access Networks (C-RANs), have emerged as appealing architectures for next-generation wireless/cellular systems, whereby the processing/encoding/decoding is migrated from the local base stations/radio units (RUs) to a central unit (CU) in the "cloud". The network operates via fronthaul digital links connecting the CU and the RUs (which operate as relays). The uplink and downlink are examined from a network information-theoretic perspective, with emphasis on simple oblivious processing at the RUs, which is attractive also from the practical point of view. The analytic approach, applied to simple wireless/cellular models, illustrates the considerable performance gains associated with advanced, network-information-theoretically inspired techniques, carrying also practical implications. An outlook, pointing out interesting theoretical directions and referring also to Fog Radio Access Networks (F-RANs), concludes the presentation.

Professor Shlomo Shamai is a distinguished professor at the Department of Electrical Engineering at the Technion - Israel Institute of Technology. Professor Shamai is an information theorist and winner of the 2011 Shannon Award. Shlomo Shamai received the B.Sc., M.Sc., and Ph.D. degrees in electrical engineering from the Technion in 1975, 1981 and 1986, respectively. During 1975-1985 he was with the Israeli Communications Research Labs. Since 1986 he has been with the Department of Electrical Engineering at the Technion - Israel Institute of Technology, where he is now the William Fondiller Professor of Telecommunications. His research areas cover a wide spectrum of topics in information theory and statistical communications. Prof. Shamai is an IEEE Fellow and a member of the International Union of Radio Science (URSI).

**at 1pm in A301**

**Abstract: **

In compressing large files, it is often desirable to be able to efficiently recover and update short fragments of data. Classical compression schemes such as Lempel-Ziv are optimal in terms of compression rate but local recovery of even one bit requires us to decompress the entire codeword. Let us define the local decodability of a compression scheme to be the minimum number of codeword bits that need to be read in order to recover a single bit of the message/raw data. Similarly, let the update efficiency be the minimum number of codeword bits that need to be modified in order to update a single message bit. For a space efficient (entropy-achieving) compression scheme, what is the smallest local decodability and update efficiency that we can achieve? Can we simultaneously achieve small local decodability and update efficiency, or is there a tradeoff between the two? All this, and more (including a new compression algorithm) will be discussed in the talk.
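A toy illustration of the trade-off described above (not the talk's scheme): compressing in fixed-size blocks sacrifices some compression rate but gains local decodability, since recovering one byte only requires decompressing its block. The block size and data here are illustrative choices.

```python
import zlib

BLOCK = 1024  # block size in bytes (illustrative choice)

def compress_blocks(data: bytes) -> list[bytes]:
    # Compress each fixed-size block independently.
    return [zlib.compress(data[i:i + BLOCK]) for i in range(0, len(data), BLOCK)]

def read_byte(blocks: list[bytes], pos: int) -> int:
    # Local recovery: decompress only the block containing position `pos`,
    # instead of the whole codeword as in classical Lempel-Ziv.
    return zlib.decompress(blocks[pos // BLOCK])[pos % BLOCK]

data = bytes(range(256)) * 16          # 4096 bytes of sample data
blocks = compress_blocks(data)
assert read_byte(blocks, 3000) == data[3000]
```

Shrinking `BLOCK` improves local decodability but hurts the compression rate; the talk asks how good this trade-off can fundamentally be for entropy-achieving schemes.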

Nokia Bell Labs France

**Thursday 23 May, 2pm, Télécom ParisTech, Amphi B310, 46 rue Barrault, Paris 13**

Thanks to the continuous evolution of semiconductor process technologies, tens or hundreds of processors of different types can nowadays be integrated into single chips. These heterogeneous Multi-Processor Systems-on-Chip (MPSoCs) have emerged in the last two decades as an important class of platforms for networking, communications, signal-processing and multimedia among other applications.

To fully exploit their computational power, new programming tools are needed that can assist engineers in achieving high software productivity. An MPSoC compiler is a complex tool-chain that aims at tackling the problems of application modeling, platform description, software parallelization, software distribution and code generation in a single framework. This seminar discusses various aspects of compilers for heterogeneous MPSoC platforms, using the well-known single-core C compiler technology as a baseline for comparison. The seminar is mainly intended as an educational presentation focused on the most important ingredients of the MPSoC compilation process and open research issues.

**at 1pm in A301**

**Abstract: **

Point lattices are the Euclidean counterparts of error-correcting codes. One of the most challenging lattice problems is the decoding problem: given a point in Euclidean space, find the closest lattice point. As this is a classification problem, it seems natural to tackle it via neural networks. Indeed, following the deep learning revolution that started in 2012, neural networks have been commonly used for classification tasks in many fields. The work presented in this talk is a first theoretical step towards a better understanding of the neural network architectures best suited to the decoding problem.

Namely, we first establish a duality between the decoding problem and a piecewise linear function with a number of affine pieces growing exponentially in the space dimension.

We then prove that deep neural networks need a number of neurons growing only polynomially in the space dimension to compute some of these functions, whereas this complexity is exponential for any shallow neural network.

This has two consequences:

1 - In some cases, deep neural networks can be used to implement low-complexity decoding algorithms.

2 - Euclidean lattices provide good examples to illustrate the benefit of depth over width in representing functions with neural networks.
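To make the decoding problem concrete, here is a minimal sketch (not from the talk) for the one lattice where decoding is trivial: in the integer lattice Z^n, the closest lattice point is obtained by rounding each coordinate. For a general lattice given by a basis matrix, the problem is hard, which is what motivates the neural-network analysis above.

```python
import numpy as np

def decode_Zn(y):
    # Closest-point decoding in the integer lattice Z^n: round coordinatewise.
    return np.round(y).astype(int)

y = np.array([0.4, -1.7, 2.2])
assert (decode_Zn(y) == np.array([0, -2, 2])).all()
```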

CEA-LIST, Saclay

**Thursday 18 April, 2pm, Télécom ParisTech, Amphi OPALE, 46 rue Barrault, Paris 13**

Artificial Intelligence, and more particularly Deep Learning, are enablers of new applications and allow computing systems to better interact with the real world by extracting information from signals, images and sounds. But they are very demanding in computing power, and therefore energy, and pose challenges for use in embedded systems. This presentation will cover some use cases, tools, approaches and hardware for selecting neural networks and hardware implementations tuned for embedded applications (Deep Learning at the edge).

**Room C48, 2pm, Télécom ParisTech, 46 rue Barrault, Paris 13**

Over the last decades, data rates in telecommunication standards have increased exponentially thanks to major innovations at many levels in radio transceivers. The 5th generation of mobile standards (5G) targets even higher data rates than previous cellular standards. The increase in data rates is made possible by using larger transmission bandwidths (up to several hundred MHz) and modulation schemes with high spectral efficiency, such as OFDM-based modulations. One drawback of these waveforms is that they are very sensitive to any non-linearity in the transceiver and require highly linear transmitters to maintain signal quality and spectral purity. In parallel, one of the other major goals of 5G is to reduce energy consumption, or maintain equal energy consumption while providing more services.

In radio transceivers, there is one component for which these two constraints pull in completely opposite design directions: the power amplifier (PA). In recent years, the digital predistortion (DPD) technique has been developed to improve the PA linearity/efficiency trade-off. This technique consists in distorting the signal before amplification with the inverse characteristic of the PA, in order to correct the distortions caused by the PA during amplification. Although this technique has proven itself over the past generations of mobile telephony, it reaches its limits for 5G because of the very wide bandwidths.

In this talk we will review the main recent and promising solutions for implementing efficient predistortion techniques for future wideband transceivers. We will cover the most common approaches for PA modeling and for computing predistorter models, and discuss the implementation of predistortion for RF transmitters, highlighting the advantages and drawbacks of different approaches. We will elaborate on promising solutions addressing the bandwidth limitations of digital predistortion.

Dependable and Scalable FPGA Computing Using HDL-based Checkpointing.

Outline:

- Motivation
- Related work
- Main contributions:
  - Reduced set of state-holding elements
  - Consistent snapshots in FPGA checkpointing
  - Checkpointing architecture
  - Static analysis of HDL source code
  - Checkpointing flow for the resilience of FPGA computing
  - Multitasking on FPGA using HDL-based checkpointing
  - Hardware task migration on heterogeneous FPGA computing using HDL-based checkpointing
  - Python-based tool for checkpointing insertion
- Conclusion

For the full abstract of the research, please see the attachment.

**at 1.30pm in A301**

**Abstract**

Caching systems have been employed to reduce backhaul traffic and improve data availability. In many caching applications, such as sensor networks and satellite broadcasting, the data at the remote sources are dynamic, and changes cannot be pushed to the cache immediately due to communication resource constraints. When a user requests data from the cache, the data may therefore be outdated. To guarantee data freshness from the perspective of the caching users, optimal cache update strategies need to be designed. Previous results reveal that when the update duration for each file is a constant, identical among the files, the optimal inter-update interval for each file should be proportional to the square root of its popularity.

In this talk, we address the following problem: when the update duration for each file is non-uniform and age-related, how do we design file update strategies such that the overall freshness of the caching system can be guaranteed? We formulate this as a relaxed optimization problem and propose the corresponding scheduling strategy. We find that when the update duration is dynamic and age-related, the AoI-optimal inter-update interval may be far from the square-root law of previous results.
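A minimal sketch of the square-root law quoted above for the constant-duration baseline (variable names and the single-link capacity normalization are illustrative assumptions, not from the talk): intervals are set proportional to the square root of each file's popularity and scaled so the update link meets a given load budget.

```python
import numpy as np

def sqrt_law_intervals(popularity, d=1.0, load=1.0):
    # Inter-update interval T_i proportional to sqrt(p_i), scaled so the
    # update link spends a fraction `load` of its time updating:
    #   sum_i d / T_i = load  (d = constant update duration per file)
    p = np.asarray(popularity, dtype=float)
    c = d * np.sum(1.0 / np.sqrt(p)) / load
    return c * np.sqrt(p)

T = sqrt_law_intervals([0.5, 0.3, 0.2])
# the update-time budget is exactly met
assert abs(np.sum(1.0 / T) - 1.0) < 1e-9
```

The talk's point is that once update durations become age-related, this simple allocation can be far from AoI-optimal.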

**Amphi OPALE, 2pm, Télécom ParisTech, 46 rue Barrault, Paris 13**

Physical modelling of the wireless propagation channel, based on an accurate representation of the environment, plays a major role in the assessment and optimization of new 5G network topologies, including both the radio access and the last-mile xhaul segments. This talk will show how physical propagation techniques are employed in industry today to simulate millimeter-wave radio links, design ultra-dense networks, or predict massive MIMO performance in real deployment scenarios. Complementary to on-field trials, simulation contributes to the elaboration of mobile operators' strategies. New infrastructure investments and new antenna installations are decided after determining the best scenario. Several use cases will be illustrated, such as 5G network densification, millimeter-wave mesh backhauling, and FWA (Fixed Wireless Access) planning. Finally, it will be briefly explained how the ray-tracing technique has been extended and is now applied to the investigation of the sub-THz spectrum (a promising 6G wireless candidate).

Robot Operating System (ROS) is a well-known communication layer for writing robot software. Its goal is to simplify the hard task of creating truly robust, general-purpose and collaborative robot software. Collaborative means that ROS was designed specifically to let different groups, each one expert in a specific field, collaborate and build upon each other's work using a common convention.

In this seminar we will first explore the ROS world and how to use it for practical projects. Then we will have a look at the most used tools that allow the simulation and the verification of various rover actors, such as sensors, actuators, mechanical components, etc. Modeling (of both the rover's components and the external environment) is strongly used in this context, as is simulation. In the LabSoc domain, we will see that the described framework can be well integrated with TTool.

**at 1pm in A301**

**Abstract:**

Distributed computing has emerged as one of the most important paradigms to speed up large-scale data analysis tasks such as machine learning. A well-known computing framework, MapReduce, can deal with tasks of large data size. In such systems, the tasks are typically decomposed into map and reduce functions, where the map functions can be computed by different nodes across the network, and the final outputs are computed by combining the outputs of the map functions with reduce functions. One important problem in such systems is how to conduct computations in the presence of straggling nodes, which compute too slowly or fail altogether.
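The map/reduce decomposition described above can be sketched with a toy word-count (this illustrates the general framework, not the talk's CCS scheme; the shard data are made up): map functions run locally on the file shards held by each node, and a reduce function combines the intermediate outputs.

```python
from collections import Counter
from functools import reduce

# File shards spread over three (simulated) nodes.
shards = [["a", "b", "a"], ["b", "c"], ["a", "c", "c"]]

def map_fn(shard):
    # Computed locally at each node on its own shard.
    return Counter(shard)

def reduce_fn(x, y):
    # Combines intermediate map outputs into the final result.
    return x + y

result = reduce(reduce_fn, map(map_fn, shards))
assert result["a"] == 3 and result["c"] == 3
```

Coded computing adds redundancy across the map computations so the reduce step can finish even when some nodes straggle.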

In this talk, the optimal storage-computation tradeoff is characterized for a MapReduce-like distributed computing system with straggling nodes, where only a part of the nodes can be utilized to compute the desired output functions. The result holds for arbitrary output functions and thus generalizes previous results that were restricted to linear functions. Specifically, in this work we propose a new information-theoretic converse and a new matching coded computing scheme, which we call coded computing for straggling systems (CCS).

**Amphi OPALE, 2pm, Télécom ParisTech, 46 rue Barrault, Paris 13**

When we browse the internet, we expect that our social network identities and web activities will remain private. Unfortunately, in reality, users are constantly tracked on the internet. As web tracking technologies become more sophisticated and pervasive, there is a critical need to understand and quantify web users' privacy risk. In other words, what is the likelihood that users on the internet can be uniquely identified from their online activities?

This talk provides an information theoretic perspective on web privacy by considering two main classes of privacy attacks based on the information they extract about a user. (i) Attributes capture the user's activities on the web and could include its browsing history or its memberships in groups. Attacks that exploit the attributes are called “fingerprinting attacks,” and usually include an active query stage by the attacker. (ii) Relationships capture the user's interactions with other users on the web such as its friendship relations on a certain social network. Attacks that exploit the relationships are called “social network de-anonymization attacks.” For each class, we show how information theoretic tools can be used to design and analyze privacy attacks and to provide explicit characterization of the associated privacy risks.

**in A301 at 1pm **

**Abstract:**

The entropy power inequality (EPI), first introduced by Shannon (1948), states that the entropy power of the sum of two independent random variables X and Y is not less than the sum of the entropy powers of X and Y. It finds many applications and has received several proofs and generalizations, in particular for dependent variables X and Y. We propose new conditions under which the EPI holds for dependent summands and discuss the implications of this result. This is joint work with Mohammad Hossein Alamatsaz.
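In standard notation (the usual one-dimensional statement, with differential entropy $h$), the entropy power of a random variable $X$ and the EPI for independent $X$ and $Y$ read:

$$N(X) \triangleq \frac{1}{2\pi e}\, e^{2h(X)}, \qquad N(X+Y) \;\ge\; N(X) + N(Y),$$

with equality when $X$ and $Y$ are Gaussian with proportional variances; the talk's question is when the inequality survives dependence between the summands.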

The well-known uncertainty principle used in physics is based on the Kennard-Weyl inequality (1928) and was strengthened in terms of Shannon's entropy, leading to the entropic uncertainty principle (EUP). The EUP was conjectured by Hirschman (1957) and finally proved by Beckner (1975), based on Babenko's inequality with optimal constants. Beckner's proof of Babenko's inequality is extremely difficult, and the resulting derivation of the EUP is indirect (via Rényi entropies). A simple proof recently published in Annals of Physics (2015) turns out to be very questionable. We give a simple proof of a weaker, "local" EUP using the Hermite decomposition. This is joint work with Olivier Rioul.

NOKIA Bell Labs

**Amphi OPALE, 2:30pm, Télécom ParisTech, 46 rue Barrault, Paris 13**

Located at the meeting point between telecom operators and over-the-top service providers, metro networks are particularly well suited for introducing a radical acceleration of dynamics in optical networks, leveraging elastic building blocks such as transponders and optical nodes. In this talk, we review innovative solutions that could be used to address some of the challenges of metro networks in the short to medium term (e.g., 2-5 years from now). In particular, we discuss how to mitigate filter impairments thanks to monitoring. We then highlight how machine learning could automate optical networks.

Summary:

Recent wireless communication standards (4G, 5G) need dynamic adjustments of transmission parameters (e.g., modulation, bandwidth), making traditional static scheduling approaches less and less efficient.

To schedule these applications, we designed Odyn, a hybrid approach for the scheduling and memory management of periodic dataflow applications on parallel, heterogeneous, Non-Uniform Memory Architecture (NUMA) platforms. In Odyn, the ordering of tasks and memory allocation are distributed and computed simultaneously at run-time for each Processing Element. We also propose a mechanism to prevent deadlocks caused by attempts to allocate buffers in size-limited memories.

Fellow of Academy of Engineering Singapore, Fellow of IEEE, President, IEEE Circuits and Systems Society

**Comelec general seminar: "Energy Efficient System Architecture for Devices in Artificial Intelligence-of-Things"**

**Amphi THEVENIN, 2pm, Télécom ParisTech, 46 rue Barrault, Paris 13**

Abstract

Internet-of-Things (IoT) is the inter-networking of physical devices, vehicles, buildings, and objects with embedded sensors. It is estimated that by 2020 there will be more than 34 billion IoT devices connected to the Internet, and nearly $6 trillion will be spent on IoT solutions over the next five years. Artificial Intelligence (AI), on the other hand, is intelligence demonstrated by machines that work and react like humans. The combination of AI and IoT gives birth to the Artificial Intelligence-of-Things (AIoT). AIoT devices differ from IoT devices in that they not only sense, store and transmit data, but also analyze and act on the data, i.e. an AIoT device makes decisions or performs tasks similar to what a person could do. The enabling technology for AIoT devices is embedded AI. This talk will cover an energy-efficient system architecture that utilizes an event-driven signal representation. The event-driven signal representation enables data compression at the input source, which greatly reduces the power needed for data transmission and processing. We will show by examples that the event-driven system significantly improves energy efficiency and is well suited for AIoT applications.

Biography:

Dr. Yong Lian received the B.Sc. degree from the College of Economics & Management of Shanghai Jiao Tong University in 1984 and the Ph.D. degree from the Department of Electrical Engineering of the National University of Singapore (NUS) in 1994. His research interests include low power techniques, continuous-time signal processing, biomedical circuits and systems, and computationally efficient signal processing algorithms. His research has been recognized with more than 20 awards, including the 1996 IEEE Circuits and Systems Society's Guillemin-Cauer Award, the 2008 Multimedia Communications Best Paper Award from the IEEE Communications Society, the 2011 IES Prestigious Engineering Achievement Award, the 2013 Outstanding Contribution Award from the Hua Yuan Association and Tan Kah Kee International Society, and the 2015 Design Contest Award at the 20th International Symposium on Low Power Electronics and Design. He is also the recipient of the National University of Singapore Annual Teaching Excellence Awards in 2009 and 2010, respectively.

Dr. Lian is the President of the IEEE Circuits and Systems (CAS) Society, a member of the IEEE Fellow Committee, a member of the IEEE Biomedical Engineering Award Committee, and a member of the Steering Committee of the IEEE TBioCAS. He was the Editor-in-Chief of the IEEE TCAS-II from 2010 to 2013, Vice President for Publications of CASS, Vice President for the Asia Pacific Region, Chair of the BioCAS and DSP TCs, and founder of several conferences including BioCAS, ICGCS, and PrimeAsia.

Screaming Channels: When Electromagnetic Side Channels Meet Radio Transceivers

From how far can we mount electromagnetic side-channel attacks? Are we limited to close physical proximity, or can we go much further? During this seminar we will see how mixed-signal chips with radios (e.g., BLE) unintentionally transmit side-channel information together with the intended radio signals, at a considerable distance. We will show how we discovered this type of leak and how we can exploit it to break AES at several meters (we have a proof of concept at 10 m in an anechoic room).

The session will be interactive, with some demos.

LTCI / Comelec / SSH

**Amphi OPALE, 2pm, Télécom ParisTech, 46 rue Barrault, Paris 13**

**"Fault injection attacks and countermeasures: past, present, future"**

Fault injection attacks are extremely powerful techniques for extracting secrets from an integrated circuit. The very first countermeasures, developed some twenty years ago, laid the foundations of protection strategies. This seminar will begin by presenting them, and we will propose a classification. The cost of each countermeasure will also be compared, together with an analysis of its security level, both against the threats that existed at the time of its publication and against more recent threats.

In recent years, intentional electromagnetic perturbations have attracted great interest as a means of fault injection, both for practical reasons and, above all, for their potential to bypass certain protection strategies. This creates a need to understand precisely the impact of such injections inside integrated circuits. However, state-of-the-art characterization and modeling methods have proven incomplete; in the second part of this seminar we will present the improvements we have made to them and the results thus obtained.

Title: Performance Evaluation of NoCs Using Network Calculus

Keywords: NoCs, Wormhole routing, backpressure, timing analysis, delay bounds

Summary:

Conducting worst-case timing analyses of wormhole Networks-on-Chip (NoCs) is fundamental to guaranteeing real-time requirements, but it is known to be challenging due to the complex congestion patterns that can occur. In that respect, we have introduced a new buffer-aware timing analysis of wormhole NoCs based on Network Calculus. Our main idea is to account for flow-serialization phenomena along the path of a flow of interest (foi), by paying the bursts of interfering flows only at the first convergence point and by refining the interference patterns of the foi to account for the limited buffer size. Moreover, we aim to cover a broad class of wormhole NoCs. The derived delay bounds are analyzed and compared to available results from existing approaches based on Scheduling Theory and on Compositional Performance Analysis (CPA). This comparison highlights a noticeable improvement in delay-bound tightness over the CPA approach, as well as the inherently safe bounds of our proposal compared to Scheduling Theory approaches. Finally, we perform experiments on a manycore platform to compare our timing-analysis predictions with experimental data and assess their tightness.
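The "pay bursts only once" principle behind such serialization-aware bounds can be sketched with textbook Network Calculus formulas. The Python sketch below (a generic token-bucket / rate-latency model with made-up rates and latencies, not the authors' buffer-aware analysis) compares a naive sum of per-hop delay bounds with the end-to-end bound obtained by concatenating service curves:

```python
def delay_bound(sigma, rho, R, T):
    """Delay bound for a (sigma, rho) token-bucket arrival curve served
    by a rate-latency service curve beta(t) = R * max(0, t - T)."""
    assert rho <= R, "flow must be sustainable by the server"
    return T + sigma / R

def e2e_delay_bound(sigma, rho, hops):
    """End-to-end bound: rate-latency curves concatenate into
    (min R_i, sum T_i), so the burst sigma is paid only once."""
    R = min(Ri for Ri, _ in hops)
    T = sum(Ti for _, Ti in hops)
    return delay_bound(sigma, rho, R, T)

if __name__ == "__main__":
    hops = [(1.0, 2.0), (0.8, 1.5), (1.2, 1.0)]  # (rate, latency) per router
    sigma, rho = 4.0, 0.5                        # burst size, sustained rate
    e2e = e2e_delay_bound(sigma, rho, hops)
    naive = sum(delay_bound(sigma, rho, R, T) for R, T in hops)
    print(e2e, naive)
```

Because the concatenated service curve pays the burst `sigma` only once, the end-to-end bound is strictly tighter than the naive sum of per-hop bounds.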

**in A301 at 1.30pm**

**Abstract:**

Physically unclonable functions (PUF) are used in various applications

requiring robust authentication. These systems exploit unpredictable

process variations in electronic circuits. These process variations

uniquely identify the produced hardware, which exhibits distinct properties, for example in terms of propagation delays inside the circuit. By measuring and exploiting these properties, one can determine a "fingerprint" of the circuit which cannot be physically replicated.

This fingerprint can then be used, for instance, to produce a

cryptographic key. The advantage is that this key does not need to be

explicitly stored, which reduces the security risk. Other applications

include challenge-response protocols, where the responses are determined

from the physical properties of the circuit.

For a given type of PUF, the Loop-PUF, these delay propagation

differences can be modeled by n Gaussian random variables. A challenge

corresponds to a vector of +/- 1 values, and an identifier bit is the

sign of the signed sum of the Gaussian realizations, with signs

corresponding to those of the challenge vector. We address the following question: what is the joint entropy of these sign bits?

The exact calculation of the maximum entropy, when considering the set

of all possible challenges, can be carried out only for very small

values of n. We give a combinatorial method that yields the exact values for n = 3 and 4. For n ≥ 5, the approach soon becomes intractable and one must resort to numerical computation.

The value of the maximum entropy can be estimated reliably by defining

equivalence classes of challenges corresponding to the same value of

joint probabilities. This method was found to be numerically tractable

for values of n up to 7. Asymptotic expressions for the maximum entropy are derived using the theory of threshold Boolean functions.
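The model lends itself to direct simulation. The sketch below (a hypothetical Monte-Carlo illustration, not the speakers' combinatorial method) estimates the joint entropy of the Loop-PUF sign bits for n = 3, keeping one challenge per antipodal pair {c, -c}, since c and -c always yield complementary bits:

```python
import itertools
import math
import random

def loop_puf_entropy(n, challenges, trials=50_000, seed=0):
    """Monte-Carlo estimate (in bits) of the joint entropy of the sign
    bits b_c = sign(sum_i c_i * X_i), with X_i i.i.d. standard Gaussian."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(trials):
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]
        bits = tuple(
            1 if sum(c * xi for c, xi in zip(ch, x)) > 0 else 0
            for ch in challenges
        )
        counts[bits] = counts.get(bits, 0) + 1
    return -sum(k / trials * math.log2(k / trials) for k in counts.values())

if __name__ == "__main__":
    n = 3
    # one challenge per antipodal pair {c, -c}: fix the first sign to +1
    challenges = [(1,) + t for t in itertools.product((1, -1), repeat=n - 1)]
    print(round(loop_puf_entropy(n, challenges), 2))
```

The same idea — counting empirical frequencies of response patterns — underlies the equivalence-class approach, which groups challenges with identical joint probabilities to keep the computation tractable.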

Laboratoire de Physique des Lasers, Université Paris 13

**Amphi JADE, 14H, Télécom ParisTech, 46 rue Barrault, Paris 13**

**"Optical frequency metrology and compact frequency combs"**

Compact frequency combs are metrological tools intended to be integrated into embedded systems dedicated, among other things, to clock generation for space applications, to precision spectroscopy and fundamental tests requiring frequency references, and to the generation of ultra-stable optical/microwave/terahertz waves for telecommunication applications (radar, wireless communication, coherent optical communications, etc.).

In this seminar I will first present the main principles of time-frequency metrology: the experimental tools (frequency reference, transfer cavity) and the mathematical tools (Allan variance) needed for the metrological characterization of optical oscillators, illustrated through two examples of laser frequency stabilization techniques: one on an ultra-stable cavity (Pound-Drever-Hall) and the other on a molecular transition using saturated-absorption spectroscopy in a cell.

I will then present my research on compact frequency combs and results on the stabilization and referencing of passively mode-locked semiconductor lasers to a fiber transfer cavity referenced to an acetylene transition detected by saturated absorption. The advantages of a hybrid stabilization combining optoelectronic stabilization and comb narrowing by optical injection will be discussed.

**at 11am in F900**

Abstract:

We consider transmission over a cloud radio access network (CRAN) focusing

on the framework of oblivious processing at the relay nodes (radio units),

i.e., the relays are not cognizant of the users' codebooks.

This approach is motivated by future wireless communications

(5G and beyond) and the theoretical results connect to a variety

of different information theoretic models and problems.

First, it is shown that relaying à la Cover-El Gamal, i.e.,

compress-and-forward with joint decompression and

decoding, which reflects 'noisy network coding,' is optimal.

The penalty of obliviousness is also demonstrated to be

at most a constant gap, when compared to cut-set bounds.

Naturally, due to the oblivious (nomadic) constraint, the CRAN problem is intimately connected to Chief Executive Officer (CEO) source coding

under a logarithmic loss distortion measure.

Furthermore, we identify and elaborate on some interesting

connections with the distributed information bottleneck model for which we

characterize optimal tradeoffs between rates (i.e., complexity) and

information (i.e., accuracy) in the discrete and vector Gaussian frameworks.

Further connections to 'information combining' and 'common reconstruction'

are also pointed out. In the concluding outlook, some interesting problems

are mentioned such as the characterization of the optimal input distributions

under users' power limitations and rate-constrained compression at the

relay nodes.
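As a toy instance of such complexity-accuracy tradeoffs, the scalar Gaussian information bottleneck admits a closed form. The sketch below (a standard single-user special case with made-up SNR values, not the distributed vector-Gaussian characterization of the talk) evaluates the maximum relevance achievable at each compression rate:

```python
import math

def ib_relevance(snr, R):
    """Scalar Gaussian information bottleneck: maximum relevance I(X;U)
    (in nats) achievable with complexity I(Y;U) <= R nats, for the channel
    Y = sqrt(snr) * X + N with X, N standard Gaussian and a Gaussian U."""
    return 0.5 * math.log((1.0 + snr) / (1.0 + snr * math.exp(-2.0 * R)))

if __name__ == "__main__":
    snr = 10.0
    cap = 0.5 * math.log(1.0 + snr)  # I(X;Y): the relevance ceiling
    for R in (0.0, 0.5, 1.0, 2.0, 8.0):
        # relevance grows with R and saturates at I(X;Y)
        print(f"R={R:.1f}  I(X;U)={ib_relevance(snr, R):.3f}  cap={cap:.3f}")
```

At R = 0 nothing can be conveyed, and as R grows the relevance saturates at I(X;Y), mirroring the rate-constrained-compression regime mentioned in the outlook.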

---------------------------------------------------------------------------

Joint work with: I.E. Aguerri (Paris Research Center, Huawei France)

A. Zaidi (Universite Paris-Est, Paris) and G. Caire (USC-LA and TUB, Berlin)

The research is supported by the European Union's Horizon 2020 Research And

Innovation Programme: no. 694630.

tomb (Linux only) is a cryptographic utility based on the Linux Unified Key Setup (LUKS)

standard and the disk-encryption subsystem of the Linux kernel (the dm-crypt device mapper).

It can be used to encrypt directories, turning them into binary files.

It is very well designed and easy to use. pass is a password manager with many interesting characteristics: it is open source, does not rely on (more or less trustworthy) third-party servers, integrates with git and tomb, is supported by browser extensions (Firefox, Chrome, etc.), and more. This seminar will briefly present these two tools and how they can be used jointly to best secure your personal data. We will then demonstrate them by installing all the needed components, generating the keys, and using them in real use cases.

TL2E - Sorbonne Université / Laboratoire d'Electronique & Electromagnétisme (L2E)

**Amphi B310, 14H, Télécom ParisTech, 46 rue Barrault, Paris 13**

**"Spatial Data Focusing: an alternative to Beamforming for geocasting scenarios"**

The capability of an antenna to focus radiated signals into a well defined direction is fundamentally limited by its size (the smaller, the less directive), as the result of diffraction or, equivalently, owing to the properties of the Fourier transform. This applies to single antennas as well as to arrays of multiple antennas.

In this seminar, a Spatial Data Focusing technique is introduced as an alternative scenario to overcome the beamwidth limitations of antenna arrays due to their finite aperture size. The proposed approach aims to focus the transmitted data rather than the transmitted power. This scheme enables wireless broadcast of information to specific spatial locations, using fewer antenna elements than classical beamforming techniques. Different configurations for implementing this scheme will be discussed, and it will be shown that focusing the data is spatially more selective than focusing the power.
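The diffraction limit that classical beamforming faces can be quantified numerically: for a uniform linear array, the half-power beamwidth shrinks roughly as 1/(N·d). The sketch below uses standard textbook array-factor formulas with made-up element counts (it illustrates the beamforming baseline, not the Spatial Data Focusing scheme itself):

```python
import math

def array_factor(n, d_over_lambda, theta):
    """Normalized |AF| of an n-element uniform broadside linear array;
    spacing d in wavelengths, theta measured from broadside (radians)."""
    psi = 2.0 * math.pi * d_over_lambda * math.sin(theta)
    if abs(math.sin(psi / 2.0)) < 1e-12:
        return 1.0  # main-lobe peak (limit of the ratio below)
    return abs(math.sin(n * psi / 2.0) / (n * math.sin(psi / 2.0)))

def half_power_beamwidth(n, d_over_lambda, steps=100_000):
    """Half-power (-3 dB) beamwidth in radians, found by a linear scan."""
    for k in range(1, steps):
        theta = k * (math.pi / 2.0) / steps
        if array_factor(n, d_over_lambda, theta) < 1.0 / math.sqrt(2.0):
            return 2.0 * theta
    return math.pi

if __name__ == "__main__":
    # half-wavelength spacing: the beamwidth narrows as elements are added
    for n in (4, 8, 16):
        print(n, round(math.degrees(half_power_beamwidth(n, 0.5)), 2))
```

Halving the beamwidth requires roughly doubling the number of elements, which is precisely the scaling that focusing data instead of power seeks to escape.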