Seminar archives

03/10/2019 [ComNum] Joint State Sensing and Communications: Theory and Vehicular Applications

Speaker(s) & affiliation(s):

Prof. Mari Kobayashi (TUM & CentraleSupelec)

Seminar presentation:

In A301 at 10am

Abstract:

We consider a communication setup where transmitters wish to simultaneously sense network states and convey messages to intended receivers. The scenario is motivated by joint radar and vehicular communications, where the radar and data applications share the same bandwidth. First, I present a theoretical framework to characterize the fundamental limits of such a setup for memoryless channels with i.i.d. state sequences. Then, I present our recent work on joint radar and communication using Orthogonal Time Frequency Space (OTFS) modulation. Although restricted to a simplified scenario with a single target, our numerical examples demonstrate that the two modulations considered provide radar estimation as accurate as Frequency Modulated Continuous Waveform (FMCW), a typical automotive radar waveform, while providing a non-negligible communication rate for free.


Contact(s) :

23/09/2019 [ComNum] New Paradigms for 6G Wireless Communications

Speaker(s) & affiliation(s):

Andrea Goldsmith

Seminar presentation:

At 10:15am in Amphi Grenat

Abstract:  

Wireless technology has enormous potential to change the way we live, work, and play over the next several decades. Future wireless networks will support 100 Gbps communication between people, devices, and the “Internet of Things,” with high reliability and uniform coverage indoors and out. New architectures including edge computing will drastically enhance efficient resource allocation while also reducing latency for real-time control. The shortage of spectrum will be alleviated by advances in massive MIMO and mmW technology, and breakthrough energy-efficiency architectures, algorithms and hardware will allow wireless networks to be powered by tiny batteries, energy-harvesting, or over-the-air power transfer. There are many technical challenges that must be overcome in order to make this vision a reality. This talk will describe our recent research addressing some of these challenges, including new modulation and detection techniques robust to rapidly time-varying channels, blind MIMO decoding strategies, machine learning equalization and source-channel coding, as well as “fog”-optimization of resource allocation in cellular systems.

Biography:

Andrea Goldsmith is the Stephen Harris professor in the School of Engineering and a professor of Electrical Engineering at Stanford University. She co-founded and served as Chief Technical Officer of Plume WiFi and of Quantenna (QTNA), and she currently serves on the Board of Directors for Crown Castle (CCI) and Medtronic (MDT). She has also held industry positions at Maxim Technologies, Memorylink Corporation, and AT&T Bell Laboratories. Dr. Goldsmith is a member of the National Academy of Engineering and the American Academy of Arts and Sciences, a Fellow of the IEEE and of Stanford, and has received several awards for her work, including the IEEE Sumner Award, the ACM Athena Lecturer Award, the IEEE ComSoc Edwin H. Armstrong Achievement Award, the National Academy of Engineering Gilbreth Lecture Award, the Women in Communications Engineering Mentoring Award, and the Silicon Valley/San Jose Business Journal's Women of Influence Award. She is the author of the book "Wireless Communications" and co-author of the books "MIMO Wireless Communications" and "Principles of Cognitive Radio," all published by Cambridge University Press, as well as an inventor on 29 patents. Her research interests are in information theory and communication theory, and their application to wireless communications and related fields. She received the B.S., M.S. and Ph.D. degrees in Electrical Engineering from U.C. Berkeley.

Contact(s) :

Wigger Michele

17/09/2019 6Gwireless: Wireless Networks Empowered by Reconfigurable Intelligent Surfaces

Speaker(s) & affiliation(s):

Marco Di Renzo

L2S, CentraleSupelec

Seminar presentation:

Tuesday 17 September, 2pm, Télécom Paris, Amphi B312, 46 rue Barrault, Paris 13

Future wireless networks are expected to be more than a means for people, mobile devices, and objects to communicate with each other. Future wireless networks will constitute a distributed intelligent communications, sensing, and computing platform. Small cells, massive MIMO, and millimeter-wave communications are three fundamental approaches to meeting the requirements of 5G wireless networks. Their advantages are undeniable. The question, however, is whether these technologies will be sufficient to meet the requirements of future wireless networks that integrate communications, sensing, and computing in a single platform. Wireless networks, in addition, are rapidly evolving towards a software-defined design paradigm, where every part of the network can be configured and controlled via software. In this optimization process, however, the wireless environment remains an uncontrollable factor: it remains unaware of the communication process taking place within it. Apart from being uncontrollable, the environment has a negative effect on communication efficiency: signal attenuation limits network connectivity, multi-path propagation results in fading phenomena, and reflections and refractions from objects are a source of uncontrollable interference. Recently, a brand-new technology, referred to as Reconfigurable Intelligent Surfaces (RISs), was brought to the attention of the wireless community. The wireless future that can be envisioned by using this technology consists of coating every environmental object with man-made reconfigurable surfaces of electromagnetic material (software-defined reconfigurable metasurfaces) that are electronically controlled with integrated electronics and wireless communications.

In contrast to any other technology currently used in wireless networks, the distinctive characteristic of RISs is that they make the environment fully controllable by the telecommunication operators, allowing them to shape and control the electromagnetic response of the objects distributed throughout the network. RISs are a promising but little-understood technology that has the potential to fundamentally change how wireless networks are designed today. In this talk, we will discuss the potential of RISs in 6G wireless networks.

Contact(s) :

Sibille Alain

Document(s) :

12/09/2019 Passive UHF RFID localisation

Speaker(s) & affiliation(s):

Christophe LOUSSERT

MOJIX, France

Seminar presentation:

Passive UHF RFID localisation: state of the art and needed technology breakthroughs

Thursday 12 September, 2pm, Télécom Paris, Amphi SAPHIR, 46 rue Barrault, Paris 13

Passive UHF RFID tags have reached a very low cost level (below €0.05) and are starting to be massively deployed in various sectors of activity (20 billion units sold in 2018). The readers able to acquire the data contained in the tags are mainly handheld devices, with operators scanning the products at a distance of less than 0.5 m, but also fixed devices installed at doorways, which dynamically read tags that pass in front of them.

In the future it will be extremely useful, and is one of the big challenges, to achieve a Real-Time Location System (RTLS) able to carry out the automatic inventory of static tags distributed over a large area (e.g. 100 m²), typically a stockroom with hundreds of meters of shelves containing RFID-tagged products. To that end, an innovative RTLS will be presented in this talk, based on a bistatic system with:
- one single central Rx point, called HotSpot (HS), able to read all tags which have been powered up
- hundreds of distributed wireless transmitters, called Power Nodes (PN), placed inside the shelves and transmitting the maximum allowed power in order to provide sufficient energy to the surrounding tags
- tags operating in backscattering mode toward the HS, when they are powered up

The HS+PN combination enables successful reads in excess of 99.9% (i.e. only a handful of no-reads out of a typical stock of 5000 items) and a localisation accuracy better than 0.5 m for 90% of the tags. The principle used to achieve these results is, for a given tag that in practice has been powered up by typically 10 PN transmitters, to localise it at the PN generating the largest received signal at the HS. However, the accuracy degrades beyond 0.5 m (up to about 1 m) for the remaining 10% of the tags; the reason is still under investigation, although it seems to involve the propagation channel between the PN transmitter and the tag. One promising direction for improvement is machine learning, since a large amount of training data is available, the system typically generating one million tag reads daily. Indeed, it turns out that the well-known KNN algorithm (K Nearest Neighbors), a "lazy" machine learning process, already delivers good results in some RFID applications.
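As an illustration of the KNN idea mentioned above, here is a minimal sketch, assuming purely synthetic signal-strength features and known tag positions (the feature layout and all numbers are hypothetical, not the actual MOJIX system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: for each tag read, the feature vector holds the
# received signal strength seen at the HotSpot for 5 nearby Power Nodes, and
# the label is the known (x, y) shelf position of the tag.
train_features = rng.uniform(-80, -40, size=(1000, 5))   # dBm values
train_positions = rng.uniform(0, 10, size=(1000, 2))     # metres

def knn_locate(query, features, positions, k=5):
    """Estimate a tag position as the average position of its k nearest
    neighbours in signal-strength space (the 'lazy' KNN approach)."""
    dists = np.linalg.norm(features - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return positions[nearest].mean(axis=0)

estimate = knn_locate(train_features[0], train_features, train_positions)
print(estimate.shape)  # a 2-D (x, y) estimate
```

On real data the features would be actual HS measurements per Power Node, and k would be tuned on held-out reads.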


Contact(s) :

Sibille Alain

Document(s) :

09/07/2019 "Random Coupling Model for wireless communication systems"

Speaker(s) & affiliation(s):

Gabriele Gradoni

School of Mathematical Sciences & George Green Institute for Electromagnetics Research

University of Nottingham

United Kingdom

Seminar presentation:

Tuesday 9 July, 11am, Télécom Paris, Amphi JADE, 46 rue Barrault, Paris 13

Based on wave chaos theory, statistical methods have been successfully developed and used to devise simulation tools that characterise the electromagnetic energy flow through large structures with a reasonable computational effort. A specific model, the random coupling model, which describes the high-frequency excitation of irregular environments, is derived and applied to cavity problems of practical interest in wireless systems.  Results are relevant in wireless channels in telecommunications, wavefront shaping in imaging and radars, reverberation chambers in electromagnetic compatibility, and microwave applicators in material processing engineering.


Contact(s) :

Sibille Alain

Document(s) :

05/07/2019 Comelec general seminar: "Cyber Attacks on Internet of Things Sensor Systems for Inference"

Speaker(s) & affiliation(s):

Rick S. Blum

IEEE Fellow, IEEE Signal Processing Society Distinguished Lecturer,

Robert W. Wieseman Endowed Professor of Electrical Engineering

Electrical and Computer Engineering Dept., Lehigh University

Seminar presentation:

Friday 5 July, 2pm, Télécom Paris, Amphi B310, 46 rue Barrault, Paris 13

The Internet of Things (IoT) improves pervasive sensing and control capabilities via the aid of modern digital communication, signal processing and massive deployment of sensors. The employment of low-cost and spatially distributed IoT sensor nodes with limited hardware and battery power, along with the low latency required to avoid unstable control loops, presents severe security challenges. Attackers can modify the data entering or communicated from the IoT sensors, which can have a serious impact on any algorithm using this data for inference. In this talk we describe how to provide tight bounds (with sufficient data) on the performance of the best algorithms trying to estimate a parameter from the attacked data and communications, under any assumed statistical model describing how the sensor data depends on the parameter before attack. The results hold regardless of the estimation algorithm adopted, which could employ deep learning, machine learning, statistical signal processing or any other approach. Example algorithms that achieve performance close to these bounds are illustrated. Attacks that make the attacked data useless for reducing these bounds are also described. These attacks provide a guaranteed attack performance in terms of the bounds, regardless of the algorithms the estimation system employs. References are supplied which provide various extensions to all the specific results presented, and a brief discussion of applications to IEEE 1588 for clock synchronization is provided.

Rick S. Blum received a B.S.E.E. from Penn State in 1984 and an M.S./Ph.D. in EE from the University of Pennsylvania in 1987/1991. From 1984 to 1991 he was with GE Aerospace. Since 1991, he has been at Lehigh University. His research interests include signal processing for smart grid, communications, sensor networking, radar and sensor processing. He was an AE for IEEE Trans. on Signal Processing and for IEEE Communications Letters. He has edited special issues for IEEE Trans. on Signal Processing, IEEE Journal of Selected Topics in Signal Processing and IEEE Journal on Selected Areas in Communications. He was a member of the SAM Technical Committee (TC) of the IEEE Signal Processing Society. He was a member of the Signal Processing for Communications TC of the IEEE Signal Processing Society and is a member of the Communications Theory TC of the IEEE Communications Society. He was on the Awards Committee of the IEEE Communications Society. Dr. Blum is a Fellow of the IEEE, an IEEE Signal Processing Society Distinguished Lecturer (twice), an IEEE Third Millennium Medal winner, a member of Eta Kappa Nu and Sigma Xi, and holds several patents. He was awarded an ONR Young Investigator Award and an NSF Research Initiation Award.



Contact(s) :

Wigger Michele, Tchamkerten Aslan

05/07/2019 [ComNum seminar] Caching-Aided Communication for Relay Networks with Cooperation

Speaker(s) & affiliation(s):

Prof. Youlong Wu, Shanghai Tech University

Seminar presentation:

In A301 at 10am

Abstract

Data traffic has grown dramatically over the past few years, mainly due to video streaming. By utilizing the storage capabilities of the network nodes, caching can greatly improve the spectral efficiency and reduce the transmission latency (delay). In this talk, we develop and analyze caching schemes for two types of nontrivial networks: the two-layer relay network and the server-aided D2D network. For both networks, our schemes are order-optimal and can further reduce the transmission delay compared to previously known caching schemes. Moreover, for the two-layer 2-relay network, we surprisingly find that if each relay's caching size equals 38.2% of the full library's size, increasing the relay's caching memory cannot reduce the transmission delay. For the server-aided D2D network, our scheme achieves a cooperation gain and a parallel gain by fully exploiting user cooperation and optimally allocating communication loads among the server and users.

Contact(s) :

04/07/2019 [ComNum seminar] Information-theoretic Privacy: Leakage measures, robust privacy guarantees

Speaker(s) & affiliation(s):

Prof. Lalitha Sankar, Arizona State University

Seminar presentation:

In A301 at 1:15pm



Abstract

Privacy is the problem of ensuring limited leakage of information about sensitive features while sharing information (utility) about non-private features with legitimate data users. Even as differential privacy has emerged as a strong desideratum for privacy, there is an equally strong need for context-aware, utility-guaranteeing approaches in many data sharing settings. This talk approaches this dual requirement using an information-theoretic approach that includes operationally motivated leakage measures, design of privacy mechanisms, and verifiable implementations using generative adversarial models. Specifically, we introduce maximal alpha leakage as a new class of adversarially motivated tunable leakage measures based on accurately guessing an arbitrary function of a dataset conditioned on a released dataset. The choice of alpha determines the specific adversarial action, ranging from refining a belief for alpha = 1 to guessing the best posterior for alpha = ∞; for these extremal values the measure simplifies to mutual information (MI) and maximal leakage (MaxL), respectively. The problem of guaranteeing privacy can then be viewed as one of designing a randomizing mechanism that minimizes (maximal) alpha leakage subject to utility constraints. We then present bounds on the robustness of privacy guarantees that can be made when designing mechanisms from a finite number of samples. Finally, we will briefly present a data-driven approach, generative adversarial privacy (GAP), to design privacy mechanisms using neural networks. GAP is modeled as a constrained minimax game between a privatizer (intent on publishing a utility-guaranteeing learning representation that limits leakage of the sensitive features) and an adversary (intent on learning the sensitive features). Time permitting, we will briefly discuss the learning-theoretic underpinnings of GAP as well as connections to the problem of algorithmic fairness.

This work is a result of multiple collaborations: (a) maximal alpha leakage with J. Liao (ASU), O. Kosut (ASU), and F. P. Calmon (Harvard); (b) robust mechanism design with M. Diaz (ASU), H. Wang (Harvard), and F. P. Calmon (Harvard); and (c) GAP with C. Huang (ASU), P. Kairouz (Google), X. Chen (Stanford), and R. Rajagopal (Stanford).
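The two extremes of maximal alpha leakage mentioned in the abstract (MI at alpha = 1, MaxL at alpha = ∞) can be computed directly for a toy mechanism. A minimal sketch, assuming a hypothetical 3x3 release mechanism P(Y|X) and prior on X (MaxL here uses the known closed form log of the sum of column maxima of P(Y|X)):

```python
import numpy as np

# Toy privacy mechanism: rows index the sensitive data X, columns the
# released data Y, entries are P(Y|X).  (Hypothetical numbers.)
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
px = np.array([0.5, 0.3, 0.2])          # prior on X

# Maximal leakage (alpha = infinity): log2 sum_y max_x P(y|x)
max_leakage = np.log2(P.max(axis=0).sum())

# Mutual information (alpha = 1): sum_{x,y} p(x,y) log2 p(y|x)/p(y)
pxy = px[:, None] * P                   # joint distribution
py = pxy.sum(axis=0)                    # marginal on Y
mi = (pxy * np.log2(P / py)).sum()

print(round(max_leakage, 3), round(mi, 3))
```

As expected, MI never exceeds MaxL, since the alpha leakage family is non-decreasing in alpha.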

Contact(s) :

Wigger Michele

04/07/2019 [ComNum seminar] On the Superiority of Equiangular Tight Frames

Speaker(s) & affiliation(s):

Prof. Ram Zamir, Tel Aviv University

Seminar presentation:

In A301 at 2pm

Abstract:
Over-complete bases (frames) play the role of analog codes for compression (compressed sensing) and redundant signaling (erasure correction). The eigenvalue distribution of a random subset of the frame vectors determines the code performance with noisy observations. For the highly symmetric class of equiangular tight frames (ETFs), we show that if we keep the frame aspect ratio and the selection probability of a frame vector fixed, then the eigenvalue distribution converges asymptotically to the MANOVA distribution. We also show that the MANOVA distribution gives a lower bound on the random-subset d-th moment, for d = 1, 2, 3 and 4, for all frames of a fixed aspect ratio. We conjecture that MANOVA also gives an asymptotic lower bound for the "inverse moment" (d = -1), i.e., for the noise enhancement under subset inversion. Thus, the ETF is the most robust analog code for noisy compressed sensing and redundant signaling. Joint work with Marina Haikin and Matan Gavish.
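The objects in this abstract are concrete enough to play with numerically. A minimal sketch, assuming the 2x3 "Mercedes-Benz" frame (the smallest real ETF) and an arbitrary selection probability; it checks tightness and computes the eigenvalue spectrum of one random subset's Gram matrix:

```python
import numpy as np

# The "Mercedes-Benz" frame: 3 unit vectors in R^2, 120 degrees apart,
# the simplest real equiangular tight frame (aspect ratio 2/3).
angles = np.pi / 2 + 2 * np.pi * np.arange(3) / 3
F = np.stack([np.cos(angles), np.sin(angles)])   # 2 x 3 frame matrix

# Tightness: F F^T is a multiple of the identity (frame constant 3/2).
print(np.round(F @ F.T, 6))

# The eigenvalues of the Gram matrix of a random subset of frame vectors
# govern performance under erasures / noisy compressed sensing.
rng = np.random.default_rng(1)
keep = rng.random(3) < 0.7                # each vector kept w.p. 0.7
sub = F[:, keep]
eig = np.linalg.eigvalsh(sub.T @ sub)     # subset Gram spectrum
print(np.round(eig, 4))
```

Repeating the subset draw many times for large ETFs is exactly the experiment whose limiting spectrum the talk identifies as MANOVA.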

Bio:
Ram Zamir was born in Ramat-Gan, Israel in 1961. He received the B.Sc., M.Sc. (summa cum laude) and D.Sc. (with distinction) degrees from Tel-Aviv University, Israel, in 1983, 1991, and 1994, respectively, all in electrical engineering. In the years 1994 - 1996 he spent a post-doctoral period at Cornell University, Ithaca, NY, and at the University of California, Santa Barbara.  In 2002 he spent a Sabbatical year at MIT, and in 2008 and 2009 short Sabbaticals at ETH and MIT. Since 1996 he has been with the department of Elect. Eng. - Systems at Tel Aviv University.  His research interests include information theory (in particular: lattice codes for multi-terminal problems), source coding, communications and statistical signal processing.  His book "Lattice coding for signals and networks" was published in 2014.

Contact(s) :

Rioul Olivier

04/07/2019 A language-based approach for combining heterogeneous models

Speaker(s) & affiliation(s):

Vincent Zhao

Seminar presentation:

The design of Cyber-Physical Systems (CPS) requires combining discrete models of software (cyber) components with continuous models of physical components. Such heterogeneous systems rely on numerous domains whose competencies and expertise go far beyond traditional software engineering: this is systems engineering.

We explore a model-based approach to systems engineering that advocates the composition of several heterogeneous artifacts (called views) into a sound and consistent system model. Rather than trying to build a universal language able to capture all aspects of systems, we propose to bring together small subsets of languages to focus on specific analysis capabilities while keeping a global consistency among all these small pieces of languages. As an example, we take an industrial process based on Capella, which provides (among others) broad support for functional analysis, from the requirements to the deployment of components. Even though Capella is already quite expressive, it does not provide direct support for non-functional analyses such as scheduling, security, etc. Rather than extending Capella or other languages into ever more expressive languages to add the missing features, we extract a pertinent subset of both languages to build a view adequate for conducting further analysis. Our language is generic enough to extract pertinent subsets of languages and combine them to build views for different experts. It also maintains a global consistency between the different views.

Contact(s) :

Vincent Zhao

27/06/2019 [ComNum PhD seminar] Networks with Mixed-Delay Constraints

Speaker(s) & affiliation(s):

Nikbakht Homa

Seminar presentation:

In A301 at 2pm


Abstract: Wireless communication networks have to accommodate different types of data traffic with different latency constraints. In particular, delay-sensitive video applications represent an increasing portion of data traffic. On the other hand, modern networks can increase data rates by means of cooperation between terminals or with helper relays. However, cooperation typically introduces additional communication delays, and is thus not applicable to delay-sensitive applications. In this work, we analyze the rates of communication that can be attained over an interference network with either transmitter or receiver cooperation, and where parts of the messages cannot profit from this cooperation because they are subject to stringent delay constraints.


Contact(s) :

Ciblat Philippe

20/06/2019 [ComNum Seminar] Recent Advances in Deep Learning: Feature Selection, Adversarial Attacks, Applications

Speaker(s) & affiliation(s):

Prof. Aly El Gamal

Seminar presentation:

In Amphi Opale at 11am


Abstract: 

Deep learning is a powerful tool that carries the promise of revolutionizing current computational frameworks. In this talk, I will first present recent work on two problems that lie at the core of making deep learning applicable in a ubiquitous fashion:

1. Selecting a subset of important features that leads to similar or higher learning performance while reducing the training time, as compared to using all available features. Two common categories of feature selection methods are filter methods, which rely on intrinsic properties of the dataset, and wrapper methods, which rely on evaluating the performance of a machine learning model that is retrained using every candidate combination of features. In general, filter methods are computationally efficient while wrapper methods deliver better learning performance. We propose an efficient deep-learning-based wrapper method that exploits the transferability property of deep neural network architectures to avoid excessive retraining, as well as autoencoders to distill important dataset intrinsic properties.

2. Adversarial deep learning attacks that introduce small perturbations to the input data in sensitive directions, which negatively impacts the learning performance. Here, we propose novel strategies to preprocess the input data to defend against adversarial attacks. Our strategies rely on properties of denoising autoencoders, which are demonstrated to be powerful tools for extracting robust representations.

In the second part of the talk, I will present our findings on applying deep learning in wireless communications and networking, and discuss why we believe it is a perfect fit for various tasks enabling autonomous communication systems and resource management. This discussion will particularly highlight Purdue's successful journey in the DARPA Spectrum Collaboration Challenge (SC2).

Finally, I will present our initiative at Purdue Engineering for machine learning graduate education, and how, in general, Purdue is currently providing an ideal environment for graduate studies in engineering.
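The filter-vs-wrapper distinction above can be illustrated with a toy wrapper. A minimal sketch, assuming synthetic data and a plain least-squares model retrained per candidate feature (not the deep, transferability-based wrapper proposed in the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3 * X[:, 2] + 0.1 * rng.normal(size=200)   # only feature 2 matters

def wrapper_score(j):
    """Wrapper idea in miniature: retrain a model (here, least squares)
    on candidate feature j alone and score it by its residual error."""
    coef, res, *_ = np.linalg.lstsq(X[:, [j]], y, rcond=None)
    return res[0]

scores = [wrapper_score(j) for j in range(5)]
best = int(np.argmin(scores))
print(best)  # the informative feature wins
```

A filter method would instead rank features by an intrinsic statistic (e.g. correlation with y) without any retraining; the wrapper's cost is the repeated model fits, which is what motivates avoiding retraining from scratch.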

Short Bio: 

Aly El Gamal is an Assistant Professor at the Electrical and Computer Engineering Department of Purdue University. He received his Ph.D. degree in Electrical and Computer Engineering and M.S. degree in Mathematics from the University of Illinois at Urbana-Champaign, in 2014 and 2013, respectively. Prior to that, he received the M.S. degree in Electrical Engineering from Nile University and the B.S. degree in Computer Engineering from Cairo University, in 2009 and 2007, respectively. His research interests include information theory and machine learning. 

Dr. El Gamal has received several awards, including the Purdue Seed for Success Award, the Purdue CNSIP Area Seminal Paper Award, the DARPA Spectrum Collaboration Challenge (SC2) Contract Award and Preliminary Events 1 and 2 Team Awards, and the Huawei Innovation Research Program (HIRP) OPEN Award. He is currently leading the Purdue and Texas A&M Team (BAM! Wireless) in DARPA SC2, co-leading the Purdue Engineering initiative for creating a machine learning group and developing graduate courses, and a reviewer for the American Mathematical Society (AMS) Mathematical Reviews.

Contact(s) :

Wigger Michele

06/06/2019 [ComNum seminar] Wireless Networks via the Cloud: An Information Theoretic View

Speaker(s) & affiliation(s):

Prof. Shlomo Shamai

Seminar presentation:

At 11am in Amphi Grenat


Abstract:  


Cloud-based wireless networks, also known as Cloud Radio Access Networks (C-RANs), are emerging as appealing architectures for next-generation wireless/cellular systems, whereby the processing/encoding/decoding is migrated from the local base stations/radio units (RUs) to a central unit (CU) in the "cloud". The network operates via fronthaul digital links connecting the CU and the RUs (operating as relays). The uplink and downlink are examined from a network information theoretic perspective, with emphasis on simple oblivious processing at the RUs, which is also attractive from the practical point of view. The analytic approach, applied to simple wireless/cellular models, illustrates the considerable performance gains associated with advanced network-information-theoretically inspired techniques, which also carry practical implications. An outlook pointing out interesting theoretical directions, referring also to Fog Radio Access Networks (F-RANs), concludes the presentation.

Biography:

Professor Shlomo Shamai is a distinguished professor at the Department of Electrical Engineering at the Technion – Israel Institute of Technology. Professor Shamai is an information theorist and winner of the 2011 Shannon Award. Shlomo Shamai received the B.Sc., M.Sc., and Ph.D. degrees in electrical engineering from the Technion, in 1975, 1981 and 1986 respectively. During 1975-1985 he was with the Israeli Communications Research Labs. Since 1986 he has been with the Department of Electrical Engineering at the Technion – Israel Institute of Technology, where he is now the William Fondiller Professor of Telecommunications. His research areas cover a wide spectrum of topics in information theory and statistical communications. Prof. Shamai is an IEEE Fellow and a member of the International Union of Radio Science (URSI).

Contact(s) :

23/05/2019 [ComNum PhD seminar] Compression with random access

Speaker(s) & affiliation(s):

Shashank Vatedka

Seminar presentation:

At 1pm in A301


Abstract:

In compressing large files, it is often desirable to be able to efficiently recover and update short fragments of data. Classical compression schemes such as Lempel-Ziv are optimal in terms of compression rate but local recovery of even one bit requires us to decompress the entire codeword. Let us define the local decodability of a compression scheme to be the minimum number of codeword bits that need to be read in order to recover a single bit of the message/raw data. Similarly, let the update efficiency be the minimum number of codeword bits that need to be modified in order to update a single message bit. For a space efficient (entropy-achieving) compression scheme, what is the smallest local decodability and update efficiency that we can achieve? Can we simultaneously achieve small local decodability and update efficiency, or is there a tradeoff between the two? All this, and more (including a new compression algorithm) will be discussed in the talk.
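The trade-off described above can be illustrated with ordinary off-the-shelf compression. A minimal sketch, assuming fixed-size blocks and zlib (not the schemes from the talk): compressing in independent blocks buys local decodability at a cost in compression rate.

```python
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 200
BLOCK = 256

# Compress each fixed-size block independently, and the whole file at once.
blocks = [zlib.compress(data[i:i + BLOCK]) for i in range(0, len(data), BLOCK)]
whole = zlib.compress(data)

def read_byte(pos):
    """Recover byte `pos` by decompressing only its own block, instead of
    the entire codeword (crude local decodability)."""
    return zlib.decompress(blocks[pos // BLOCK])[pos % BLOCK]

assert read_byte(1000) == data[1000]
# Rate penalty: independent blocks cannot exploit cross-block redundancy,
# so their total size exceeds that of whole-file compression.
print(sum(len(b) for b in blocks), "vs", len(whole))
```

Updating one message byte likewise touches only one compressed block here, at the same rate penalty; the talk asks how small both costs can be made while staying entropy-achieving.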

Contact(s) :

Ciblat Philippe

23/05/2019 Comelec general seminar: "Software Compilation Techniques for Multi-Processor Platforms"

Speaker(s) & affiliation(s):

Andrea ENRICI

Nokia Bell Labs France

Seminar presentation:

Thursday 23 May, 2pm, Télécom ParisTech, Amphi B310, 46 rue Barrault, Paris 13

Thanks to the continuous evolution of semiconductor process technologies, tens or hundreds of processors of different types can nowadays be integrated into single chips. These heterogeneous Multi-Processor Systems-on-Chip (MPSoCs) have emerged in the last two decades as an important class of platforms for networking, communications, signal-processing and multimedia among other applications.

To fully exploit their computational power, new programming tools are needed that can assist engineers in achieving high software productivity. An MPSoC compiler is a complex tool-chain that aims at tackling the problems of application modeling, platform description, software parallelization, software distribution and code generation in a single framework. This seminar discusses various aspects of compilers for heterogeneous MPSoC platforms, using the well-known single-core C compiler technology as a baseline for comparison. The seminar is mainly intended as an educational presentation focused on the most important ingredients of the MPSoC compilation process and open research issues.

Contact(s) :

Apvrille Ludovic

Document(s) :

09/05/2019 [ComNum PhD seminar] Neural network approaches to point lattice decoding

Speaker(s) & affiliation(s):

Corlay Vincent

Seminar presentation:

At 1pm in A301

Abstract:

Point lattices are the Euclidean counterparts of error-correcting codes. One of the most challenging lattice problems is the decoding problem: given a point in Euclidean space, find the closest lattice point. As this is a classification problem, it seems natural to tackle it via neural networks. Indeed, following the deep learning revolution that started in 2012, neural networks have been commonly used for classification tasks in many fields. The work presented in this talk is a first theoretical step towards a better understanding of the neural network architectures that are most suitable for the decoding problem.

Namely, we first establish a duality between the decoding problem and a piecewise linear function with a number of affine pieces growing exponentially in the space dimension.
We then prove that deep neural networks need a number of neurons growing only polynomially in the space dimension to compute some of these functions, whereas this complexity is exponential for any shallow neural network.
This has two consequences:
1. In some cases, deep neural networks can be used to implement low-complexity decoding algorithms.
2. Euclidean lattices provide good examples to illustrate the benefit of depth over width in representing functions with neural networks.
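For intuition, the decoding problem itself is easy to state in code. A minimal brute-force sketch for the 2-D hexagonal lattice (the neural networks discussed in the talk aim to approximate this map efficiently in high dimensions, where enumerating lattice points is infeasible):

```python
import itertools

import numpy as np

# Generator matrix of the hexagonal lattice A2: lattice points are integer
# combinations of its rows.
G = np.array([[1.0, 0.0],
              [0.5, np.sqrt(3) / 2]])

def decode(y, radius=3):
    """Return the lattice point closest to y, by exhaustive search over
    integer coefficient vectors in [-radius, radius]^2."""
    best, best_d = None, np.inf
    for coeffs in itertools.product(range(-radius, radius + 1), repeat=2):
        p = np.array(coeffs) @ G
        d = np.linalg.norm(y - p)
        if d < best_d:
            best, best_d = p, d
    return best

print(decode(np.array([0.9, 0.1])))
```

The decision regions of this map are the Voronoi cells of the lattice, a piecewise-linear structure, which is exactly the function class the duality result above connects to neural networks.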

Contact(s) :

Ciblat Philippe

Le 18/04/2019 Séminaire général Comelec: "How to efficiently implement Deep Learning in embedded systems?"

Auteur(s) & Affiliation(s) du séminaire :

Marc Duranton

CEA-LIST, Saclay

Présentation du séminaire :

Thursday 18 April, 2pm, Télécom ParisTech, Amphi OPALE, 46 rue Barrault, Paris 13

Artificial Intelligence, and more particularly Deep Learning, enables new applications and allows computing systems to better interact with the real world by extracting information from signals, images and sounds. These techniques are, however, very demanding in computing power and therefore in energy, which poses challenges for their use in embedded systems. This presentation will cover use cases, tools, approaches and hardware for selecting neural networks and hardware implementations tuned for embedded applications (Deep Learning at the edge).

Contact(s) :

Danger Jean-luc

Le 21/03/2019 Séminaire général Comelec: "Digital predistortion for wideband 5G transmitters"

Auteur(s) & Affiliation(s) du séminaire :

Pham Germain

Présentation du séminaire :

Room C48, 2pm, Télécom ParisTech, 46 rue Barrault, Paris 13

Over the last decades, data rates in telecommunication standards have increased exponentially thanks to major innovations at many levels in radio transceivers. The 5th generation of mobile standards (5G) targets even higher data rates than previous cellular standards. This increase is made possible by larger transmission bandwidths (up to several hundred MHz) and by modulation schemes with high spectral efficiency, such as OFDM-based modulations. One drawback of these waveforms is that they are very sensitive to any non-linearity in the transceiver and require highly linear transmitters to maintain signal quality and spectral purity. In parallel, one of the other major goals of 5G is to reduce energy consumption, or to maintain equal energy consumption while providing more services.

In radio transceivers, there is one component for which these two constraints pull in completely opposite design directions: the power amplifier (PA). In recent years, the digital predistortion (DPD) technique has been developed to improve the PA linearity/efficiency trade-off. This technique consists in distorting the signal before amplification with the inverse characteristic of the PA, in order to correct the distortions introduced by the PA during amplification. Although this technique has proven itself over past generations of mobile telephony, it reaches its limits for 5G because of the very wide bandwidths.
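The principle can be illustrated with a toy memoryless example: fit a polynomial post-inverse of a hypothetical PA model by least squares and reuse it as a predistorter (the classic indirect-learning approach; the PA model and polynomial order here are assumptions for illustration, not a measured amplifier):

```python
import numpy as np

rng = np.random.default_rng(0)

def pa(x):
    # Toy memoryless PA model (assumption for illustration): mild
    # odd-order compression of the complex baseband signal.
    return x * (1.0 - 0.1 * np.abs(x) ** 2)

def basis(u, order=3):
    # Odd-order memoryless polynomial basis: u, u|u|^2, u|u|^4, ...
    return np.column_stack([u * np.abs(u) ** (2 * k) for k in range(order)])

# Training signal: complex baseband samples with moderate amplitude
x = (rng.standard_normal(4000) + 1j * rng.standard_normal(4000)) * 0.3
y = pa(x)

# Indirect learning: fit a post-inverse mapping the PA output back to its
# input, then reuse the same polynomial as a pre-inverse (predistorter).
c, *_ = np.linalg.lstsq(basis(y), x, rcond=None)
predistort = lambda u: basis(u) @ c

y_dpd = pa(predistort(x))
nmse_raw = np.mean(np.abs(y - x) ** 2) / np.mean(np.abs(x) ** 2)
nmse_dpd = np.mean(np.abs(y_dpd - x) ** 2) / np.mean(np.abs(x) ** 2)
```

The wideband limitation discussed in the talk comes precisely from what this sketch omits: memory effects, which require basis terms spanning past samples and much higher processing rates.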

In this talk we will review the main recent and promising solutions for implementing efficient predistortion techniques in future wideband transceivers. We will cover the most common approaches for PA modeling and for computing predistorter models, and discuss the implementation of predistortion for RF transmitters, highlighting the advantages and drawbacks of different approaches. We will elaborate on promising solutions addressing the bandwidth limitations of digital predistortion.

Contact(s) :

Pham Germain

Document(s) :

Le 28/02/2019 Dependable and Scalable FPGA Computing Using HDL-based Checkpointing.

Auteur(s) & Affiliation(s) du séminaire :

Vu Hoang Gia

Présentation du séminaire :

Dependable and Scalable FPGA Computing Using HDL-based Checkpointing.
Outline:

  • Motivation
  • Related work
  • Main contributions:
    • Reduced set of state-holding elements
    • Consistent snapshots in FPGA checkpointing
    • Checkpointing architecture
    • Static analysis of HDL source code
    • Checkpointing flow for the resilience of FPGA computing
    • Multitasking on FPGA using HDL-based checkpointing
    • Hardware task migration on heterogeneous FPGA computing using HDL-based checkpointing
    • Python-based tool for checkpointing insertion
  • Conclusion

The full abstract can be found in the attached document.

Contact(s) :

Vu Hoang Gia

Document(s) :

Le 22/02/2019 [ComNum PhD's seminar] Cache Freshness Updates with age-related non-uniform Update Duration.

Auteur(s) & Affiliation(s) du séminaire :

Haoyue TANG

Présentation du séminaire :

at 1.30pm in A301

Abstract

Caching systems are employed to reduce backhaul traffic and improve data availability. In many caching applications, such as sensor networks and satellite broadcasting, the data at the remote sources are dynamic, and changes cannot be pushed to the cache immediately due to communication resource constraints. Data served from the cache may therefore be outdated when a user requests it. To guarantee data freshness from the perspective of the caching users, optimal cache update strategies need to be designed. Previous results reveal that when the update duration is constant and identical across files, the optimal update frequency of each file should be proportional to the square root of its popularity.


In this talk, we address the following problem: when the update duration for each file is non-uniform and age-related, how should file update strategies be designed so that the overall freshness of the caching system is guaranteed? We formulate this as a relaxed optimization problem and propose the corresponding scheduling strategy. We find that when the update duration is dynamic and age-related, the AoI-optimal inter-update interval may be far from the square-root law of previous results.
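The square-root law from the prior work can be made concrete in a few lines (the popularity values below are made-up numbers, for illustration only):

```python
import numpy as np

# Popularities of 4 files, most to least requested (illustrative values).
popularity = np.array([0.5, 0.25, 0.15, 0.10])

# Square-root law: with constant, identical update durations, the optimal
# update frequency of file i is proportional to sqrt(p_i); equivalently,
# the inter-update interval is proportional to 1/sqrt(p_i).
freq = np.sqrt(popularity)
freq /= freq.sum()          # normalize to a unit update budget
interval = 1.0 / freq       # popular files are refreshed more often
```

The talk's point is that once update durations depend on the file's age, this simple proportionality can be far from optimal.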

Contact(s) :

Ciblat Philippe

Le 21/02/2019 Séminaire général Comelec: "5G radio access/backhauling from deterministic channel simulations"

Auteur(s) & Affiliation(s) du séminaire :

Yoann CORRE, SIRADEL, Rennes

Présentation du séminaire :

Amphi OPALE, 2pm, Télécom ParisTech, 46 rue Barrault, Paris 13

Physical modelling of the wireless propagation channel, based on an accurate representation of the environment, plays a major role in the assessment and optimization of new 5G network topologies, covering both the radio access and the last-mile xhaul segments. This talk will show how physical propagation techniques are employed in industry today to simulate millimeter-wave radio links, design ultra-dense networks, or predict massive MIMO performance in real deployment scenarios. Complementary to field trials, simulation contributes to the elaboration of mobile operators' strategies: new infrastructure investments and antenna installations are decided after determining the best scenario. Several use cases will be illustrated, such as 5G network densification, millimeter-wave mesh backhauling, and Fixed Wireless Access (FWA) planning. Finally, it will be briefly explained how the ray-tracing technique has been extended and is now applied to investigate the sub-THz spectrum (a promising 6G candidate).

Contact(s) :

Sibille Alain

Document(s) :

Le 07/02/2019 ROS - Robot Operating System: what it is, how to use it, how to bring it up in a System-on-Chip cont

Auteur(s) & Affiliation(s) du séminaire :

Bertolino Matteo

Présentation du séminaire :

Robot Operating System (ROS) is a well-known communication layer for writing robot software. Its goal is to simplify the hard task of creating truly robust, general-purpose and collaborative robot software. Collaborative means that ROS was designed specifically to let different groups, each one expert in a specific field, collaborate and build upon each other's work using a common convention.

In this seminar we will first explore the ROS world and how to use it in practical projects. Then we will look at the most widely used tools for simulating and verifying various rover components, such as sensors, actuators and mechanical parts. Modeling (of both the rover's components and the external environment) is heavily used in this context, as is simulation. In the LabSoC domain, we will see that the described framework can be tightly integrated with TTool.

Contact(s) :

Le 01/02/2019 [ComNum PhD's seminar] A Fundamental Storage-Communication Tradeoff in Distributed Computing with St

Auteur(s) & Affiliation(s) du séminaire :

Yan Qifa

Présentation du séminaire :

at 1pm in A301

Abstract:

Distributed computing has emerged as one of the most important paradigms to speed up large-scale data analysis tasks such as machine learning. A well-known computing framework, MapReduce, can handle tasks with large data sizes. In such systems, a task is typically decomposed into map and reduce functions, where the map functions can be computed by different nodes across the network, and the final outputs are obtained by combining the outputs of the map functions through the reduce functions. One important problem is how to carry out the computation when there are straggling nodes, which compute too slowly or fail for some reason.

In this talk, the optimal storage-computation tradeoff is characterized for a MapReduce-like distributed computing system with straggling nodes, where only a subset of the nodes can be used to compute the desired output functions. The result holds for arbitrary output functions and thus generalizes previous results that were restricted to linear functions. Specifically, we propose a new information-theoretic converse and a new matching coded computing scheme, which we call coded computing for straggling systems (CCS).
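The core idea behind coded computing can be illustrated with a toy example: encode redundancy across workers so that a straggler's share can be recovered from the others. This sketch uses a simple parity code for a linear (matrix-vector) task; it illustrates the general idea rather than the CCS scheme of the talk, which handles arbitrary output functions:

```python
import numpy as np

# Goal: compute A @ x with 3 workers while tolerating 1 straggler.
A = np.arange(12.0).reshape(4, 3)
x = np.array([1.0, 2.0, 3.0])
A1, A2 = A[:2], A[2:]            # split the work into two halves

# Each worker stores one block; the third stores a parity block.
tasks = {"w1": A1, "w2": A2, "w3": A1 + A2}

# Suppose worker 2 straggles: its share A2 @ x is recovered by
# linearity from the two results that did arrive.
r1 = tasks["w1"] @ x
r3 = tasks["w3"] @ x
r2_recovered = r3 - r1

result = np.concatenate([r1, r2_recovered])   # equals A @ x
```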

Contact(s) :

Ciblat Philippe

Le 31/01/2019 Séminaire général Comelec: "An information theoretic perspective on web privacy"

Auteur(s) & Affiliation(s) du séminaire :

Elza ERKIP (New York University Tandon School of Engineering)

Présentation du séminaire :

Amphi OPALE, 2pm, Télécom ParisTech, 46 rue Barrault, Paris 13

When we browse the internet, we expect that our social network identities and web activities will remain private. Unfortunately, in reality, users are constantly tracked on the internet. As web tracking technologies become more sophisticated and pervasive, there is a critical need to understand and quantify web users' privacy risk. In other words, what is the likelihood that users on the internet can be uniquely identified from their online activities?

This talk provides an information theoretic perspective on web privacy by considering two main classes of privacy attacks based on the information they extract about a user. (i) Attributes capture the user's activities on the web and could include its browsing history or its memberships in groups. Attacks that exploit the attributes are called “fingerprinting attacks,” and usually include an active query stage by the attacker. (ii) Relationships capture the user's interactions with other users on the web such as its friendship relations on a certain social network. Attacks that exploit the relationships are called “social network de-anonymization attacks.” For each class, we show how information theoretic tools can be used to design and analyze privacy attacks and to provide explicit characterization of the associated privacy risks.
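The uniqueness question can be made quantitative with a toy simulation: draw random attribute vectors for a population and count how many are unique (the population size, the number of binary attributes, and the independence assumption are arbitrary illustrative choices, far simpler than the models analyzed in the talk):

```python
from collections import Counter
import random

random.seed(1)

# Toy model: each of N users has k binary attributes (e.g. membership in
# k groups), drawn independently and uniformly. A fingerprinting attacker
# who observes the full attribute vector identifies a user iff the vector
# is unique in the population.
N, k = 10_000, 20
profiles = [tuple(random.getrandbits(1) for _ in range(k))
            for _ in range(N)]
counts = Counter(profiles)
unique_fraction = sum(1 for p in profiles if counts[p] == 1) / N
```

Even this crude model shows why a few dozen attributes suffice: with 2^20 possible profiles and only 10,000 users, almost every user is uniquely identifiable.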

Contact(s) :

Wigger Michele

Le 24/01/2019 [ComNum PhD's seminar] About the Entropic uncertainty principle

Auteur(s) & Affiliation(s) du séminaire :

Asgari Fatemeh

Présentation du séminaire :

in A301 at 1pm

Abstract:

The entropy power inequality (EPI), first introduced by Shannon (1948), states that the entropy power of the sum of two independent random variables X and Y is not less than the sum of the entropy powers of X and Y. It has found many applications and has received several proofs and generalizations, in particular for dependent variables X and Y. We propose new conditions under which the EPI holds for dependent summands and discuss the implications of this result. This is joint work with Mohammad Hossein Alamatsaz.
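In standard notation, writing h(X) for the differential entropy of a random vector X in R^n, the inequality reads:

```latex
% Entropy power of a random vector X in R^n:
N(X) = \frac{1}{2\pi e}\, e^{2h(X)/n}
% Shannon's entropy power inequality, for X and Y independent:
N(X+Y) \;\ge\; N(X) + N(Y)
% with equality iff X and Y are Gaussian with proportional covariances.
```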

The well-known uncertainty principle used in physics is based on the Kennard-Weyl inequality (1928) and was strengthened in terms of Shannon's entropy, leading to the entropic uncertainty principle (EUP). The EUP was conjectured by Hirschman (1957) and finally proved by Beckner (1975) based on Babenko's inequality with optimal constants. Beckner's proof of Babenko's inequality is extremely difficult, and the resulting derivation of the EUP is indirect (via Rényi entropies). A simple proof recently published in Annals of Physics (2015) turns out to be very questionable. We give a simple proof of a weaker, "local" EUP using the Hermite decomposition. This is joint work with Olivier Rioul.
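For reference, with the unitary Fourier transform convention the EUP can be stated as follows (the constant depends on the transform normalization):

```latex
% Unitary Fourier transform of a unit-norm signal \psi:
\hat\psi(\xi) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \psi(x)\, e^{-i x \xi}\, dx
% Entropic uncertainty principle (Hirschman--Beckner):
h\bigl(|\psi|^2\bigr) + h\bigl(|\hat\psi|^2\bigr) \;\ge\; \ln(\pi e)
% This implies, but is stronger than, the Kennard-Weyl variance bound
% \sigma_x \, \sigma_\xi \ge 1/2.
```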

Contact(s) :

Ciblat Philippe

Le 11/01/2019 Séminaire général Comelec: "Turning elastic metro optical networks into reality"

Auteur(s) & Affiliation(s) du séminaire :

Patricia LAYEC

NOKIA Bell Labs

Présentation du séminaire :

Amphi OPALE, 2.30pm, Télécom ParisTech, 46 rue Barrault, Paris 13

Located at the meeting point between telecom operators and over-the-top service providers, metro networks are particularly well-suited for the introduction of radical acceleration of dynamics in the optical networks, leveraging elastic building blocks such as transponders and optical nodes. In this talk, we review innovative solutions which could be used to address some of the challenges of metro networks in the short-medium term (e.g. 2-5 years from now). In particular, we discuss how to mitigate filter impairments thanks to monitoring. We then highlight how machine learning could automate optical networks.

Contact(s) :

Grillot Frédéric

Document(s) :

Le 10/01/2019 Odyn: Deadlock Prevention and Hybrid Scheduling Algorithm for Real-Time Dataflow Applications

Auteur(s) & Affiliation(s) du séminaire :

Dauphin Benjamin

Présentation du séminaire :

Summary:
Recent wireless communication standards (4G, 5G) need dynamic adjustments of transmission parameters (e.g., modulation, bandwidth), making traditional static scheduling approaches less and less efficient.

To schedule these applications we designed Odyn, a hybrid approach for the scheduling and memory management of periodic dataflow applications on parallel, heterogeneous, Non-Uniform Memory Architecture (NUMA) platforms.

In Odyn, the ordering of tasks and memory allocation are distributed and computed simultaneously at run-time for each Processing Element. We also propose a mechanism to prevent deadlocks caused by attempts to allocate buffers in size-limited memories.

Contact(s) :

Dauphin Benjamin

Le 14/12/2018 Energy Efficient System Architecture for Devices in Artificial Intelligence-of-Things

Auteur(s) & Affiliation(s) du séminaire :

Prof. Yong Lian

Fellow of Academy of Engineering Singapore, Fellow of IEEE, President, IEEE Circuits and Systems Society

Présentation du séminaire :

Séminaire général Comelec
"Energy Efficient System Architecture for Devices in Artificial Intelligence-of-Things"
Amphi THEVENIN, 2pm, Télécom ParisTech, 46 rue Barrault, Paris 13

Abstract
Internet-of-Things (IoT) is the inter-networking of physical devices, vehicles, buildings, and objects with embedded sensors. It is estimated that by 2020 there will be more than 34 billion IoT devices connected to the Internet, and nearly $6 trillion will be spent on IoT solutions over the next five years. Artificial Intelligence (AI), on the other hand, is intelligence demonstrated by machines that work and react like humans. The combination of AI and IoT gives birth to the Artificial Intelligence-of-Things (AIoT). AIoT devices differ from IoT devices in that they not only sense, store, and transmit data but also analyze and act on them, i.e. an AIoT device makes decisions or performs tasks similar to what a person could do. The enabling technology for AIoT devices is embedded AI. This talk will cover an energy-efficient system architecture that relies on an event-driven signal representation. This representation enables data compression at the input source, which greatly reduces the power needed for data transmission and processing. We will show by example that event-driven systems significantly improve energy efficiency and are well suited for AIoT applications.
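The event-driven idea can be sketched with a send-on-delta sampler, which emits a value only when the input has changed by more than a threshold (a minimal illustration of event-driven representation, not the architecture presented in the talk):

```python
import numpy as np

def send_on_delta(signal, delta=0.2):
    """Emit an event only when the signal has moved by more than `delta`
    since the last emitted value (a simple event-driven representation).
    Returns a list of (index, value) events."""
    events = [(0, signal[0])]
    last = signal[0]
    for i, s in enumerate(signal[1:], start=1):
        if abs(s - last) >= delta:
            events.append((i, s))
            last = s
    return events

t = np.linspace(0, 1, 1000)
sig = np.sin(2 * np.pi * 2 * t)           # slowly varying input
events = send_on_delta(sig)
compression = len(events) / len(sig)       # fraction of samples transmitted
```

For slowly varying inputs, only a few percent of the samples generate events, which is the source of the power savings in transmission and processing.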

Biography:
Dr. Yong Lian received the B.Sc. degree from the College of Economics & Management of Shanghai Jiao Tong University in 1984 and the Ph.D. degree from the Department of Electrical Engineering of the National University of Singapore (NUS) in 1994. His research interests include low-power techniques, continuous-time signal processing, biomedical circuits and systems, and computationally efficient signal processing algorithms. His research has been recognized with more than 20 awards, including the 1996 IEEE Circuits and Systems Society Guillemin-Cauer Award, the 2008 Multimedia Communications Best Paper Award from the IEEE Communications Society, the 2011 IES Prestigious Engineering Achievement Award, the 2013 Outstanding Contribution Award from the Hua Yuan Association and Tan Kah Kee International Society, and the 2015 Design Contest Award at the 20th International Symposium on Low Power Electronics and Design. He is also the recipient of the National University of Singapore Annual Teaching Excellence Awards in 2009 and 2010.

Dr. Lian is the President of the IEEE Circuits and Systems (CAS) Society, a member of the IEEE Fellow Committee, a member of the IEEE Biomedical Engineering Award Committee, and a member of the Steering Committee of the IEEE TBioCAS. He was Editor-in-Chief of the IEEE TCAS-II from 2010 to 2013, Vice President for Publications of the CAS Society, Vice President for the Asia-Pacific Region, Chair of the BioCAS and DSP Technical Committees, and founder of several conferences including BioCAS, ICGCS, and PrimeAsia.

Contact(s) :

Desgreys Patricia

Le 06/12/2018 Screaming Channels: When Electromagnetic Side Channels Meet Radio Transceivers

Auteur(s) & Affiliation(s) du séminaire :

Giovanni Camurati

Présentation du séminaire :

Screaming Channels: When Electromagnetic Side Channels Meet Radio Transceivers

From how far away can we mount electromagnetic side-channel attacks? Are we limited to close physical proximity, or can we go much further? During this seminar we will see how mixed-signal chips with radios (e.g., BLE) unintentionally transmit side-channel information together with the intended radio signals, at a considerable distance. We will show how we discovered this type of leak and how it can be exploited to break AES at a range of several meters (we have a proof of concept at 10 m in an anechoic room).
The session will be interactive, with some demos.

Contact(s) :

Apvrille Ludovic

Le 15/11/2018 Séminaire général Comelec: "Fault injection attacks and countermeasures"

Auteur(s) & Affiliation(s) du séminaire :

Sauvage Laurent

LTCI / Comelec / SSH

Présentation du séminaire :

Amphi OPALE, 2pm, Télécom ParisTech, 46 rue Barrault, Paris 13

"Fault injection attacks and countermeasures: past, present, future"

Fault injection attacks are extremely powerful techniques for extracting secrets from an integrated circuit. The very first countermeasures, developed some twenty years ago, laid the foundations of protection strategies. This seminar will begin by presenting them, and we will propose a classification. The cost of each countermeasure will also be compared, together with an analysis of its security level, both against the threats that existed at the time of its publication and against more recent ones.

In recent years, intentional electromagnetic perturbations have attracted great interest as a means of fault injection, both for practical reasons and, above all, for their potential to bypass certain protection strategies. This creates a need for a fine-grained understanding of the impact of such injections inside integrated circuits. However, state-of-the-art characterization and modeling methods have proven incomplete; in the second part of this seminar we will present the improvements we have made to them and the results thus obtained.

Contact(s) :

Sauvage Laurent

Le 08/11/2018 Performance Evaluation of NoCs Using Network Calculus

Auteur(s) & Affiliation(s) du séminaire :

Prof. Ahlem Mifdaoui (DISC Department, University of Toulouse/ ISAE-Supaéro)

Présentation du séminaire :

Title: Performance Evaluation of NoCs Using Network Calculus

Keywords: NoCs, wormhole routing, backpressure, timing analysis, delay bounds
Summary:
Conducting worst-case timing analyses of wormhole Networks-on-Chip (NoCs) is fundamental to guaranteeing real-time requirements, but it is known to be a challenging issue due to the complex congestion patterns that can occur. In that respect, we have introduced a new buffer-aware timing analysis of wormhole NoCs based on Network Calculus. Our main idea consists in taking into account the flow serialization phenomena along the path of a flow of interest (foi), by paying the bursts of interfering flows only at the first convergence point, and in refining the interference patterns for the foi to account for the limited buffer size. Moreover, we aim to handle this issue for a large class of wormhole NoCs.

The derived delay bounds are analyzed and compared to available results of existing approaches based on Scheduling Theory as well as Compositional Performance Analysis (CPA). In doing so, we highlight a noticeable improvement in the tightness of the delay bounds compared to the CPA approach, and the inherently safe bounds of our proposal compared to Scheduling Theory approaches. Finally, we perform experiments on a manycore platform to confront our timing-analysis predictions with experimental data and assess their tightness.
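To give the flavor of Network Calculus bounds, the classic single-server result (much simpler than the buffer-aware multi-hop analysis of the talk) reads:

```latex
% Token-bucket arrival curve and rate-latency service curve:
\alpha(t) = b + r\,t, \qquad \beta(t) = R\,(t - T)^{+}, \qquad r \le R
% The worst-case delay is bounded by the horizontal deviation between
% \alpha and \beta, and the worst-case backlog by the vertical deviation:
d_{\max} \;\le\; T + \frac{b}{R}, \qquad q_{\max} \;\le\; b + r\,T
```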
 

Contact(s) :

Apvrille Ludovic
