Abstracts of IMaCCS 2017

Venkat Anantharam

University of California at Berkeley

Universal lossless compression of graph-indexed data

 

Many data sources arising from social networks, communication networks, and networks describing biological processes are best viewed as indexed by the underlying graphs. These graphs are often sparse, i.e., the number of edges is on the same scale as the number of vertices. The local weak limit theory for sparse graphs provides a framework in which to make sense of what one might mean by a stationary process indexed by graphs when the graph is sparse. Just as the theory of stochastic processes indexed by time has been the engine driving the development of control theory, communications, information theory, and signal processing, one might expect that a theory of stationary stochastic processes indexed by graphs would eventually have a similarly wide-ranging impact.


We illustrate the theory and this viewpoint by discussing the problem of lossless data compression of graph-structured data. Consider a sparse graph whose vertices and edges carry marks, viewed as data samples. We pose the problem of representing the information content of such a marked graph in the most efficient way possible. Further, we would like to do this in a universal sense, i.e., in effect, without making any statistical assumptions about the data being compressed. It turns out that one can do this and, what is more, the compression can be done in the most efficient way possible. Namely, one can make sense of a notion of entropy associated with the local weak limit (due to Bordenave and Caputo), which lives on a linear scale in the number of vertices, and one can compress down to this entropy in a universal sense.
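
As a hedged sketch of the scaling behind this claim (the notation here is ours, not the talk's): for a marked sparse graph $G_n$ with $n$ vertices and $m_n$ edges whose local weak limit is $\mu$, the optimal codeword length behaves as

    \ell(G_n) \;=\; m_n \log n \;+\; \Sigma(\mu)\, n \;+\; o(n) \quad \text{bits},

where $\Sigma(\mu)$ is the Bordenave-Caputo entropy of the limit; the universal scheme attains this leading-order behavior without knowledge of $\mu$.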

 

(joint work with Payam Delgosha)

Tamer Basar

University of Illinois at Urbana-Champaign

Strategic Information Transmission

 

Strategic information transmission (SIT) refers to a variation (and a substantial one) of the standard paradigm of information transmission in communication (design of an encoder and a decoder in unison to minimize some distortion measure), where now the encoder and the decoder have (intentionally) misaligned objectives. This leads to a non-cooperative game with a dynamic (non-classical) information structure, where one can adopt as a solution concept either the Nash or the Stackelberg equilibrium. This talk will introduce this class of problems, which have been of interest to multiple communities, including economics, information theory, communication, signal processing, and control, since the early 1980s, and which have picked up considerable steam very recently. As an overview of the topic, both old and new results will be presented, with one of the highlights (and perhaps a surprising element) being that there appears to be a major difference between the structures of the encoders and the decoders under Nash and Stackelberg equilibria, even when the channel is Gaussian and the (misaligned) distortion measures are quadratic. Extensions to multi-stage scenarios, information transmission problems with an element of deception, and SIT problems with privacy constraints will also be discussed.
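
For concreteness, here is a minimal quadratic-Gaussian instance of the setup, in our own illustrative notation (the talk's models may differ in detail): the encoder observes a source $M$ and transmits $X$ over a Gaussian channel $Y = X + W$, and the decoder forms the estimate $U = \gamma(Y)$. Misalignment is captured by a bias term $b$:

    J_e = \mathbb{E}\big[(M + b - U)^2\big] \quad \text{(encoder)}, \qquad J_d = \mathbb{E}\big[(M - U)^2\big] \quad \text{(decoder)}.

A Nash equilibrium is a pair of mutually best responses, while in the Stackelberg version the encoder commits to its policy first and the decoder best-responds; the surprise mentioned above is that these two solution concepts can lead to structurally different encoders and decoders even in this simple setting.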

(based on joint work with Emrah Akyol and Cedric Langbort)

Charles Bouman

Purdue University

Consensus Equilibrium: Regularized Inversion without Optimization

 

Regularized inversion is widely used in image reconstruction due to its tractability and its ability to combine complex physical sensor models with useful regularity criteria. However, the need to formulate regularized inversion as the solution to an optimization problem severely limits the expressiveness of regularity conditions and constrains the methods that can be used to compute the solution. In fact, methods such as Plug-and-Play Priors that abandon the optimization framework have recently been shown to achieve much better quality results than traditional optimization methods in important problems.

In this talk, we introduce the concept of Consensus Equilibrium (CE) as a generalization of regularized inversion. The CE solution agrees with classical regularized inversion when the model is formulated as an optimization problem, but CE is more general in that models need not be formulated as cost functions to be minimized. In the CE framework, the problem of MAP estimation in regularized inversion is replaced by the problem of solving equilibrium equations, which can be approached in multiple ways. We present the Douglas-Rachford (DR) algorithm for computing the CE solution and prove the convergence of this algorithm under conditions that include convex MAP reconstruction as a special case. While the DR algorithm represents just one approach to solving the CE equations, we show that the DR algorithm is a generalization of the ADMM algorithm used for the Plug-and-Play method, and we demonstrate that the DR algorithm can have faster convergence than ADMM in some cases. We conclude with some examples of the use of DR/CE on experimental problems, including tomographic reconstruction for X-ray CT and super-resolution interpolation.
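
As a toy illustration of solving equilibrium equations rather than an optimization problem, here is a minimal Douglas-Rachford sketch with two agents: a proximal map for a quadratic data-fidelity term and a soft-threshold "denoiser" standing in for a plug-and-play prior. All data and parameters are made up for illustration; this is not the paper's implementation.

    import numpy as np

    y = np.array([1.0, -2.0, 0.3, 4.0])   # toy measurements (assumed)
    sigma2 = 0.5                           # fidelity weight (assumed)

    def F(v):
        # proximal map of the data-fidelity term f(x) = ||x - y||^2 / (2 * sigma2)
        return (y + sigma2 * v) / (1.0 + sigma2)

    def G(v):
        # soft-threshold "denoiser" with threshold 0.2 (stand-in prior agent)
        return np.sign(v) * np.maximum(np.abs(v) - 0.2, 0.0)

    z = np.zeros_like(y)
    for _ in range(200):
        x = F(z)               # agent 1 pulls the iterate toward the data
        w = G(2.0 * x - z)     # agent 2 pulls it toward "plausible" signals
        z = z + (w - x)        # Douglas-Rachford update of the auxiliary variable
    print(F(z))                # approximate consensus-equilibrium point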

Suhas Diggavi

University of California at Los Angeles

An information theoretic approach to nanopore sequencing

 

While DNA sequencing technology has undergone a major revolution, the read lengths afforded by most current-generation sequencing technologies remain small (in the range of hundreds of DNA base pairs). These short reads are then stitched together with algorithms that exploit the overlap between the reads in order to reconstruct the DNA. Long repeated regions of the DNA greater than this short read length, which are common, are not resolvable with this technology, requiring sequencers capable of accurate long reads. Nanopore sequencing promises to address this problem by increasing the read lengths by orders of magnitude (up to 10K-100K bases). However, nanopore sequencers built to date have high error rates, which, if unresolved, could limit their applications. There are many algorithmic challenges in realizing this potential, due to the many transformations between the DNA nucleotides and the nanopore current. Impairments in nanopore sequencers include inter-symbol interference (4 or more bases affect each observation), random channel variations, insertions/deletions, and noise. In this talk we develop signal-level mathematical models for the nanopore channel, which allow us to derive information theoretic bounds on its decoding capability. We will apply these to some experimental nanopore sequencer data to develop a preliminary understanding of the trade-offs in their performance.
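
To make the channel impairments concrete, here is a toy signal-level simulation in the spirit of such models; the k-mer current table, noise level, and deletion rate are our assumptions, not the talk's fitted parameters.

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    bases = "ACGT"
    k = 4  # roughly 4 bases influence each current observation (inter-symbol interference)

    # assumed: one mean current level per k-mer, drawn at random
    level = {"".join(km): rng.normal() for km in itertools.product(bases, repeat=k)}

    seq = "".join(rng.choice(list(bases), size=60))
    ideal = [level[seq[i:i + k]] for i in range(len(seq) - k + 1)]

    noisy = [x + 0.3 * rng.normal() for x in ideal]        # measurement noise (assumed std)
    observed = [x for x in noisy if rng.random() > 0.05]   # random deletions (assumed rate)
    print(len(ideal), len(observed))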

This talk is joint work with Wei Mao, Sreeram Kannan and Jens Gundlach.

P. R. Kumar

Texas A&M University

The prices of packets and watts: Optimal control of decentralized systems

 

We examine two problems of contemporary interest, both involving the optimal operation of distributed stochastic dynamic systems, one in power systems and another in communication networks.

In power systems, we address the problem faced by an Independent System Operator which has to optimally utilize a mix of fossil-fuel-based and renewable generators, controllable and uncontrollable loads, prosumers, and storage services, so as to maximize social welfare, without knowing details of any of the entities involved. We revisit general equilibrium theory, addressing the complexity raised by dynamic stochastic agents with privacy concerns. We show how optimality can be attained through stochastic prices discovered by iterative tâtonnement mechanisms.
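
A one-good caricature of tâtonnement price discovery appears below; the supply and demand curves are made up, and the actual mechanisms in the talk handle dynamic stochastic agents with privacy constraints.

    def demand(p):
        return max(10.0 - 2.0 * p, 0.0)   # aggregate demand curve (assumed)

    def supply(p):
        return 3.0 * p                     # aggregate supply curve (assumed)

    p = 1.0
    for _ in range(500):
        p += 0.01 * (demand(p) - supply(p))   # raise the price under excess demand
    print(p)   # approaches the market-clearing price p = 2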

In communication networks, we address the problem of how to optimally schedule multiple flows with hard end-to-end deadlines over a network of unreliable links with average power constraints so as to maximize their throughput. We exhibit an exactly optimal policy for this distributed system that employs easily determined prices to schedule link transmissions, in a distributed fashion, throughout the entire network.
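
In the same spirit, the flavor of a price-based link decision can be sketched as follows; this is purely illustrative, and the exactly optimal policy in the talk is derived from a decomposition of the underlying decision problem.

    def transmit(value, success_prob, energy, price):
        # transmit only if the expected deadline-constrained throughput reward
        # outweighs the transmission energy priced at the link's multiplier
        return value * success_prob >= price * energy

    print(transmit(value=1.0, success_prob=0.8, energy=0.5, price=1.2))  # True: 0.8 >= 0.6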

[Joint work with Rahul Singh and Le Xie].

Muriel Medard

Massachusetts Institute of Technology

Coding for computation

 

We overview some recent results in distributed functional compression, where we compress for functions rather than for recovering the original data. These results present a natural extension of network source coding when the primary purpose is to recover functions of data. We investigate some implications for caching and security, and overview open problems in the area.
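
A classical data point behind this program (our example, not necessarily one from the talk) is the Körner-Marton problem: two encoders observe correlated binary sources $X$ and $Y$, and the decoder wants only $Z = X \oplus Y$. Each encoder can send the syndrome $HX$ (respectively $HY$) of the same good linear code for $Z$; the decoder recovers $Z$ from $HX \oplus HY = HZ$, achieving sum rate $2H(X \oplus Y)$, which can be far below the rate $H(X, Y)$ needed to recover the original data.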

Eytan Modiano

Massachusetts Institute of Technology

A New Look at Optimal Control of Wireless Networks


We address the problem of throughput-optimal packet dissemination in wireless networks with an arbitrary mix of unicast, broadcast, multicast and anycast traffic. We start with a review of the seminal work of Tassiulas and Ephremides on optimal scheduling and routing of unicast traffic, i.e., the famous backpressure algorithm. The backpressure algorithm maximizes network throughput, but suffers from high implementation complexity, and poor delay performance due to packets looping inside the network. Moreover, backpressure routing is limited to unicast traffic, and cannot be used for broadcast or multicast traffic.
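
For readers new to backpressure, one scheduling step looks roughly as follows; the queue values and the one-link-per-slot interference model are assumptions made for this toy sketch.

    # per-node, per-commodity queue backlogs (toy numbers)
    Q = {"a": {"1": 5, "2": 1}, "b": {"1": 2, "2": 4}, "c": {"1": 0, "2": 0}}
    links = [("a", "b"), ("b", "c"), ("a", "c")]

    def best_commodity(i, j):
        # commodity with the largest backlog differential across link (i, j)
        c = max(Q[i], key=lambda c: Q[i][c] - Q[j][c])
        return c, Q[i][c] - Q[j][c]

    # assumed interference model: only one link may be active per slot,
    # so the max-weight activation reduces to picking the heaviest link
    i, j = max(links, key=lambda l: best_commodity(*l)[1])
    c, w = best_commodity(i, j)
    if w > 0:                  # serve a packet only if the pressure is positive
        Q[i][c] -= 1
        Q[j][c] += 1
    print(Q)
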
We will describe a new online dynamic policy, called Universal Max-Weight (UMW), which solves the above network flow problems simultaneously and efficiently. To the best of our knowledge, UMW is the first throughput-optimal algorithm for solving the generalized network-flow problem. When specialized to the unicast setting, the UMW policy yields a throughput-optimal, loop-free, routing and link-scheduling policy. Extensive simulation results show that the proposed UMW policy incurs substantially smaller delays as compared to backpressure.

Alon Orlitsky

University of California at San Diego

Everywhere Optimal Probability Estimation

 


Estimating distributions over large alphabets is a fundamental machine-learning tenet. Yet no method is known to estimate all distributions well. For example, add-constant estimators are nearly minimax optimal but often perform poorly in practice, while practical estimators such as absolute discounting, Jelinek-Mercer, and Good-Turing are not known to be near optimal for essentially any distribution. We describe universally near-optimal probability estimators. For every discrete distribution, they are provably nearly the best in the following two competitive ways. First, they estimate every distribution nearly as well as the best estimator designed with prior knowledge of the distribution up to a permutation. Second, they estimate every distribution nearly as well as the best estimator designed with prior knowledge of the exact distribution but, like all natural estimators, restricted to assign the same probability to all symbols appearing the same number of times. Based on joint work with Ananda Suresh.
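
To fix ideas, the two baselines mentioned above can be computed on a toy sample as follows; this is a minimal sketch, and the near-optimal estimators of the talk are more involved.

    from collections import Counter

    sample = list("abracadabra")
    n = len(sample)
    counts = Counter(sample)          # a:5, b:2, r:2, c:1, d:1

    # add-constant (add-beta) estimator over an assumed alphabet of size A
    beta, A = 0.5, 26
    def add_constant(x):
        return (counts[x] + beta) / (n + beta * A)

    # Good-Turing: mass assigned to all symbols seen r times is ~ (r+1) N_{r+1} / n
    N = Counter(counts.values())      # N[r] = number of distinct symbols seen r times
    def good_turing_mass(r):
        return (r + 1) * N.get(r + 1, 0) / n

    print(add_constant("z"))          # probability of one specific unseen symbol
    print(good_turing_mass(0))        # total unseen mass: 2/11 for this sample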
 

Pablo Parrilo

Massachusetts Institute of Technology

Graph Structure in Polynomial Systems: Chordal Networks

 

The sparsity structure of a system of polynomial equations or an optimization problem can be naturally described by a graph summarizing the interactions among the decision variables. It is natural to wonder whether the structure of this graph might help in computational algebraic geometry tasks (e.g., in solving the system). In this talk we will provide an introduction to this area, focused on the key notions of chordality and treewidth, which are of great importance in areas such as numerical linear algebra, database theory, constraint satisfaction, and graphical models. In particular, we will discuss “chordal networks”, a novel representation of structured polynomial systems that provides a computationally convenient decomposition of a polynomial ideal into simpler (triangular) polynomial sets, while maintaining its underlying graphical structure. As we will illustrate through examples from different application domains, algorithms based on chordal networks can significantly outperform existing techniques. Based on joint work with Diego Cifuentes (MIT).
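
A toy instance of the phenomenon (our example, not one from the talk): the system

    x_1 x_2 - 1 = 0, \qquad x_2 x_3 - 1 = 0, \qquad x_3 x_4 - 1 = 0

has an interaction graph that is a path on $x_1, \dots, x_4$, hence chordal with treewidth 1. A chordal network can represent the solution set by chaining small triangular sets along an elimination order, one per clique, rather than computing a monolithic Gröbner basis whose size may blow up.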

Yannis Paschalidis

Boston University

Inverse Equilibrium Problems and Price-of-Anarchy Estimation in Transportation Networks

 

Equilibrium modeling is common in a variety of fields such as game theory, transportation science, and systems biology. The inputs for these models, however, are often difficult to estimate, while their outputs, i.e., the equilibria they are meant to describe, are often directly observable. By combining ideas from inverse optimization with the theory of variational inequalities, we develop an efficient, data-driven technique for estimating the parameters of these models from observed equilibria. A distinguishing feature of our approach is that it supports both parametric and nonparametric estimation by leveraging ideas from statistical learning.
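
Schematically, in our notation: an equilibrium is a solution of the variational inequality $\mathrm{VI}(F_\theta, K)$, i.e., a point $x^* \in K$ with $F_\theta(x^*)^\top (x - x^*) \ge 0$ for all $x \in K$. The inverse problem fits the parameters $\theta$ so that each observed equilibrium $\hat{x}$ is approximately such a solution, for instance by minimizing the smallest $\epsilon \ge 0$ such that $F_\theta(\hat{x})^\top (x - \hat{x}) \ge -\epsilon$ for all $x \in K$.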

We apply this general framework to transportation networks. Using real traffic data from the Boston area, we estimate origin-destination flow demand matrices and the per-road cost (congestion) functions drivers implicitly use for route selection. Given this information, one can formulate and solve a system-optimum problem to identify socially optimal flows for the transportation network. The ratio of total latency under a user-optimal policy to that under a system-optimal policy is the so-called Price of Anarchy (POA), quantifying the efficiency loss of selfish actions compared to socially optimal ones. We find that the POA can be quite substantial, sometimes exceeding 2, suggesting that there is scope for control actions to steer the equilibrium to a socially optimal one. We will discuss what some of these actions may be and how to prioritize interventions.
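
The classical Pigou example shows the POA computation on the smallest possible network; this toy sketch has nothing to do with the Boston data.

    import numpy as np

    # two parallel roads, unit demand: latencies l1(x) = 1 and l2(x) = x
    def total_latency(x2):
        return (1.0 - x2) * 1.0 + x2 * x2

    ue = total_latency(1.0)   # user equilibrium: everyone takes road 2, total cost 1.0
    so = min(total_latency(x) for x in np.linspace(0.0, 1.0, 10001))  # 0.75 at x2 = 0.5
    print(ue / so)            # price of anarchy = 4/3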

Tomaso Poggio

Massachusetts Institute of Technology

When Can Deep Networks Avoid the Curse of Dimensionality?

 

A mathematical theory of deep networks, and of why they work as well as they do, is now emerging. I will review some recent theoretical results on the approximation power of deep networks, including conditions under which they can be exponentially better than shallow learning. A class of deep convolutional networks represents an important special case of these conditions, though weight sharing is not the main reason for their exponential advantage. Implications of a few key theorems are discussed, together with new results and conjectures on generalization properties of deep networks trained with Stochastic Gradient Descent.
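
One representative statement of this flavor, paraphrased from the speaker's published work (see the original papers for precise conditions): for a compositional target function built from two-variable constituents with $m$ bounded derivatives arranged in a binary tree, e.g. $f(x_1, \dots, x_8) = h_3(h_{21}(h_{11}(x_1, x_2), h_{12}(x_3, x_4)), h_{22}(h_{13}(x_5, x_6), h_{14}(x_7, x_8)))$, a generic shallow network needs on the order of $\epsilon^{-n/m}$ units to reach accuracy $\epsilon$, while a deep network matching the tree needs only $O((n-1)\,\epsilon^{-2/m})$, which is the exponential gap referred to above.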
 

Kameshwar Poolla

University of California at Berkeley

The Sharing Economy for the Smart Grid

 

The sharing economy. It is all the rage. Going on vacation? Rent out your home for extra income! Have space in your car? Pick up passengers for extra income! Companies such as AirBnB, VRBO, Lyft, and Uber have disrupted housing and transportation sectors. Their innovative business models are based on resource sharing that leverage underutilized infrastructure. Are there compelling sharing economy opportunities in the electricity sector? What products can be shared in tomorrow's Smart Grid?

We begin by exploring sharing economy opportunities in the electricity sector. We discuss regulatory and technical challenges to these opportunities. We then study the specific problem of a collection of firms sharing their electricity storage. We show that the investment decisions of the firms form a Nash equilibrium which supports the social welfare. We discuss control technology platforms necessary for the physical exchange of power, and market platforms necessary to trade electricity storage.

Finally, we explore the promise of trading excess PV generation in a sharing economy. We argue that this approach encourages investment in renewables, without imposing unsustainable tariff structures such as net-metering. We suggest that a location-based solar subsidy policy can maximize the social welfare of PV producers.

R. Srikant

University of Illinois at Urbana-Champaign

Approximate Graph Matching

 

We consider an abstraction of the network deanonymization problem, where the goal is to infer the node identities in an anonymized graph by observing the node identities and the topology of a correlated graph. More precisely, the goal is to label the nodes of the anonymized graph so that the adjacency matrix of the anonymized graph matches the adjacency matrix of the correlated graph as closely as possible. We will review prior results on this problem for the case of Erdős–Rényi graphs, and present some new results for stochastic block models. Joint work with Joseph Lubars.
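
In symbols, with $A$ and $B$ the adjacency matrices of the anonymized and correlated graphs, the matching sought is

    \hat{\pi} \;=\; \arg\max_{\pi \in S_n} \sum_{i,j} A_{ij} B_{\pi(i)\pi(j)},

equivalently $\min_P \|A - P B P^\top\|_F^2$ over permutation matrices $P$: a quadratic assignment problem, which is why approximate and structure-exploiting methods are of interest.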

Leandros Tassiulas

Yale University

Cooperation and competition on exchange of network connectivity services

 

The rapid increase of mobile data poses new challenges to wireless network designers, who are exploring alternative means of wireless connectivity by exploiting the proliferation of networks in unlicensed parts of the spectrum as well as of handheld devices with multiple radio interfaces. We will present schemes for the creation of user-provided networks, where a mobile user may gain network access when another user with a direct network connection is willing to relay its traffic, received through a device-to-device link between the users. We will focus on incentive mechanisms that facilitate the creation of such user-provided networks in a way that all participants gain in terms of both access capacity and energy consumption.