The DICG20 workshop is co-located with ACM/IFIP Middleware 2020, which takes place online on December 8, 2020.
This workshop addresses distributed infrastructures that enable human interactions and economic activity in general, with a focus on the common good. Daily life is transitioning to digital infrastructures, including friendships, education, employment, healthcare, finances, family connections, and more. These infrastructures can contribute to the common good, enabling us to work together to improve the wellbeing of people in our society and the wider world.
Private ownership of infrastructures does not seem to solve the traditional problems of the tragedy of the commons: pollution (spam and bot networks on social media), over-exhaustion of resources (net neutrality), and fairness (the gig economy). Privatization of digital commons also introduces the potential for monopolistic abuse, such as stifled innovation, price discrimination, and distorted market knowledge discovery. Within this workshop, we aim to explore viable alternatives to 'winner-takes-all' platform ecosystems. The failure of market mechanisms to address these issues suggests that such infrastructures could be treated as commons. We recognize the promising avenue of research built on Nobel laureate Elinor Ostrom's idea that the commons is a third way to organize complex human cooperation, beyond markets and governmental regulation.
Scientific challenges include, but are not limited to: the tragedy of the commons in such shared-resource systems, fake identities and Sybil attacks, the robot economy, trustworthiness in general, self-organizing machine learning, market infrastructures in a cashless society, and governance issues in decentralized systems.
This workshop focuses on the tools, frameworks, and algorithms that support the common good in a distributed environment. Both theoretical work and experimental approaches are welcome. Reproducibility, open-source software, and public datasets are encouraged. Each submission must clearly contribute to the middleware community, facilitating the development of applications by providing higher-level abstractions for better programmability, performance, scalability, and security.
The topics of interest include, but are not limited to:
Full papers can have a maximum length of 6 pages in the standard 10pt ACM SIGPLAN format. The page limit includes figures, tables, and references. All submitted papers will be judged through double-blind reviewing. Submissions will be handled through HotCRP.
All accepted papers will appear in the Middleware 2020 companion proceedings, which will be available in the ACM Digital Library prior to the workshop. At least one author of each paper must register for the workshop and present the paper. You can register for the conference through the Middleware 2020 registration page.
Pēteris Zilgalvis is the Head of Unit for Digital Innovation and Blockchain in the Digital Single Market Directorate in DG CONNECT and is the Co-Chair of the European Commission FinTech Task Force. He was the Visiting EU Fellow at St. Antony's College, University of Oxford in 2013-14, where he was an Associate of the Political Economy of Financial Markets Programme. From 1997 to 2005, he was Deputy Head of the Bioethics Department of the Council of Europe, in its Directorate General of Legal Affairs. In addition, he has held various positions in the Latvian civil service (Ministry of Foreign Affairs, Ministry of Environment).
He was Senior Environmental Law Advisor to the World Bank/Russian Federation Environmental Management Project and was Regional Environmental Specialist for the Baltic Countries at the World Bank. He has been a member of the California State Bar since 1991, and completed his J.D. at the University of Southern California, his B.A. in Political Science cum laude at UCLA, and the High Potentials Leadership Program at Harvard Business School.
This paper analyses the use of blockchain technology to support the governance of common-pool resources, as studied by Elinor Ostrom. It argues that the technological guarantees of blockchain technology, in terms of ex-ante automation and ex-post verification, can replace the traditional requirements of monitoring and sanctioning. Despite its own limitations and challenges, this novel approach to governance could provide new opportunities for experimentation in the context of common-pool resources.
Preventing the abuse of resources is a crucial requirement in shared-resource systems. This concern can be addressed through a centralized gatekeeper, yet that enables manipulation by the gatekeeper itself. We present ConTrib, a decentralized mechanism for tracking resource usage across different shared-resource systems. In ConTrib, participants maintain a personal ledger with tamper-proof records. A record describes a resource consumption or contribution and links to other records. Fraud, i.e., maintaining multiple conflicting copies of a personal ledger, is detected by the users themselves through the continuous exchange of records and by validating their consistency against known ones. We implement ConTrib and run experiments. Our evaluation with up to 1,000 instances reveals that fraud can be detected within 22 seconds and with moderate bandwidth usage. To demonstrate the applicability of our work, we deploy ConTrib in a Tor-like overlay and show how resource abuse by free-riders is effectively deterred. This longitudinal, large-scale trial has resulted in over 137 million records, created by more than 86,000 volunteers.
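The core data structure in this abstract, a personal ledger of hash-linked records with fork detection, can be illustrated with a small sketch. All names and fields below are hypothetical, assuming SHA-256 hash links and signed integer amounts for contributions and consumptions; this is not the ConTrib implementation itself.

```python
import hashlib
import json

def record_hash(record):
    """Deterministic hash of a record's contents."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class PersonalLedger:
    """A tamper-evident personal log: each record links to its predecessor,
    so no record can be altered without breaking the chain."""
    def __init__(self, peer_id):
        self.peer_id = peer_id
        self.records = []

    def append(self, amount):
        prev = record_hash(self.records[-1]) if self.records else "genesis"
        record = {
            "peer": self.peer_id,
            "seq": len(self.records),
            "amount": amount,       # positive = contribution, negative = consumption
            "prev_hash": prev,
        }
        self.records.append(record)
        return record

def detect_fork(r1, r2):
    """Two records from the same peer with the same sequence number but
    different contents prove that the peer keeps conflicting ledger copies."""
    return (r1["peer"] == r2["peer"] and r1["seq"] == r2["seq"]
            and record_hash(r1) != record_hash(r2))
```

Because any two honest records occupy distinct sequence numbers, peers exchanging records can detect fraud purely by comparing what they receive against what they already know, without a central gatekeeper.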
In many regions of the world, nation-states enforce Internet censorship policies that prevent unrestricted access to information and services by their citizens. Over the years, many censorship circumvention tools have been proposed which, however, either require the deployment of dedicated infrastructure within trusted ISPs or are vulnerable to state-of-the-art traffic analysis techniques. To fill this gap, we propose to build a practical censorship-circumvention service that exhibits strong resistance against traffic analysis attacks. Relying on a recent proposal for creating covert channels through WebRTC streams, we discuss the design of a distributed system named Censorship-Resistant Overlay Network (CRON). CRON aims at offering users located in censored regions a set of services that allow them to locate proxies positioned in the free Internet region and set up secure covert tunnels for accessing arbitrary sites on the Internet. We present the key challenges and explore the solutions in making CRON robust against state-level attacks.
Smartphones offer a natural platform for building decentralized systems for the common good. A very important problem in such systems is understanding the limitations of building a peer-to-peer (P2P) overlay network, given that today's networking infrastructure is designed with centralized services in mind. We performed measurements on smartphones over several years and collected large amounts of data about, among other things, P2P connection success. Here, we train models of P2P connection success using machine learning, based on several features that are observable by the devices. We argue that connection success is a non-trivial function of many such features. Besides this, the predictive models are also rather dynamic, and a good model can perform rather badly if it is based on data that is more than a year old. The degree distribution of the P2P network based on this model has an interesting structure. We can identify two modes that roughly correspond to "very closed" and "average" nodes, and a rather long tail that contains relatively open nodes. Combined with device measurement traces, our model allows us to perform realistic simulations of very large overlay networks. This enables a more informed design of decentralized applications.
For decentralized learning algorithms, communication efficiency is a central issue. On the one hand, good machine learning models require more and more parameters. On the other hand, transferring data via P2P channels is relatively costly due to bandwidth and reliability issues. Here, we propose a novel compression mechanism for P2P machine learning that is based on the application of stateful codecs over P2P links. In addition, we also rely on transfer learning for extra compression: we train a relatively small model on top of a fixed, high-quality pre-trained feature set. We demonstrate these contributions through an experimental analysis over a real smartphone trace.
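The idea of a stateful codec over a P2P link can be sketched as delta encoding against a model state shared by both endpoints: only the parameter coordinates that changed cross the link. This is a minimal hypothetical illustration, not the codecs used in the paper.

```python
class DeltaCodec:
    """Stateful codec for one direction of a P2P link: sender and receiver
    each keep the last synchronized model, so only deltas are transmitted."""
    def __init__(self, size):
        self.state = [0.0] * size   # last model both ends agree on

    def encode(self, model, threshold=0.0):
        """Emit only the coordinates that changed by more than `threshold`
        (a simple sparsification), then advance the local state."""
        delta = {i: m - s for i, (m, s) in enumerate(zip(model, self.state))
                 if abs(m - s) > threshold}
        for i, d in delta.items():
            self.state[i] += d
        return delta

    def decode(self, delta):
        """Apply a received delta to reconstruct the sender's model."""
        for i, d in delta.items():
            self.state[i] += d
        return list(self.state)
```

Because the codec state persists across messages, later transmissions shrink to the few coordinates that actually changed, which is what makes stateful codecs attractive on constrained P2P channels.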
Middleware for building social applications, in which the infrastructure is provided by the participants, is currently developed by open-source communities. Among those, Secure-Scuttlebutt has pioneered the use of replicated *authenticated single-writer append-only logs*, i.e., chains of ordered immutable events specific to each participant, replicated by gossip algorithms that are driven by social signals, to build eventually-consistent social applications. The use of persistent append-only logs removes parameters that traditionally need to be tuned for gossiping. We present two gossip models that can be used for replication: a new *open* model, simpler than the current SSB implementation, that works best in small and trusted groups; and the *transitive-interest* model practically deployed by SSB, that scales to thousands of participants and is spam- and Sybil-resistant. We also present the limitations of both to motivate further research.
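Replication of single-writer append-only logs can be sketched as a frontier exchange: because each log is ordered and immutable, a per-author event count fully describes what a peer holds, so a gossip round needs no tuning parameters. The sketch below uses hypothetical names and is not the SSB wire protocol.

```python
class Peer:
    """Replicates single-writer append-only logs by exchanging frontiers."""
    def __init__(self, peer_id):
        self.peer_id = peer_id
        self.logs = {peer_id: []}   # author -> ordered list of immutable events

    def append(self, event):
        self.logs[self.peer_id].append(event)

    def frontier(self):
        """How many events of each author this peer holds."""
        return {author: len(log) for author, log in self.logs.items()}

    def missing_for(self, remote_frontier):
        """The log suffixes the remote peer lacks, per author."""
        out = {}
        for author, log in self.logs.items():
            have = remote_frontier.get(author, 0)
            if have < len(log):
                out[author] = log[have:]
        return out

    def ingest(self, updates):
        for author, events in updates.items():
            self.logs.setdefault(author, []).extend(events)

def gossip(a, b):
    """One symmetric gossip round: both sides converge on the union."""
    fa, fb = a.frontier(), b.frontier()
    b.ingest(a.missing_for(fb))
    a.ingest(b.missing_for(fa))
```

Since logs only ever grow and have a single writer, a plain event count per author is a complete state summary; restricting which authors appear in the exchanged frontiers is one way to express the open versus transitive-interest distinction.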
Successful classification of good or bad behavior in the digital domain is currently limited to central governance, as can be seen with trading platforms, search engines, and news feeds. We explore and consolidate existing work on decentralized reputation systems to form a common denominator for what makes a reputation system successful when applied without a centralized reputation authority, formalized in 7 axioms and 3 postulates. Reputation must start from nothing and always reward performed work, increasing as work is performed and decreasing as it is consumed. However, it is impossible for nodes to perform work in a purely synchronous, attack-proof work model, so real systems must necessarily relax this model. We show how the relaxations of performing parallel work, allowing unconsumed work, and seeding well-known identities with work satisfy our model. Our formalization allows constraint-driven design of decentralized reputation mechanisms.
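The work-based reputation rule described in this abstract can be sketched in a few lines. This is a hypothetical reading of the axioms, not the paper's formalization: reputation is a net work balance that starts at zero for unknown identities.

```python
class ReputationLedger:
    """Toy reputation: starts from nothing, rises with performed work,
    falls with consumed work."""
    def __init__(self):
        self.balances = {}   # node_id -> net work balance

    def perform(self, node, amount):
        """Reward work performed for others."""
        assert amount > 0
        self.balances[node] = self.balances.get(node, 0) + amount

    def consume(self, node, amount):
        """Lower reputation as work is consumed."""
        assert amount > 0
        self.balances[node] = self.balances.get(node, 0) - amount

    def reputation(self, node):
        # Unknown nodes score zero, so minting fresh Sybil identities
        # yields no advantage over an identity that has done no work.
        return self.balances.get(node, 0)
```

The "start from nothing" rule is what blunts Sybil attacks in this sketch; the relaxations mentioned in the abstract (parallel work, unconsumed work, seeded identities) would correspond to loosening when and how `perform` may be credited.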
Dependence of our society on digital infrastructures is growing daily, confronting us with an urgent task of building ethical and democratic alternatives to monopolistic big-tech platforms. We call upon the scientific community to put our talents to this challenge by creating decentralised infrastructures for trust-based economic and social cooperation.
We empirically demonstrate that a public infrastructure to establish trust between peers in decentralized networks is possible at significant scale. Our work is based on over 15 years of improving our distributed systems which were used by more than a million people.
We present six stringent criteria for designing trustworthy infrastructure, called the zero-server architecture. Adhering to these principles, we designed a novel trustworthy networking infrastructure, called P2P-Apps. It enables smartphone apps to communicate without servers, by forming a scalable overlay that uses our generic mechanism for building trust between peers, Trustchain. P2P-Apps is generic and can be expanded to serve as an alternative to centralized infrastructure owned by Big Tech.
Due to the recent developments around COVID-19, DICG 2020 will be held as a virtual event. The workshop is free and public. However, you need to register (free of charge) for the Middleware conference using this link.