
Commit 138a338 (1 parent: c932920)
Author: admin
Commit message: FINAL DRAFT

File tree: 5 files changed (+39, -30 lines)


Chapters/chap5_concl.tex (+22, -16)
@@ -1,29 +1,35 @@
 \chapter{Conclusions}
-In Chapter \ref{sync_and_dist_sys} we said with Palamidessi that no uniform encoding of the synchronous \picalc\ into the asynchronous \picalc preserving reasonable semantics exists. This is a very strong result. Weakening either of the requirements that Palamidessi assumes seems like it might produce an encoding not rigorous enough to study.
+In the last chapter we surveyed Palamidessi's separation results, namely that no uniform encoding of the synchronous \picalc\ into the asynchronous \picalc\ that preserves a reasonable semantics exists.
+This is a very strong result.
+Palamidessi's requirements of uniformity and reasonability both seem like natural properties of any language we'd like to consider.
 For this reason, the fully expressive synchronous \picalc\ seems like a better candidate for formal study than the asynchronous \picalc.
 
-However, we need still consider the implementation of a synchronous \picalc.
-On distributed systems, we have only asynchronous sending available to us.
-Hence, it also seems useful to study the asynchronous \picalc\ since it models these systems more accurately than a synchronous model.
-For the study of distributed systems, rather than showing the asynchronous to be not worth our time, Palamidessi's separation result raises the question of whether we should be considering \emph{synchronous} calculi.
+However, we must still consider the implementation of the \picalc.
+Asynchronous message passing is far easier to implement on distributed systems.
+Hence, it also seems useful to study the asynchronous \picalc\ since it models these systems more naturally than a synchronous model.
+For the study of distributed systems in particular, rather than showing the asynchronous calculus to be not worth our time, Palamidessi's separation result raises the question of whether we should be considering \emph{synchronous} calculi.
 
 Ideally, we'd like the best of both worlds. The expressiveness of the synchronous \picalc\ allows us to solve a large class of problems much more easily and clearly.
-We saw just how useful the synchronous \picalc\ can be for expressing distributed systems in our extended mobile phone network example in the Chapter \ref{Introduction}.
+We saw just how useful the synchronous \picalc\ can be for expressing systems in the mobile phone network example of Chapter \ref{Introduction}.
 We could have modeled this system in the asynchronous \picalc, but it would have involved a convoluted mess of acknowledgement channels just to express the necessary ordering of events in the system.
-Hence, the last chapter looked at some of the more implementation-minded encodings of the synchronous \picalc\ in the asynchronous \picalc, and to what extent we need to relax Palamidessi's requirements to allow these encodings.
+Hence, in the last chapter we looked at some of the more implementation-minded encodings of the synchronous \picalc\ in the asynchronous \picalc.
+In the process, we saw the extent to which we need to relax Palamidessi's requirements to allow these encodings.
 
-The creators of Pict, the Join-calculus, and other implementations based on the \picalc\ all decided to have their primitives support only asynchronous communication, while synchronous communication is made available overtop of this via a library or higher-level language.
-This these greatly simplifies implementation, resulting in a cleaner, more efficient core language.
-The summation operator in particular is difficult and expensive to fully simulate.
-In the implementation of Pict, for example, David Turner notes \cite{turn96} that ``the additional costs imposed by summation are unacceptable.''.
+The creators of Pict, the Join-calculus, and other implementations based on the \picalc\ all decided to have their primitives support only asynchronous communication.
+This greatly simplifies implementation, resulting in a cleaner, more efficient language.
+The summation operator in particular is difficult and expensive to fully implement.
+In the implementation of Pict, for example, David Turner notes \cite{turn96} that ``the additional costs imposed by summation are unacceptable.''
 Turner goes on to say that \emph{essential} uses of summation are infrequent in practice.
+When summation is needed in Pict, it is made available by a library that implements it using the asynchronous primitives of the Pict language.
 
 Speaking in an interview on developing the \picalc, Robin Milner notes \cite{miln03}:
 \begin{quote}
 That was to me the challenge: picking communication primitives which could be understood at a reasonably high level as well as in the way these systems are implemented at a very low level...There's a subtle change from the Turing-like question of what are the fundamental, smallest sets of primitives that you can find to understand computation...as we move towards mobility... we are in a terrific tension between (a) finding a small set of primitives and (b) modeling the real world accurately.
 \end{quote}
-This tension is quite evident in the efforts of process algebraists to find the `right' calculus for modeling distributed systems.
-While the synchronous \picalc\ is more elegant and fundamental, actual implementations must commit to asynchronous communication as their primitives.
-Hence, which we choose as a model depends in part on our goals.
-In any case, it is evident that by limiting ourselves to smaller calculi, many useful new concepts and structures arise in order to solve the problems posed by asynchronous communication.
-While these structures might not belong in the `smallest set of primitives', they are useful for bringing the power of the \picalc\ to a model that more closely resembles the implementation of distributed systems.
+This tension is quite evident in the efforts of process algebraists to find the `right' calculus for modeling distributed systems. The synchronous and asynchronous $\pi$-calculi are not the only process algebras that have been proposed. Recently, the addition of \emph{locations} or \emph{domains} to \picalc-like languages has garnered considerable attention. Intuitively, locations are sites for computation, which means that each process resides at a particular location. In most of these algebras, processes can move from one location to another via \emph{migration}. Some examples of location-enabled calculi are the $d\pi$-calculus of Hennessy \cite{henn07}, the Join-calculus \cite{fourn00}, and the Nomadic Pict language \cite{wojci99}. A broad and high-level overview of these and other calculi is given in \cite{cast02}. Locations are a natural construct for distributed and mobile systems, which are often implemented over several computational settings with computation constantly moving between them. Locations are also helpful in the construction of languages focusing on efficiency, fault-tolerance, and security, since they enable a language to represent locations that differ in their resources, reliability, and access privileges. Locations are thus of great use for modeling distributed systems. However, they would certainly not be numbered among the `smallest set of primitives' needed for computation.
+
+We saw this same tension in our comparison of the synchronous and asynchronous $\pi$-calculi.
+Which calculus we choose as a model depends in part on our goals.
+The synchronous \picalc\ is more elegant and fundamental, and is deserving of the continuing attention it has been given.
+However, actual implementations must commit to asynchronous communication as their primitives.
+By limiting ourselves to the primitives of the asynchronous \picalc, we may yet discover useful new concepts and structures that solve the problems posed by distributed and mobile systems.
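The ``convoluted mess of acknowledgement channels'' that this chapter alludes to can be made concrete. The following is a minimal sketch in the style of Boudol's well-known encoding of synchronous communication into the asynchronous \picalc; the semantic brackets and the exact clause shapes below are a reconstruction for illustration, not text taken from the thesis itself:

```latex
% Sketch only (assumes amsmath, and stmaryrd for \llbracket/\rrbracket).
% Boudol-style encoding of a synchronous send/receive pair using two
% fresh acknowledgement channels z and w.
\begin{align*}
  \llbracket \bar{x}y.P \rrbracket &=
    (\nu z)\bigl(\, \bar{x}z \mid z(w).(\bar{w}y \mid \llbracket P \rrbracket) \,\bigr)\\
  \llbracket x(y).Q \rrbracket &=
    x(z).(\nu w)\bigl(\, \bar{z}w \mid w(y).\llbracket Q \rrbracket \,\bigr)
\end{align*}
```

The sender first emits a private channel $z$ and waits on it for the receiver's reply channel $w$; only then is the payload $y$ delivered. This recovers the ordering guarantees of the synchronous calculus at the cost of two extra asynchronous message exchanges per communication.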

Chapters/front_matter.tex (+3, -1)
@@ -8,4 +8,6 @@ \section*{A Word About The Format of This Thesis}
 All references in this pdf are hyper-textual (clickable).
 
 \section*{Acknowledgments}
-I want to thank my hamsters, Boris Becker, and this bottle of Merlot.
+Thanks first and foremost go to my advisor, Jim Fix, for his insights, guidance, and willingness to explore the many systems and ideas that led to the creation of this thesis. None of this could have happened without his help. I also thank my professors and colleagues for their willingness to read versions of this thesis and provide feedback on my ideas. Finally, a huge portion of this thesis relies directly on the work of Robin Milner and the many others who have expanded on what he set in motion. To the extent that anything novel is presented in this thesis, it relies heavily on the great minds that have inspired and challenged my thinking about distributed computing.
+
+I would also like to thank my friends and family for their continuing support, compassion, and understanding. In particular I thank my parents, without whom neither my thesis nor the education preceding it would have been possible.

thesis.pdf (2.88 KB): binary file not shown.

thesis.tex (+3, -2)
@@ -16,15 +16,15 @@
 
 %DIFFERING BEHAVIOR FOR PDF VS PRINT OUTPUT
 \newcounter{outputmode}
-\setcounter{outputmode}{0} %0 for pdf, 1 for print
+\setcounter{outputmode}{1} %0 for pdf, 1 for print
 \ifthenelse{\equal{\value{outputmode}}{0}}{\usepackage[bookmarks,unicode, colorlinks, linkcolor= darkblue, citecolor= darkblue, pdftitle={The $pi$-calculus}, pdfauthor={William Henderson}]{hyperref}}{\usepackage[bookmarks, colorlinks=true, linkcolor=black, citecolor= black, pdftitle={The $pi$-calculus}, pdfauthor={William Henderson}]{hyperref}}
 \ifthenelse{\equal{\value{outputmode}}{0}}{\setlength{\evensidemargin}{0.3in}\setlength{\oddsidemargin}{0.3in}}{}
 
 \titleformat{\section}[hang]{\sffamily\bfseries}
 {\Large\thesection}{12pt}{\Large}[{\titlerule[0.5pt]}]
 
 
-\title{The \picalc}
+\title{Distributed and Mobile Systems in The \Picalc}
 \author{William Henderson}
 % see http://library.reed.edu/help/thesisformat.html for formatting reqs
 % The month and year that you submit your FINAL draft TO THE LIBRARY (May or December)
@@ -45,6 +45,7 @@
 \input{Chapters/front_matter}
 \tableofcontents
 \input{Chapters/abstract}
+\input{Chapters/dedication}
 %dedication?
 
 \mainmatter % here the regular arabic numbering starts

thesis.toc (+11, -11)
@@ -11,14 +11,14 @@
 \contentsline {section}{\numberline {2.3}Reduction Semantics}{13}{section.2.3}
 \contentsline {section}{\numberline {2.4}Action Semantics}{15}{section.2.4}
 \contentsline {section}{\numberline {2.5}Extended Example: Memory Cells}{19}{section.2.5}
-\contentsline {chapter}{Chapter 3: The Synchronous $\pi $-Calculus}{24}{chapter.3}
-\contentsline {section}{\numberline {3.1}Syntax and Equivalence}{25}{section.3.1}
-\contentsline {section}{\numberline {3.2}Semantics}{26}{section.3.2}
-\contentsline {section}{\numberline {3.3}Extended Example: Leader Elections}{28}{section.3.3}
-\contentsline {chapter}{Chapter 4: Synchronicity and Distributed Systems}{30}{chapter.4}
-\contentsline {section}{\numberline {4.1}Separation Results}{30}{section.4.1}
-\contentsline {section}{\numberline {4.2}Encoding Choice}{32}{section.4.2}
-\contentsline {section}{\numberline {4.3}A `Bakery' Algorithm}{34}{section.4.3}
-\contentsline {chapter}{Chapter 5: Conclusions}{37}{chapter.5}
-\contentsline {chapter}{References}{39}{chapter*.4}
-\contentsline {chapter}{Index}{41}{chapter*.4}
+\contentsline {chapter}{Chapter 3: The Synchronous $\pi $-Calculus}{25}{chapter.3}
+\contentsline {section}{\numberline {3.1}Syntax and Equivalence}{26}{section.3.1}
+\contentsline {section}{\numberline {3.2}Semantics}{27}{section.3.2}
+\contentsline {section}{\numberline {3.3}Extended Example: Leader Elections}{29}{section.3.3}
+\contentsline {chapter}{Chapter 4: Synchronicity and Distributed Systems}{31}{chapter.4}
+\contentsline {section}{\numberline {4.1}Separation Results}{31}{section.4.1}
+\contentsline {section}{\numberline {4.2}Encoding Choice}{33}{section.4.2}
+\contentsline {section}{\numberline {4.3}A `Bakery' Algorithm}{35}{section.4.3}
+\contentsline {chapter}{Chapter 5: Conclusions}{39}{chapter.5}
+\contentsline {chapter}{References}{41}{chapter*.4}
+\contentsline {chapter}{Index}{43}{chapter*.4}
