\documentstyle[11pt,amssymbols]{article}
\topmargin -0.5in % a (add 1 extra inch)
\textheight 9.0in % d
\leftmargin 0.9in
\oddsidemargin 0.1in
\textwidth 6.6in
\parskip 0.050in
\parindent 0.700in
\input{psfig}
\def\B{{\cal B}}
\newtheorem{definition}{Definition}
\newtheorem{theorem}[definition]{Theorem}
\newtheorem{example}[definition]{Example}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\title{\bf EE 290Q Topics in Communication Networks\\ Lecture Notes: 2}
\author{Remco Litjens}
\date{January 18, 1996}
\begin{document}
\maketitle
\bibliographystyle{alpha}
\pagestyle{plain}
\pagenumbering{arabic}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\footnotesize
\subsection*{Preface}
The lecture focused on {\em worst-case resource allocation} in a single-node
network. In most of the lecture, a specific traffic regulator (rate controller)
at the source will be assumed ({\em leaky bucket}), and the analysis will be
done for the {\em generalized processor sharing (GPS)} scheduling policy. Both
concepts will be defined in this report. The material is based on an article by
Parekh and Gallager \cite{Parekh} and an article by Cruz \cite{Cruz}.}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
In ATM networks, where different traffic streams (users) share resources (links
and switches), a contract is negotiated in the call admission procedure. Such
a contract specifies the quality of service (QoS) that the network guarantees as
well as the network's requirements with respect to the offered traffic characteristics.
The latter is generally implemented and policed using traffic regulators such as
leaky buckets which shape the traffic entering the network.
In the studied paper, Parekh and Gallager \cite{Parekh} show that the use of
generalized processor sharing (GPS), when combined with leaky bucket admission
control, allows the network to make a wide range of worst-case performance
guarantees on throughput and delay. The analysis is done for a single-server
GPS system. In the sequel to this paper (Parekh and Gallager \cite{Parekh_2}),
the results are extended to arbitrary topology networks with multiple nodes
(see also Lecture Notes: 3). In order to simplify the model and analysis, packet
(or cell) streams are assumed to be fluids, rather than discrete entities.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Objectives}
The network performance can be described (from the users' point of view) in terms of various
measures of the quality of service that is offered to users, such as loss rates, delays
(average, variability, maximum), reliability, security, etc. In contrast, the network
management is more likely to identify revenue as its main performance criterion. Here
we focus on the following measures:
\begin{itemize}
\item {\em end-to-end delay,} consisting of processing, propagation and most
significant: queueing delay;
\item {\em queue lengths} in the node's buffer;
\item {\em delay jitter,} the difference in delay between the slowest and fastest packet
in a session; the delay jitter determines the buffer size needed at the receiving
end in order to make synchronous playback possible in real-time applications.
\end{itemize}
The total delay the user suffers is the sum of the end-to-end delay in the network and the
delay that occurs before the packets (fluid) leave the leaky bucket regulator. Here we will
not consider the delay in the leaky bucket, but our goal is to find upper bounds on the
queueing delays in the network's single node, for each individual connection, given a
scheduling policy and the number of sessions (users), based on a worst-case approach. The
determined upper bound is shown to be tight in the sense that a traffic pattern exists that
realizes this worst delay.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The Leaky Bucket - definition}
\begin{figure}[ht]
\centerline{\ \psfig{figure=bucket.epsi,width=5in}}
\caption{\em A leaky bucket.}
\label{f:bucket}
\end{figure}
Figure \ref{f:bucket} depicts the leaky bucket traffic regulator.\footnote{The figure, grabbed
from Parekh and Gallager \cite{Parekh}, shows the additional constraint that fluid may not be
released at a rate greater than $C_i$ (with $C_i > \rho_i$) for each session $i, \, i = 1,\cdots,N.$
This constraint is not adopted in this report, nor was it assumed in the corresponding lecture.}
Tokens (or permits) are generated at a fixed rate, $\rho,$ and packets can be released into the
network only after removing the required number of tokens from the token bucket. The bucket
size is denoted $\sigma.$ Note that a leaky bucket rate controller regulates both the traffic
bursts (limited to $\sigma$) and the long-term average traffic rate (limited to $\rho$). In
accordance with the assumption made for packets, we will also assume tokens to be a fluid.
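The regulator's behavior can be sketched in a few lines of simulation code. The sketch below is illustrative only (the parameters $\sigma,$ $\rho$ and the discretization step are hypothetical, not from the lecture); it shows how the token bucket limits both the burst size and the long-term rate:

```python
def leaky_bucket(arrivals, sigma, rho, dt=0.01):
    """Fluid leaky-bucket sketch: tokens accumulate at rate rho up to
    bucket size sigma; fluid leaves only against available tokens, so
    the output over any interval of length tau is at most sigma + rho*tau.

    arrivals: one arrival rate per time step of length dt.
    Returns the list of departure rates, one per step."""
    tokens = sigma           # worst case: bucket full at t = 0
    backlog = 0.0            # fluid waiting for tokens
    out = []
    for a in arrivals:
        tokens = min(sigma, tokens + rho * dt)   # token refill, capped
        backlog += a * dt
        released = min(backlog, tokens)          # spend tokens on fluid
        tokens -= released
        backlog -= released
        out.append(released / dt)
    return out

# a greedy source: a huge burst at t = 0, so the cumulative output
# traces the burstiness constraint sigma + rho * t
rates = leaky_bucket([1e9] + [0] * 199, sigma=5.0, rho=1.0)
```

Running the greedy input confirms the constraint: a burst of size $\sigma$ leaves immediately, followed by a steady rate $\rho.$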
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Model}
Consider a single-node network with $N$ connections sharing the available transmission rate
of $C$ Mbps. Denote with $A_i(s,t)$ the amount of fluid that was released into the network
by session $i$ during time interval $[s,t].$ Note that $A_i(s,t)$ depends on both the
characteristics of the traffic as generated at the source and the parameters of the leaky
bucket (or other regulator) that shapes this original traffic stream. Analogously, let
$S_i(s,t)$ be the amount of fluid of session $i$ that is served by the node in time interval
$[s,t].$ $S_i(s,t)$ depends on the workload offered by both session $i$ ($A_i(0,t)$) and the
competing sessions ($A_j(0,t), \, j \not = i$) as well as on the resource allocation policy
that is used. The value of $C$ limits the total amount of fluid processed in time interval
$[s,t]: \; \sum_{i=1}^N S_i(s,t) \le C \cdot (t-s).$
We are interested in the following performance parameters: $Q_i(\alpha) := A_i(0,\alpha)
- S_i(0,\alpha),$ defined as the queue length (or backlog) of session $i$ at time $\alpha,$
and $D_i(\tau) \, := \, \inf \, \{ \, t \ge \tau \, : \, S_i(0,t) = A_i(0,\tau) \, \} -
\tau,$ the delay of the unit of session $i$'s fluid that arrived at time $\tau.$
\begin{figure}[ht]
\centerline{\ \psfig{figure=graph5.epsi,width=5in}}
\caption{\em $A_i(0,t), \; S_i(0,t), \; Q_i(\alpha)$ and $D_i(\tau).$}
\label{f:graph5}
\end{figure}
Figure \ref{f:graph5} illustrates the introduced concepts and definitions. Obviously,
$A_i(0,t) \ge S_i(0,t)$ for all time $t,$ since the amount serviced cannot exceed the
amount requesting service. Furthermore, the backlog $Q_i(\alpha)$ is given by the vertical
difference between the two curves, and the delay $D_i(\tau)$ by the horizontal difference.
So whenever the $A_i(0,t)$ and $S_i(0,t)$ curves touch, $A_i(0,t) = S_i(0,t),$ so $Q_i(t)
= D_i(t) = 0$ and session $i$'s queue is empty.
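The definitions of $Q_i(\alpha)$ and $D_i(\tau)$ translate directly into code. The following small sketch uses hypothetical cumulative curves sampled on a time grid (the grid resolution limits the accuracy of the horizontal gap):

```python
import bisect

# Hypothetical sampled curves A_i(0,t) and S_i(0,t) on a common grid;
# both are nondecreasing, with A >= S pointwise (illustrative numbers).
times = [0, 1, 2, 3, 4, 5]
A = [0, 3, 4, 5, 6, 7]   # a burst of 3, then rate 1
S = [0, 2, 4, 5, 6, 7]   # served at rate 2 until the backlog clears

def backlog(k):
    """Q_i at grid point k: the vertical gap between the curves."""
    return A[k] - S[k]

def delay(k):
    """D_i for fluid arriving at grid point k: the horizontal gap,
    i.e. the first grid time at which S has caught up with A[k]."""
    j = bisect.bisect_left(S, A[k])
    return times[j] - times[k]
```

For instance, at $t = 1$ three units have arrived and two have been served, giving a backlog of one unit; where the curves coincide, both backlog and delay are zero.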
\subsection*{Burstiness Constraints}
A traffic regulator (such as a leaky bucket) imposes a limit on a source's burstiness. This
bound is called a {\em burstiness constraint ${\hat A}_i(\tau),$} defined as the maximum
amount of fluid a source may release into the network in $\tau$ time units. So $\forall \,
s < t, \; A_i(s,t) \le {\hat A}_i(t-s).$ In the case of the leaky bucket regulator
defined above, ${\hat A}_i(\tau) = \sigma_i \, + \, \rho_i \cdot \tau.$
\subsection*{Objectives}
Given the burstiness constraints ${\hat A}_i(\tau)$ for all sessions, we are interested in
two performance measures for each session:
\begin{itemize}
\item {\em worst-case delay:} $D_i^{\ast} := \max_{A_1,\cdots,A_N} \max_t D_i(t);$
\item {\em worst-case backlog:} $Q_i^{\ast} := \max_{A_1,\cdots,A_N} \max_t Q_i(t).$
\end{itemize}
Note that we maximize over all possible arrival patterns and over all time. Parameter
$Q_i^{\ast}$ is of crucial importance in that it determines the minimum buffer size needed
to guarantee zero loss. Clearly $D_i^{\ast}$ and $Q_i^{\ast}$ depend on all traffic patterns,
regulators and the chosen scheduling policy.
\subsection*{Service Curves}
Analogous to the burstiness constraint, bounding fluid arrivals into the network from above,
a service curve ${\hat S}_i(\tau)$ gives a lower bound for the amount of fluid served: the
server guarantees service curve ${\hat S}_i(\tau)$ to session $i$ if for every arrival pattern
$A_1, \cdots, A_N,$ and for every $t$ in a busy period, it is guaranteed that $S_i(s,t) \ge
{\hat S}_i(t-s),$ with $s$ the beginning of the busy period that includes time $t.$
It is easily seen that for each session the burstiness constraint ${\hat A}_i(\tau)$ and the
service curve ${\hat S}_i(\tau)$ define an envelope in which the curves $A_i(0,t)$ and
$S_i(0,t)$ of actual arrivals and services lie (compare figure \ref{f:graph5}). Clearly, the
maximal vertical (horizontal) difference in the envelope gives an upper bound for the
considered performance measures $D_i^{\ast}$ (worst-case delay) and $Q_i^{\ast}$ (worst-case
backlog).
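For the leaky bucket constraint ${\hat A}_i(\tau) = \sigma_i + \rho_i \cdot \tau$ combined with a rate-type service curve ${\hat S}_i(\tau) = g_i \cdot \tau$ (with $g_i > \rho_i$), the envelope yields closed-form bounds. A sketch, with illustrative numbers (the guaranteed rate $g$ is a placeholder for whatever rate the scheduler provides):

```python
def worst_case(sigma, rho, g):
    """Envelope bounds for A_hat(tau) = sigma + rho*tau versus
    S_hat(tau) = g*tau, assuming g > rho.

    The vertical gap sigma + (rho - g)*tau is maximal at tau = 0, so
    Q_star = sigma; the horizontal gap is maximal for the fluid in the
    initial burst, which waits for service to catch up: D_star = sigma/g."""
    assert g > rho, "stability: guaranteed rate must exceed token rate"
    return sigma, sigma / g

q_star, d_star = worst_case(sigma=10.0, rho=1.0, g=4.0)
# q_star = 10.0 (minimum buffer size for zero loss), d_star = 2.5
```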
Now that the model is described, we are ready to analyze the performance of a specific
resource allocation policy, called {\em generalized processor sharing (GPS),} which will
be defined in the following section.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Generalized Processor Sharing (GPS) - definition}
Generalized processor sharing (GPS) is a flow-based (and hence dynamic) scheduling discipline
that is efficient, flexible and analyzable, and that therefore seems very appropriate
for integrated services networks. GPS is work conserving and works at a fixed transmission rate,
which we denoted earlier with $C.$ By work conserving, we mean that the server must be busy
if there are packets (fluid) waiting in the system. Within each session, packets are served
according to the {\em first-come, first served (FCFS)} discipline. The policy is characterized
by positive and real parameters $\phi_1,\cdots,\phi_N,$ with $\sum_{i=1}^N \phi_i = 1,$ that
indicate the weights given to each of the sessions. A GPS server is defined as one for which
\[
{{S_i(s,t)} \over {S_j(s,t)}} \, \ge \, {{\phi_i} \over {\phi_j}}, \; \forall j = 1,\cdots,N,
\]
for any session $i$ that is continuously backlogged in interval $[s,t].$ This rule implies
that a backlogged session $i$ is assigned a transmission rate of at least $\phi_i \cdot C.$
Note that if all sessions are backlogged, equality must hold and the assigned transmission rate
is exactly $\phi_i \cdot C,$ for all $i = 1,\cdots,N.$
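As a sketch (not taken from the paper), the instantaneous GPS rate allocation can be coded as follows: sessions that are not backlogged are served at their current arrival rate, and the leftover capacity is divided over the backlogged set in proportion to the $\phi$'s. The sketch assumes the non-backlogged sessions' arrival rates stay below their guaranteed shares:

```python
def gps_rates(C, phi, backlogged, arrival_rate):
    """Instantaneous service rates under GPS (fluid-model sketch).

    C: total capacity; phi: weights summing to 1;
    backlogged: set of backlogged session indices;
    arrival_rate: current input rate per session (used only for
    sessions that are not backlogged)."""
    n = len(phi)
    rates = [0.0] * n
    spare = C
    for j in range(n):
        if j not in backlogged:
            rates[j] = arrival_rate[j]   # served as it arrives
            spare -= rates[j]
    total_phi = sum(phi[j] for j in backlogged)
    for j in backlogged:                 # split leftover by weight
        rates[j] = phi[j] / total_phi * spare
    return rates
```

Note that a backlogged session $i$ then receives at least $\phi_i \cdot C,$ and exactly $\phi_i \cdot C$ when every session is backlogged.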
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Analysis for the GPS - Leaky Bucket Case}
As seen in the previous section, GPS guarantees each backlogged session a transmission rate
of at least $\phi_i \cdot C$ which implies the service curve ${\hat S}_i(\tau) = \phi_i \cdot
C \cdot \tau$ if session $i$ is continuously backlogged in period $[0,\tau].$ Without loss
of generality, we can choose time $0$ to be the start of the busy cycle under consideration. So
no matter how other sessions misbehave, each session is protected by this guarantee. However,
Parekh and Gallager \cite{Parekh} prove that a tighter service curve exists, which follows
from the work conserving property of the GPS policy. The result is stated in the following
theorem (theorem $3$ in \cite{Parekh}):
\begin{theorem}
Consider a single-node network deploying a GPS scheduling discipline to allocate the available
$C$ Mbps transmission rate over $N$ users, each regulated by a leaky bucket controller with
parameters $(\rho_i,\sigma_i), \; i=1,\cdots,N,$ with $\sum_{i=1}^N \rho_i < C$ (stability
condition). Then, for all sessions $i, \; {\hat S}_i(\tau) = S_i^g(0,\tau),$ where
$S_i^g(0,\tau)$ is the amount of service provided when all sessions are greedy from time $0$
(worst-case scenario of arrival patterns).
\end{theorem}
In terms of the leaky bucket, session $i$ is defined to be greedy if it uses as many tokens
as possible (i.e. sends at maximum possible rate) for all times $t \ge 0.$ In the worst-case,
when the bucket is full at $t = 0,$ session $i$ releases a burst of size $\sigma_i$ at time
$0$ and a constant rate of $\rho_i$ from then on. Note that this worst-case situation is
a feasible scenario, hence the $S_i^g(0,\tau)$ bounds are tight.
Figure \ref{f:evolution} below is included to illustrate the evolution of $S_i(0,t),$ the
actual amount of fluid of session $i$ served in period $[0,t],$ where the origin represents
the beginning of a busy cycle. Let $k_n$ be the $n$-th session whose backlog is cleared and
$e_{k_n}$ the corresponding point in time. Hence in the period $[0,e_{k_1}]$ all sessions
are backlogged and session $i$ is served at rate $\phi_i \cdot C.$
After time $e_{k_1}$ session $k_1$ will need less than $\phi_{k_1} \cdot C,$ hence the
bandwidth that becomes available is distributed over the remaining $N-1$ sessions, according
to their relative weights. In particular, the rate assigned to session $i$ increases and hence
$S_i(0,t)$ becomes steeper. An analogous reasoning can be given for periods $[e_{k_1},e_{k_2}],
\; [e_{k_2},e_{k_3}],$ etc. At time $e_i,$ session $i$'s backlog is cleared and $i$ is
assigned rate $\rho_i$ thereafter, the rate at which $i$ releases packets under the greedy
strategy. (Note that in the case where leaky bucket rate controllers are used, stability
requires that $\sum_{i=1}^N \rho_i < C,$ but not necessarily $\rho_i \le \phi_i \cdot C, \;
\forall i=1,\cdots,N.$)
\begin{figure}[ht]
% GNUPLOT: LaTeX picture
\setlength{\unitlength}{0.240900pt}
\ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi
\sbox{\plotpoint}{\rule[-0.200pt]{0.400pt}{0.400pt}}%
\begin{picture}(1500,900)(0,0)
\font\gnuplot=cmr10 at 10pt
\gnuplot
\sbox{\plotpoint}{\rule[-0.200pt]{0.400pt}{0.400pt}}%
\put(220.0,113.0){\rule[-0.200pt]{292.934pt}{0.400pt}}
\put(220.0,113.0){\rule[-0.200pt]{0.400pt}{184.048pt}}
\put(220.0,113.0){\rule[-0.200pt]{4.818pt}{0.400pt}}
\put(198,113){\makebox(0,0)[r]{0}}
\put(1416.0,113.0){\rule[-0.200pt]{4.818pt}{0.400pt}}
\put(1416.0,877.0){\rule[-0.200pt]{4.818pt}{0.400pt}}
\put(220.0,113.0){\rule[-0.200pt]{0.400pt}{4.818pt}}
\put(220,68){\makebox(0,0){0}}
\put(220.0,857.0){\rule[-0.200pt]{0.400pt}{4.818pt}}
\put(463.0,113.0){\rule[-0.200pt]{0.400pt}{4.818pt}}
\put(463,68){\makebox(0,0){$e_{k_1}$}}
\put(463.0,857.0){\rule[-0.200pt]{0.400pt}{4.818pt}}
\put(829.0,113.0){\rule[-0.200pt]{0.400pt}{4.818pt}}
\put(829,68){\makebox(0,0){$e_{k_2}$}}
\put(829.0,857.0){\rule[-0.200pt]{0.400pt}{4.818pt}}
\put(950.0,113.0){\rule[-0.200pt]{0.400pt}{4.818pt}}
\put(950,68){\makebox(0,0){$e_i$}}
\put(950.0,857.0){\rule[-0.200pt]{0.400pt}{4.818pt}}
\put(220.0,113.0){\rule[-0.200pt]{292.934pt}{0.400pt}}
\put(1436.0,113.0){\rule[-0.200pt]{0.400pt}{184.048pt}}
\put(220.0,877.0){\rule[-0.200pt]{292.934pt}{0.400pt}}
\put(45,495){\makebox(0,0){$S_i(0,t)$}}
\put(828,23){\makebox(0,0){time t}}
\put(220.0,113.0){\rule[-0.200pt]{0.400pt}{184.048pt}}
%\put(1306,812){\makebox(0,0)[r]{"graph.dat"}}
%\put(1328.0,812.0){\rule[-0.200pt]{15.899pt}{0.400pt}}
\put(220,113){\usebox{\plotpoint}}
\multiput(220.00,113.58)(1.268,0.499){189}{\rule{1.112pt}{0.120pt}}
\multiput(220.00,112.17)(240.691,96.000){2}{\rule{0.556pt}{0.400pt}}
\multiput(463.00,209.58)(0.546,0.500){665}{\rule{0.537pt}{0.120pt}}
\multiput(463.00,208.17)(363.885,334.000){2}{\rule{0.269pt}{0.400pt}}
\multiput(828.58,543.00)(0.499,0.981){241}{\rule{0.120pt}{0.884pt}}
\multiput(827.17,543.00)(122.000,237.166){2}{\rule{0.400pt}{0.442pt}}
\multiput(950.00,782.58)(3.897,0.498){91}{\rule{3.198pt}{0.120pt}}
\multiput(950.00,781.17)(357.363,47.000){2}{\rule{1.599pt}{0.400pt}}
\put(328,200){\makebox(0,0){\footnotesize $\phi_i C$}}
\put(340,73){\makebox(0,0){\tiny all sessions}}
\put(340,43){\makebox(0,0){\tiny backlogged}}
\put(1128,837){\makebox(0,0){\footnotesize $\rho_i$}}
\put(1128,763){\makebox(0,0){\tiny backlog of}}
\put(1128,733){\makebox(0,0){\tiny $i$ is cleared}}
\end{picture}
\caption{\em Evolution of resource allocation within a busy cycle, under the greedy regime.}
\label{f:evolution}
\end{figure}
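The evolution in figure \ref{f:evolution} can be reproduced by a small discrete-time fluid simulation (all parameters below are illustrative). Within each step, capacity freed by a session that has just cleared its backlog is redistributed over the still-backlogged sessions, which is what makes $S_i(0,t)$ steeper after each $e_{k_n}$:

```python
def greedy_gps(C, phi, sigma, rho, T, dt=0.001):
    """All-greedy scenario: every session dumps its full bucket sigma_i
    at t = 0 and feeds at rate rho_i thereafter; capacity is divided
    GPS-fashion over the backlogged set each step. Returns S_i(0,T)."""
    n = len(phi)
    backlog = list(sigma)                 # the initial bursts
    served = [0.0] * n
    for _ in range(int(round(T / dt))):
        for i in range(n):
            backlog[i] += rho[i] * dt     # greedy input at rate rho_i
        active = {i for i in range(n) if backlog[i] > 1e-15}
        cap = C * dt
        while active and cap > 1e-15:     # redistribute freed capacity
            tp = sum(phi[i] for i in active)
            alloc = {i: phi[i] / tp * cap for i in active}
            cap = 0.0
            for i, a in alloc.items():
                s = min(backlog[i], a)
                backlog[i] -= s
                served[i] += s
                cap += a - s              # leftover if i just cleared
            active = {i for i in active if backlog[i] > 1e-15}
    return served

# e.g. C = 1, phi = (0.5, 0.5), sigma = (1, 1), rho = (0.2, 0.3):
# session 1 clears first, after which session 2 drains faster
S = greedy_gps(1.0, [0.5, 0.5], [1.0, 1.0], [0.2, 0.3], T=20.0)
```

By $T = 20$ both backlogs have long been cleared, so $S_i(0,T) \approx \sigma_i + \rho_i \cdot T$ (here $5$ and $7$), in agreement with the final slope $\rho_i$ in the figure.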
\noindent
{\bf Proof (sketch):} Suppose session $i$ is in a busy period, i.e. $i$ is backlogged, at
time $t$ and let $s < t$ be the beginning of $i$'s current busy period. Denote with $\B$
the set of all sessions that are backlogged at time $t-s,$ under the greedy regime. We compare
$S_i(s,t)$ and $S_i^g(0,t-s)$ by expressing both quantities in terms of $\B$ and show that
$S_i(s,t) \, \ge \, S_i^g(0,t-s).$ Let for session $i,$ $\sigma_i^t$ be the bit-value of tokens
left at time $t$ {\em plus} the session $i$ backlog at time $t;$ we can think of $\sigma_i^t$
as the maximum amount of session $i$ backlog at time $t^{+}.$
\noindent
Claim (lemma $10$ in \cite{Parekh}):
\begin{eqnarray*}
S_i(s,t) & \ge & {{\phi_i} \over {\sum_{j \in \B} \phi_j}} \lbrace C \cdot (t-s) \, - \,
\sum_{j \not \in \B} \left[ \sigma_j^s + \rho_j \cdot (t-s) \right] \rbrace \\
& \ge &
{{\phi_i} \over {\sum_{j \in \B} \phi_j}} \lbrace C \cdot (t-s) \, - \,
\sum_{j \not \in \B} \left[ \sigma_j + \rho_j \cdot (t-s) \right] \rbrace \, \ge \, S_i^g(0,t-s)
\end{eqnarray*}
\noindent
The first inequality states that the amount of fluid of session $i$ that is served is at least
a fraction $\phi_i \, / \, \sum_{j \in \B} \phi_j$ of the total amount of fluid that can still
be transmitted for all sessions in $\B,$ which is $C \cdot (t-s)$ minus the resources used for
all sessions not in $\B,$ which is at most $\sum_{j \not \in \B} \left[ \sigma_j^s + \rho_j \cdot
(t-s) \right].$ Similarly, the last inequality states that under a greedy regime, the amount
of fluid served for session $i$ in $[0,t-s]$ is no more than a fraction $\phi_i \, / \,
\sum_{j \in \B} \phi_j$ of $C \cdot (t-s) \, - \, \sum_{j \not \in \B} \left[ \sigma_j + \rho_j
\cdot (t-s) \right],$ wherein $\sum_{j \not \in \B} \left[ \sigma_j + \rho_j \cdot (t-s) \right]$
is the theoretically maximal amount of traffic that sessions $j \not \in \B$ can release into the network.
Finally, the fact that $\sum_{j \not \in \B} \sigma_j^s \, \le \, \sum_{j \not \in \B} \sigma_j,$
following from theorem 4 in Parekh and Gallager \cite{Parekh} and possible relabeling of sessions
(not proven here - but see example below), completes the proof. \hfill{$\Box$}
$\;$
\noindent
\begin{example}
Fix a time $t$ and let $\tau$ be the beginning of session $1$'s busy period that contains $t.$
We prove that if $\B = \{2,\cdots,N\},$ then $\sigma_1^t \le \sigma_1,$ as an example of
the general case proven in Parekh and Gallager \cite{Parekh}. The fact that $1 \not \in \B$
implies that session $1$ cannot build up a queue of packets in $[\tau,t],$ since more token fluid
will arrive in this period than packet fluid. Hence $Q_1(t) = 0$ and $\sigma_1^t$ equals the amount
of tokens left at time $t$ which is bounded by the bucket size $\sigma_1:$ $\sigma_1^t \le
\sigma_1.$ \hfill{$\Box$}
\end{example}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{thebibliography}{2}
\bibitem{Cruz}
Cruz, R.L. (1995).
Quality of Service Guarantees in Virtual Circuit Switched Networks. {\em
IEEE Journal on Selected Areas in Communications}, August 1995, Vol. 13,
No. 6, pp. 1048-1056.
\bibitem{Parekh}
Parekh, Abhay K. and Gallager, Robert G. (1993).
A Generalized Processor Sharing Approach to Flow Control in Integrated
Services Networks: the Single-Node Case. {\em IEEE/ACM Transactions on
Networking}, June 1993, Vol. 1, No. 3, pp. 344-357.
\bibitem{Parekh_2}
Parekh, Abhay K. and Gallager, Robert G. (1994).
A Generalized Processor Sharing Approach to Flow Control in Integrated
Services Networks: the Multiple Node Case. {\em IEEE/ACM Transactions on
Networking}, April 1994, Vol. 2, No. 2, pp. 137-150.
\end{thebibliography}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{document}