計算機網路特論 Special Topics on Computer Networks

  1. Course code: 259028
  2. Course title: 計算機網路特論 Special Topics on Computer Networks
  3. Instructor: 吳坤熹
  4. Open to: master's and doctoral students
  5. Credits: 3
  6. Hours per week: 3
  7. Prerequisites: None (undergraduate students must first complete Computer Networks)
  8. Course objectives: Ten papers of special significance in the history of computer networking are selected, so that students can broaden their horizons by reading these classics.
  9. Assessment: Oral presentations (55%), class participation (15%), midterm exam (15%), final exam (15%)
  10. Textbook: None
  11. Key references: See the course web page at http://Course.ipv6.club.tw/NetworkHistory/
  12. Course format:
    1. In the first four weeks, the instructor gives an overview of the basic architecture and evolution of computer networks.
    2. In each subsequent week, three students give oral presentations on the selected paper.
      • The first student focuses on the communication protocols related to the paper.
      • The second student focuses on the technical content of the paper.
      • The third student focuses on follow-up research that appeared after the paper was published.
    3. All other students each prepare a one-slide summary explaining what they find valuable about the paper (where it could be applied) or what they find lacking (what could be improved).
  13. Course content:
    1. Overview of Computer Networks
    2. "A Minimum Delay Routing Algorithm Using Distributed Computation," [Gallager 1977]. In this paper, the multipath routing problem (of determining, at each node, the fraction of traffic destined for a given destination that should be routed over each of the node’s outgoing links) is formulated as an optimization problem. An iterative, distributed algorithm in which marginal delay information is passed to upstream nodes, which then readjust their routing fractions, is shown to converge to minimize the overall average cost (e.g., delay) in the network. This paper (as well as [Kelly 1998] below) is a nice example of how network protocols (e.g., routing, rate control) can be naturally derived from well-posed optimization problems. (A compact restatement of the optimization formulation appears in item 12 below.)
    3. "The Design Philosophy of the DARPA Internet Protocols," [Clark 1988]. This paper provides a thoughtful retrospective view of the goals and design principles of the Internet architecture and its protocols. It has been a favorite among students in my networking classes, and paired with [Molinero-Fernandez 2003] has made for many lively and interesting class discussions.
    4. "A Calculus for Network Delay, Part I: Network Elements in Isolation; Part II: Network Analysis," [Cruz 1991]. During the 1990s, there was considerable foundational research on providing quality-of-service guarantees for flows that are multiplexed within the network. This paper describes an elegant "calculus" that provides provable worst-case (delay) bounds on per-session, end-end performance. Many important works followed this seminal work; a nice survey is [LeBoudec 2001]. (A single-element bound in this style is sketched in item 12 below.)
    5. "A generalized processor sharing approach to flow control in integrated services networks," [Parekh 1993]. In many ways a companion to [Cruz 1991], this two-part paper demonstrates how provable per-node and end-end per-session performance bounds can be guaranteed, given a weighted fair-queueing discipline at each node. (The single-node bound is recalled in item 12 below.)
    6. "Equivalent capacity and its application to bandwidth allocation in high speed networks," [Guerin 1991]. The notion of effective bandwidth, an approximate characterization of the queueing behavior of a session when multiplexed with others, was developed by numerous researchers throughout the 1990s (see, e.g., [Kelly 1996]). This early paper introduced me to the idea, and sparked my interest in the area. (The general effective-bandwidth form is given in item 12 below.)
    7. "On the self-similar nature of Ethernet traffic (extended version)," [Leland 1994]. While the notions of long-range dependence, self-similarity, and heavy-tailed distributions are now a standard part of the lexicon of those interested in traffic characterization and descriptive network models, this paper introduced these ideas widely, launching many subsequent research efforts that have taken such an approach towards modeling. (A simple variance-time estimate of the Hurst parameter is sketched in item 12 below.)
    8. "Sharing the cost of multicast trees: an axiomatic analysis," [Herzog 1997]. Axiomatic methods, in which one poses a desired set of system properties or behaviors and then develops a protocol that meets these properties (or proves an impossibility result – that there is no protocol that meets the requirements), are a well-known technique in fields such as mathematical economics and social welfare theory. This paper was an elegant application of this set of tools in the networking domain. (A toy cost-sharing rule in this spirit appears in item 12 below.)
    9. "Rate control in communication networks: shadow prices, proportional fairness and stability," [Kelly 1998]. This paper formulates the rate control (congestion control) problem as a problem of allocating bandwidth to flows so as to optimize overall system "utility," showing that Jacobson’s TCP congestion control protocol (developed 10 years earlier using tremendous engineering insight) can be naturally interpreted as a distributed algorithm that iteratively solves this global optimization problem. (The optimization problem and the primal dynamics are sketched in item 12 below.)
    10. "Multicast-based Inference of Network-Internal Loss Characteristics," [Caceres 1999]. Many research efforts in network measurement through the mid-to-late 1990s were descriptive in nature – taking active or passive measurements at various points in the network, and interpreting the observed performance (e.g., packet delay, packet loss, aggregate traffic mix, or throughput). This paper elegantly used statistical methods (maximum likelihood estimation) together with end-to-end multicast measurements to infer the (unseen) loss characteristics of the network interior between the measurement endpoints. Inference techniques have since become an important and widely used part of the measurement toolkit. (The two-receiver estimator is worked out in item 12 below.)
    11. "Internet indirection infrastructure," [Stoica 2004]. It’s been said (in a quote often attributed to Butler Lampson) that nearly every problem in computer science can be solved by adding another level of indirection. I had always thought that this quote applied to data structures and algorithms. This paper, however, opened my eyes to how indirection can be used in an elegant and clean distributed network architecture for providing a variety of overlay services. (A toy version of its trigger-based rendezvous is sketched in item 12 below.)
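    12. Supplementary sketches for selected papers:
      • [Gallager 1977] A compact restatement of the underlying optimization problem, in paraphrased notation rather than the paper's own symbols:

        ```latex
        % \phi_{ik}(j): fraction of node i's traffic for destination k forwarded to neighbor j
        % f_l: total flow on link l;  D_l(f_l): convex, increasing link delay cost
        \[
          \min_{\phi}\ \sum_{l} D_l(f_l)
          \quad\text{s.t.}\quad
          \phi_{ik}(j) \ge 0,\quad \sum_{j}\phi_{ik}(j) = 1,\quad
          \text{flow conservation at every node.}
        \]
        % Optimality, informally: node i routes traffic for k only over neighbors j that minimize
        % the marginal cost D'_{ij}(f_{ij}) + \partial D_T/\partial r_j(k); the second term is
        % neighbor j's marginal delay to k, the quantity reported upstream by the algorithm.
        ```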
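      • [Cruz 1991] A single-element bound in this style, stated here in the later service-curve notation surveyed in [LeBoudec 2001] rather than Cruz's original notation:

        ```latex
        % Assumptions: arrivals constrained by the curve \alpha(t) = \sigma + \rho t (a (\sigma,\rho)-regulated flow);
        % the element offers the rate-latency service curve \beta(t) = R\,(t - T)^{+} with R \ge \rho.
        \[
          \text{delay} \;\le\; T + \frac{\sigma}{R},
          \qquad
          \text{backlog} \;\le\; \sigma + \rho T .
        \]
        % These are the horizontal and vertical deviations between \alpha and \beta; in this
        % framework, concatenating network elements corresponds to composing service curves.
        ```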
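      • [Parekh 1993] The single-node GPS result, informally restated (symbols are mine, not the paper's):

        ```latex
        % Assumptions: session i is leaky-bucket constrained by (\sigma_i, \rho_i) and
        % GPS guarantees it a service rate g_i \ge \rho_i at the node.
        \[
          D_i \;\le\; \frac{\sigma_i}{g_i},
          \qquad
          Q_i \;\le\; \sigma_i ,
        \]
        % i.e., delay and backlog are bounded by the burst size and the guaranteed rate alone.
        % The packetized (PGPS / weighted fair queueing) and multi-hop analyses add per-hop
        % correction terms proportional to the maximum packet length.
        ```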
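      • [Guerin 1991] The general effective-bandwidth form, in the style surveyed in [Kelly 1996] (not Guerin et al.'s specific equivalent-capacity formula):

        ```latex
        % X: the (stationary) amount of work a source brings per unit time;  s > 0: a space parameter.
        \[
          \alpha(s) \;=\; \frac{1}{s}\,\log \mathbb{E}\!\left[e^{\,s X}\right].
        \]
        % Roughly, a set of sources is admissible on a link of capacity C when \sum_j \alpha_j(s) \le C,
        % with s chosen to match the QoS target (e.g., a buffer-overflow probability on the order of e^{-sB}).
        ```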
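      • [Leland 1994] A standard diagnostic in this literature is the variance-time plot: for second-order self-similar traffic with Hurst parameter H, the variance of the m-aggregated series decays like m^(2H-2). A minimal sketch of that estimate (my own illustration, not code from the paper), assuming a numpy array of per-interval packet or byte counts:

        ```python
        import numpy as np

        def hurst_variance_time(counts, block_sizes=(1, 2, 4, 8, 16, 32, 64, 128)):
            """Estimate the Hurst parameter H via a variance-time plot:
            Var(X^(m)) ~ m^(2H-2) for self-similar traffic."""
            counts = np.asarray(counts, dtype=float)
            ms, variances = [], []
            for m in block_sizes:
                n_blocks = len(counts) // m
                if n_blocks < 2:
                    break
                # Average the series over non-overlapping blocks of size m.
                agg = counts[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
                ms.append(m)
                variances.append(agg.var())
            # Fit log Var(X^(m)) = beta * log m + const; then H = 1 + beta / 2.
            beta = np.polyfit(np.log(ms), np.log(variances), 1)[0]
            return 1.0 + beta / 2.0

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            iid = rng.poisson(10, size=100_000)      # short-range-dependent reference series
            print(round(hurst_variance_time(iid), 2))  # expect a value near 0.5
        ```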
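      • [Herzog 1997] To make the setting concrete, a toy allocation rule of the kind studied in this line of work (my own illustration, not the paper's mechanism): split each link's cost equally among the receivers downstream of it.

        ```python
        def equal_link_split(tree, link_cost, receivers, root="root"):
            """Toy multicast cost sharing: each link's cost is divided equally among the
            receivers reachable through it.  `tree` maps node -> list of children,
            `link_cost` maps (parent, child) -> cost, `receivers` is a set of leaves."""
            shares = {r: 0.0 for r in receivers}

            def downstream(node):
                if node in receivers:
                    return {node}
                found = set()
                for child in tree.get(node, []):
                    found |= downstream(child)
                return found

            def walk(node):
                for child in tree.get(node, []):
                    below = downstream(child)
                    if below:
                        for r in below:
                            shares[r] += link_cost[(node, child)] / len(below)
                    walk(child)

            walk(root)
            return shares

        if __name__ == "__main__":
            tree = {"root": ["a"], "a": ["r1", "r2"]}
            cost = {("root", "a"): 6.0, ("a", "r1"): 2.0, ("a", "r2"): 4.0}
            print(equal_link_split(tree, cost, {"r1", "r2"}))
            # r1 pays 5.0 and r2 pays 7.0: the shared link's cost is split, private links are paid in full.
        ```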
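      • [Kelly 1998] The system problem and the paper's "primal" dynamics, in slightly simplified notation:

        ```latex
        % U_s: utility of flow s;  x_s: its rate;  c_l: capacity of link l;  "l \in s": link l is on s's route.
        \[
          \max_{x \ge 0}\ \sum_{s} U_s(x_s)
          \quad\text{s.t.}\quad
          \sum_{s:\, l \in s} x_s \;\le\; c_l \ \ \text{for every link } l .
        \]
        % With U_s(x_s) = w_s \log x_s the optimum is the weighted proportionally fair allocation.
        % A distributed "primal" algorithm of the form
        \[
          \frac{d}{dt}x_s(t) \;=\; \kappa\Big(w_s - x_s(t)\sum_{l \in s}\mu_l(t)\Big),
          \qquad
          \mu_l(t) \;=\; p_l\Big(\sum_{s:\, l \in s} x_s(t)\Big),
        \]
        % with p_l(\cdot) a link congestion-price function, drives the rates toward this optimum;
        % window-based TCP congestion control can be read as an approximation of such dynamics.
        ```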
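      • [Caceres 1999] The core idea on the smallest topology: a source multicasts probes over one shared link that then branches to two receivers (notation below is mine):

        ```latex
        % \alpha_0: pass probability of the shared link;  \alpha_1, \alpha_2: pass probabilities of the
        % two branch links;  \gamma_1, \gamma_2: fractions of probes seen at receivers 1 and 2;
        % \gamma_{12}: fraction of probes seen at both receivers.  Assuming independent losses,
        \[
          \gamma_1 = \alpha_0\alpha_1,\quad
          \gamma_2 = \alpha_0\alpha_2,\quad
          \gamma_{12} = \alpha_0\alpha_1\alpha_2
          \;\Longrightarrow\;
          \hat{\alpha}_0 = \frac{\hat{\gamma}_1\,\hat{\gamma}_2}{\hat{\gamma}_{12}},\quad
          \hat{\alpha}_1 = \frac{\hat{\gamma}_{12}}{\hat{\gamma}_2},\quad
          \hat{\alpha}_2 = \frac{\hat{\gamma}_{12}}{\hat{\gamma}_1}.
        \]
        % The paper develops the corresponding maximum-likelihood estimator for general multicast
        % trees and analyzes its consistency and accuracy.
        ```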
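      • [Stoica 2004] The rendezvous abstraction at the heart of i3, in toy form (my own sketch; the real system stores triggers in a distributed hash table and supports richer identifier matching): receivers insert triggers (identifier, address), and senders address packets to identifiers rather than hosts.

        ```python
        class RendezvousPoint:
            """Toy model of i3-style indirection: packets are sent to identifiers,
            and triggers map identifiers to current receiver addresses."""

            def __init__(self):
                self.triggers = {}        # identifier -> set of receiver addresses

            def insert_trigger(self, ident, addr):
                self.triggers.setdefault(ident, set()).add(addr)

            def remove_trigger(self, ident, addr):
                self.triggers.get(ident, set()).discard(addr)

            def send(self, ident, payload):
                # The sender never learns receiver addresses; mobility, multicast, and
                # anycast become trigger-management operations at the rendezvous point.
                return [(addr, payload) for addr in self.triggers.get(ident, ())]

        if __name__ == "__main__":
            i3 = RendezvousPoint()
            i3.insert_trigger("id-42", "10.0.0.7:4000")    # a receiver registers a trigger
            print(i3.send("id-42", b"hello"))              # the sender addresses the identifier only
            i3.insert_trigger("id-42", "10.0.0.9:4000")    # a second trigger gives simple multicast
            print(sorted(i3.send("id-42", b"hello")))
        ```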
  14. Departmental educational objectives addressed
    1. In line with national economic development, cultivate engineering professionals who meet the needs of the information industry
    2. In line with national scientific and technological development, cultivate talent with the potential for forward-looking information technology research and development
    3. In line with the global trend toward sustainable development, cultivate technology professionals with an international outlook, engineering ethics, humanistic concern, and a sense of social responsibility
  15. Core student competencies developed
    1. The ability to apply fundamental mathematical knowledge of computer science to discovering, analyzing, and interpreting data
    2. The ability to read technical documents and academic papers in the information field in English
    3. The ability to work in a team and to independently carry out academic research in computer science and engineering
    4. The ability to write academic papers